As the company prepared to expand its offerings in Asia and North America, it needed to scale for hyper-growth. How could it achieve frequent deployments with zero downtime and minimal errors, while also growing its engineering organization? Achieving these goals required a less monolithic and more automated way to deploy its application platform. The company looked for a deployment solution that bridged multiple systems: AWS, which hosts the application platform; Jenkins for CI/CD; and SaltStack for configuration management.
In AWS, the company's application platform was historically deployed and managed as several AWS services defined in CloudFormation templates. Over time, the templates grew so long and deeply nested that they became hard to maintain; even small updates were risky because everything lived in a single, monolithic template. To support hyper-growth and make it possible to update any one service without affecting the rest, the company wanted to break the design into microservices. Now the IT operations team manages the application platform as modular, easy-to-maintain CloudFormation templates using the box model in Cloud Application Manager. CloudFormation boxes let operations engineers roll out and maintain all the application service dependencies and connect them together using bindings at deploy time.
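The case study doesn't include the templates themselves, but the monolith-to-modular split can be sketched with CloudFormation's cross-stack exports and imports, which play a role similar to the deploy-time bindings described above. Every resource and export name below is hypothetical:

```yaml
# network-stack.yaml (hypothetical): owns the VPC and exports its ID
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref AppVpc
    Export:
      Name: platform-vpc-id
---
# web-stack.yaml (hypothetical): a separate, independently updatable
# template that imports the VPC ID instead of defining the VPC itself
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier security group
      VpcId: !ImportValue platform-vpc-id
```

Because each service lives in its own small stack, one service can be updated or rolled back without touching the others.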
Hyper-growth is also about speed. To speed up production application updates, the company wanted to extend Jenkins continuous integration and delivery into continuous deployment. “Using the Cloud Application Manager API and Jenkins plugin, we are trying to achieve continuous deployment at scale. This setup allows us to pass the AWS AMIs built by Jenkins continuous integration on to Cloud Application Manager, which can deploy to EC2 via CloudFormation. Essentially, we pull down the latest AMI and update a running instance using Cloud Application Manager API calls,” the company's Lead Linux Systems Administrator reported.
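The Cloud Application Manager request schema isn't shown in the source, so as a minimal sketch, here is the kind of payload a Jenkins post-build step could construct to hand a freshly baked AMI to a deployment API. The function name and every field in the payload are illustrative assumptions, not the real API:

```python
import json


def build_instance_update(box_id: str, ami_id: str) -> str:
    """Build a hypothetical JSON payload that a Jenkins job could POST
    to a deployment API to roll a running instance onto a new AMI.
    The field names here are illustrative, not the actual
    Cloud Application Manager schema."""
    payload = {
        "box": box_id,
        "variables": [
            {"name": "ami_id", "type": "Text", "value": ami_id},
        ],
    }
    return json.dumps(payload)


# A Jenkins build step would pass along the AMI it just built:
print(build_instance_update("web-frontend", "ami-0abc1234"))
```

In a real pipeline, this payload would be sent with an authenticated HTTP request from the Jenkins job once continuous integration produces a new AMI.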
Updating the application platform across multiple testing, staging, and production environments involves many integration points, tools, and scripts. The engineering team carried a huge overhead from manual integrations and spent hours maintaining things in different places. To slash that overhead, the company used Cloud Application Manager, which centrally integrates a wide range of AWS services: EC2, Elastic Load Balancing, Auto Scaling, S3, Elastic Block Store, RDS, Redshift, SQS, Kinesis, VPC, CloudFront, and CloudWatch. As the Lead Linux Systems Administrator stated, “With Cloud Application Manager we integrate all the AWS services via API calls, which saves us from writing and maintaining integration scripts manually.”
For the company, driving collaboration between different functions is pivotal to being agile. A team of three IT operations engineers can’t do it all. Where possible, developers and QA help with operational tasks. “We truly practice the DevOps philosophy. People in development, QA, and IT ops roles work together in a complementary fashion. Cloud Application Manager access control features provide the right level of visibility for people in different roles to help us automate tasks,” the Lead Linux Systems Administrator said.
“The majority of the company's application platform stack runs on AWS EC2 as a collection of PHP web apps, an AngularJS front end, and Scala and Java services,” said the company's Engineering Manager. It's easy to see how environments across test, staging, and production can quickly grow inconsistent when versions or dependencies vary across parts of the stack. Updating and maintaining the entire stack in different places simultaneously was manual and painstaking. And to minimize application delivery errors in production, it was important to keep every environment matched to the production stack. “We reduce errors because Cloud Application Manager helps us maintain dependencies consistently with a single source of management. From one place in Cloud Application Manager, we provision the frontend and backend services,” the Engineering Manager added.
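To make the drift problem concrete, here is a small illustrative check, not part of the company's tooling, that compares an environment's service versions against a production reference and reports any mismatch. The service names and version numbers are hypothetical:

```python
def find_drift(reference: dict, environment: dict) -> dict:
    """Compare an environment's service versions against a production
    reference; report any service whose version differs or is missing."""
    drift = {}
    for service, version in reference.items():
        actual = environment.get(service)
        if actual != version:
            drift[service] = {"expected": version, "actual": actual}
    return drift


# Hypothetical version manifests for two environments:
production = {"php-web": "7.4.33", "angular-ui": "1.8.3", "scala-api": "2.13.10"}
staging = {"php-web": "7.4.33", "angular-ui": "1.7.9", "scala-api": "2.13.10"}

print(find_drift(production, staging))
# → {'angular-ui': {'expected': '1.8.3', 'actual': '1.7.9'}}
```

A single source of management, as described in the quote above, removes the need for this kind of after-the-fact reconciliation: every environment is provisioned from the same definition.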
The company has been deploying application workloads with Cloud Application Manager for some time now. Can you guess their deployment speed? Compared to 100 deployments per month before Cloud Application Manager, they now average about 700 per month. That's a seven-fold increase.
By reusing the same application templates (or boxes) from Cloud Application Manager, the company's QA and operations teams configure and update testing, staging, and production environments in minutes rather than weeks. The company's Applications Engineer said, “These environments are really useful for our QA engineers. Because they're on the same cloud infrastructure, they can take a lot of traffic from our automated tests. With autoscaling turned on, we can go on to test how key components perform under load. Having all the same libraries, services, and configuration reduces bugs, as there are no surprises when we release changes.”
Because the company manages all the AWS services for its application needs from a central place in Cloud Application Manager, IT Ops can consistently launch new or updated environments, reducing deployment errors by 50%. A central place to manage all infrastructure dependencies and scripts makes it easy to troubleshoot machine and application lifecycle states. “Thanks to Cloud Application Manager, we easily deploy infrastructure changes across our environments and ensure consistency,” said the company's Lead Linux Systems Administrator.
Cloud Application Manager's integration with Jenkins, AWS, and SaltStack enables IT operations engineers to deploy and shut down environments in AWS automatically. Updates that previously took days now take only a few minutes. The Applications Engineer added, “We power continuous integration and continuous delivery through the Cloud Application Manager Jenkins Plugin and its APIs. Next up, we're building the test confidence to take us to that level of continuous deployment.”