Cloud Bursting is a hot topic in cloud computing today. It’s a model in which an organization hosts business-critical services on its internal resources and, during demand spikes, consumes resources from public clouds on a pay-as-you-go basis.
Cloud Bursting Use Cases
Today only a handful of businesses face real cloud bursting challenges because of their special use cases. They run the sort of applications where bursts are driven by compute-intensive work such as image processing, scientific computing, or monthly batch calculations, or they need to run development and test environments. Since the latter typically don’t involve client data, they are not subject to strict regulatory compliance, which means you can run them on any infrastructure.
To support a strong cloud bursting model, several parts must come together:
- Shared network between the public clouds and the datacenter.
- Automated and repeatable deployments to launch to the required clouds regardless of platform differences.
- Single management console to consistently support and maintain all deployment artifacts.
- Ability to specify the amount, ratio, and priority of cloud resources the applications can consume.
- Ability to identify the load needs of an application and configure them into the tools that manage scaling.
Most cloud providers offer the first piece today, either as dedicated connections or as virtual network infrastructure.
For the second and third, a robust solution like Cloud Application Manager addresses the requirements: it enables automated, consistent deployments across different cloud platforms from a unified interface for managing deployment artifacts, with a level of built-in IT governance.
Now the question is: how do you predictably detect demand spikes so you can scale resource consumption into public clouds?
One answer is to integrate Cloud Application Manager with basic monitoring tools in the private cloud. You can achieve this in Cloud Application Manager by building a simple prediction model in a box and defining the events to trigger public cloud deployments.
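A simple prediction model like the one described above can be as basic as a moving-average threshold over recent load readings. The sketch below illustrates the idea in Python; `should_burst` and the sample load values are hypothetical, and a real integration would read metrics from your monitoring tool’s API and trigger a Cloud Application Manager deployment instead of printing a message.

```python
# Minimal sketch of a threshold-based burst trigger. The metric source
# and the deployment action are stand-ins (assumptions), not the actual
# Cloud Application Manager or monitoring-tool APIs.
from collections import deque


def should_burst(samples, threshold=0.8, window=5):
    """Predict a demand spike when the moving average of the most
    recent `window` load samples (0.0-1.0) exceeds `threshold`."""
    recent = list(samples)[-window:]
    if len(recent) < window:
        return False  # not enough data to decide yet
    return sum(recent) / window > threshold


samples = deque(maxlen=60)  # keep the last 60 load readings
for load in [0.4, 0.5, 0.9, 0.95, 0.92, 0.91, 0.9]:
    samples.append(load)
    if should_burst(samples):
        # This is where the event would trigger a public cloud deployment.
        print("spike predicted: deploy burst instances to the public cloud")
        break
```

Using a moving average rather than a single reading avoids bursting on momentary blips; tune the window and threshold to your application’s load profile.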
Implementing Cloud Bursting
Say there’s a cloud bursting use case where we want to use the public cloud. Cloud Application Manager gives us two options:
Option 1: Admin Driven
As an admin, you use a monitoring tool like New Relic or AppDynamics to perform checks on infrastructure health and load. When a monitored metric crosses a threshold, you get an alert and manually deploy additional instances to a public cloud provider of your choice. Then you use the Cloud Application Manager instance scheduler to scale back the number of deployed instances.
Option 2: Fully Automated
Here you apply the same monitoring process as in option one, but in this case an auto-scaling policy defines the minimum and maximum number of instances. When the policy triggers an alert, new instances launch, and after the demand spike subsides, the instances automatically scale back.
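The core of such a policy is a decision rule clamped to the minimum and maximum bounds. The sketch below shows one possible shape in Python; the function name, thresholds, and step size are illustrative assumptions, and in practice the policy would be configured in the scaling tool rather than coded by hand.

```python
# Minimal sketch of an auto-scaling decision with min/max instance
# bounds. Thresholds and the one-instance step are assumptions for
# illustration, not values from any specific product.
def desired_instances(current, load, *, scale_up_at=0.8, scale_down_at=0.3,
                      minimum=2, maximum=10):
    """Return the next instance count given the current load (0.0-1.0),
    clamped to the policy's minimum and maximum."""
    if load > scale_up_at:
        current += 1   # demand spike: burst out one more instance
    elif load < scale_down_at:
        current -= 1   # spike subsided: scale back in
    return max(minimum, min(maximum, current))
```

For example, `desired_instances(2, 0.9)` scales up to 3, while `desired_instances(2, 0.1)` stays at 2 because the policy never drops below its minimum; the maximum bound likewise caps how far a burst can grow.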
So what’s your use case for cloud bursting? How do you handle it today? Do you want to automate it end-to-end with the help of Cloud Application Manager? Talk to us for a demo.
Want to Learn More About Cloud Application Manager and ElasticKube?
Cloud Application Manager is a powerful, scalable platform for deploying applications into production across any cloud infrastructure: private, public, or hosted. It provides interactive visualization to automate application provisioning, including configuration, deployment, scaling, updating, and migration of applications in real time. With two approaches to cloud orchestration, Cloud Application Manager and ElasticKube, enterprise IT and developers alike can benefit from multi-cloud flexibility.
Explore ElasticKube by visiting GitHub (curl -s https://elastickube.com | bash).
Visit the Cloud Application Manager product page to learn more.