Infrastructure is changing. It’s nearly impossible to design a product or application without carefully planning how it’ll deploy on specific infrastructure. I should know: in running a business that automates application deployments, I deal with infrastructure every day. So here’s my take on infrastructure and how it’s evolving.
The notion of cloud computing and the dynamic datacenter has matured a lot in the past decade. First we moved from monolithic IBM mainframes to datacenters built on commodity servers. Now we’re on the journey of commoditizing and standardizing infrastructure itself.
Cloud Reality Check
A few years ago, when we envisioned what this journey would look like, we said infrastructure was a commodity, declared the CIO defunct, and predicted that platform as a service would fully abstract infrastructure. But look at the cloud reality now:
- Infrastructure is a differentiating game. Though most providers offer the same basics, like rich compute types, software-defined networking (SDN), and software-defined storage (SDS), some rate better in certain areas.
- The CIO’s role is not redundant. She leads an important DevOps cultural transformation in the enterprise.
- Platform as a service is only one part of the IaaS journey. Infrastructure itself is starting to become just like code, expressed in templates and containers.
Cloud Provider Capabilities
And the capabilities keep evolving. What’s clear is that not all providers are equal; some fare better in certain respects. To understand the nuances of their offerings, it helps to categorize their capabilities:
- On-demand API to spin up compute instances and other services.
- Programmable SDS to provision storage through the API and attach it to an instance.
- Programmable SDN to launch instances in a specific network through the API and connect them to applications across distributed services.
- Platform services to launch things like databases and load balancers that form the fabric of infrastructure as a service.
- Declarative infrastructure templates to replace numerous API calls with documents that represent the resources you need. The cloud provider translates those resource needs into API calls.
- Management traceability for visibility into what happened and who did what.
- Programmable events to do more with custom code. Hook your code into infrastructure events to orchestrate external solutions, such as an external CMDB, or to customize infrastructure by injecting your own logic on top of the provider’s services.
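To make the declarative-template idea above concrete, here is a minimal sketch of how a template document can replace a sequence of hand-written API calls. The resource types, field names, and provisioning order are invented for illustration; they are not any provider’s real API.

```python
# Sketch: a declarative template is a document describing the resources you
# need; an engine translates it into ordered API calls. All names here are
# illustrative placeholders, not a real cloud provider's API.

# Desired state, declared as data rather than as a script of API calls.
template = {
    "network": {"cidr": "10.0.0.0/16"},
    "instance": {"type": "small", "network": "network"},
    "volume": {"size_gb": 100, "attach_to": "instance"},
}

# Resources must be created in dependency order: the network before the
# instance that joins it, the instance before the volume attached to it.
PROVISION_ORDER = ["network", "instance", "volume"]


def plan_api_calls(template):
    """Translate the declarative document into a list of imperative calls."""
    calls = []
    for kind in PROVISION_ORDER:
        if kind in template:
            calls.append(("create_" + kind, template[kind]))
    return calls


for name, args in plan_api_calls(template):
    print(name, args)
```

The point is the inversion of responsibility: you state what you need, and the provider (here, the toy `plan_api_calls` engine) figures out which API calls to make and in what order.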
Perfect Blend of IaaS
For a clearer view of the cloud landscape, I further refine the providers by their cloud maturity level.
- Level 1 is a programmable on-demand API to launch compute instances and attach storage. DigitalOcean is one example.
- Level 2 is fully programmable infrastructure to get networking, storage, and compute services. Examples are Google and maybe Azure.
- Level 3 is fully programmable infrastructure with SLAs and programmable events. AWS leads here, though Azure follows closely. I expect Google to join them soon with a solution more usable than the rest.
SLAs are a key factor in determining the right cloud provider. For computing services, SLAs baseline reliable CPU performance, CPU/GPU instruction sets, and homogeneous computing units. SLAs around storage define SSD versus magnetic capabilities, guaranteed IOPS, and storage management. Network SLAs cover bandwidth, dedicated Internet paths, multiple network interface cards, network isolation, load balancers, firewalls, and more.
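One way to reason about SLA fit is to score each provider’s published guarantees against your own requirements. The sketch below does exactly that; the provider names and every number in it are made-up placeholders, not real SLA figures.

```python
# Sketch: filter providers by whether their SLA guarantees meet our minimum
# requirements. Provider names and all figures are illustrative placeholders.

requirements = {"guaranteed_iops": 3000, "bandwidth_gbps": 10, "network_isolation": True}

providers = {
    "provider_a": {"guaranteed_iops": 4000, "bandwidth_gbps": 10, "network_isolation": True},
    "provider_b": {"guaranteed_iops": 2000, "bandwidth_gbps": 25, "network_isolation": True},
}


def meets_sla(offered, required):
    """True if every required guarantee is met or exceeded."""
    for key, minimum in required.items():
        value = offered.get(key)
        # Check booleans first: in Python, bool is a subclass of int.
        if isinstance(minimum, bool):
            if value is not minimum:
                return False
        elif value is None or value < minimum:
            return False
    return True


eligible = [name for name, sla in providers.items() if meets_sla(sla, requirements)]
print(eligible)  # → ['provider_a']; provider_b misses the IOPS floor
```

In practice the hard part is not the comparison but getting honest, comparable numbers out of each provider’s SLA documents in the first place.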
Ultimately, good IaaS is a mix of cloud maturity plus SLAs. Take our case as an example: we chose Google to host Cloud Application Manager in the cloud because its SLAs beat the others. In both bandwidth and SDN capabilities, Google is more advanced and lets us configure networking with finer control.
How do you sift through the different provider offerings and choose the right IaaS? Given how complex the decision is, many people find it easier to follow those who have already made a choice.
True Infrastructure Abstraction
To ease this cloud confusion, you need tools like the deployment policies in Cloud Application Manager. A policy maps infrastructure choices: it captures the gray area between software and infrastructure, separating applications and their related components from all the infrastructure decisions. Using a policy, you can easily adapt to any infrastructure and experiment with what works and what doesn’t until you find the right infrastructure mix to complement your deployments.
The philosophy behind deployment policies is to consume capabilities and SLAs of a cloud provider without depending on any particular one. And that means power to demand infrastructure from different providers. How cool is that?
Want to Learn More About Cloud Application Manager and ElasticKube?
Cloud Application Manager is a powerful, scalable platform for deploying applications into production across any cloud infrastructure: private, public, or hosted. It provides interactive visualization to automate application provisioning, including configuration, deployment, scaling, updating, and migration of applications in real time. With two approaches to cloud orchestration, Cloud Application Manager and ElasticKube, enterprise IT and developers alike benefit from multi-cloud flexibility.
Explore ElasticKube on GitHub, or install it directly by running curl -s https://elastickube.com | bash.
Visit the Cloud Application Manager product page to learn more.