“In recent history, the basis of telephone company value has been the sharing of scarce resources — wires, switches, etc. — to create premium-priced services. Over the last few years, glass fibers have gotten clearer, lasers are faster and cheaper, and processors have become many orders of magnitude more capable and available. In other words, the scarcity assumption has disappeared, which poses a challenge to the telcos’ “Intelligent Network” model. A new type of open, flexible communications infrastructure, the “Stupid Network,” is poised to deliver increased user control, more innovation, and greater value.” — Isenberg, D. S. (1998). “The dawn of the stupid network.” ACM netWorker 2(1), 24–31.

Intelligent Applications and Dumb Infrastructure

Much has changed since the late 1990s, when the telcos essentially abandoned their drive for supremacy in the intelligent-services creation, delivery, and assurance business and took a back seat in the information services market, managing the ‘stupid network’ that merely carries those services. You only have to look at the demise of major R&D companies such as AT&T Bell Labs, Lucent, Nortel, and Alcatel, and the rise of a new generation of services platforms from Apple, Amazon, Google, Facebook, Twitter, Oracle, and Microsoft, to notice the sea change that has occurred in a short span of time. The data center has replaced the central office as the hub from which myriad voice, video, and data services are created and delivered on a global scale.

However, much the same can now be said of the data center: the basis of current-generation data center value has been the sharing of scarce and expensive resources - CPU, memory, network bandwidth, latency tolerance, and storage IOPS, throughput, and capacity - to create premium-priced applications with high quality of service (QoS) covering availability, performance, and security under compliance constraints. Over the last decade, with the arrival of commodity computing infrastructure (multi-core servers, server and network virtualization, and cheaper virtual storage) in the form of clouds, the scarcity assumption has disappeared, which poses a challenge to the current "data center" model.

If cloud providers can deliver the same QoS using shared commodity resources, economies of scale will make application creation, delivery, and QoS assurance more efficient, more scalable, and more tolerant of fluctuations in both workloads and user-experience constraints.

Cloud providers recognize this and are in a race to out-compete each other by bringing the same services to their clouds. Amazon had a first-mover advantage and has built a competitive differentiation with a variety of services that address the non-functional requirements of application QoS and decouple them from application development (functional-requirement fulfillment using computing functions, workflows, and processes); others are catching up by duplicating similar services. This has led to cloud islands and a choice between dreaded vendor lock-in and the complexity of using different clouds (private, public, or hybrid) with a plethora of tools, point solutions, and their perpetual integration costs.

Déjà vu

The story of service islands and their eventual integration through interoperability - improving efficiency through economies of scale while fostering competition - has played out before in the evolution of telephone networks, the Internet, and voice over IP (VoIP). As technology has progressed, the interoperability framework has moved from hardware solutions in telephony (SS7, STP, SCP, etc.) to pure software solutions based on virtualization. As infrastructure becomes a commodity, most cloud providers are forced to offer additional services that facilitate the migration of existing applications to the cloud, attempting to deliver the QoS that customers are accustomed to in current data centers through availability, security, mobility, and compliance zones within their own sphere of influence.

They do this by optimizing their infrastructure and hiding the complexity behind a Platform-as-a-Service (PaaS). However, as competing cloud providers offer their own differentiating PaaS, cloud consumers are left with the complexity of choice, innovation chaos, and perpetual integration cost. Telephone companies, who have gone through this exercise before, know the value of interoperable islands without proprietary lock-in. They also know that they have the global network connectivity essential to providing that interoperability - connectivity that any cloud provider must either leverage or build on its own (as Google has).

CenturyLink is Pointing to a New Direction with Cloud Agnostic Computing and Interoperable Cloud Solutions

CenturyLink, with a pedigree in both telecommunications and data centers, seems to have realized this advantage. It has gone from offering a competing cloud with features similar to any other provider's to providing interoperability among multiple clouds, using its expertise in computing (data centers and managed services) and communications (network services). It is extending application availability, security, mobility, and compliance zones with a policy-based application management framework that spans multiple clouds, using only the provisioning processes offered by the competing cloud providers.

The framework also eliminates the need to orchestrate or move virtual machine images in order to provide cloud interoperability. This removes the need for a plethora of point solutions and tools, and the need for cloud consumers to depend on different PaaS offerings for non-functional-requirement fulfillment. The application management framework that CenturyLink offers was recently shared at a Cloudwalk event in San Francisco. The technology provisions virtual machines in different clouds (including CenturyLink Cloud) and provides availability zones across clouds, to which applications (using web servers, application managers, and databases) can be migrated to meet both recovery time objectives and recovery point objectives with zero downtime.
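To make the idea of cross-cloud availability zones concrete, here is a minimal sketch of how a policy engine might select failover targets against recovery time objectives (RTO) and recovery point objectives (RPO). All class names, zone names, and thresholds below are hypothetical illustrations, not CenturyLink's or C3DNA's actual API.

```python
from dataclasses import dataclass

@dataclass
class RecoveryPolicy:
    rto_seconds: float  # max tolerable time to restore service
    rpo_seconds: float  # max tolerable window of data loss

@dataclass
class CloudZone:
    name: str
    failover_seconds: float         # measured time to bring the app up here
    replication_lag_seconds: float  # how far behind this zone's state is

def eligible_zones(policy, zones):
    """Return zones (possibly in other clouds) that satisfy
    both the RTO and the RPO of the application's policy."""
    return [z for z in zones
            if z.failover_seconds <= policy.rto_seconds
            and z.replication_lag_seconds <= policy.rpo_seconds]

# Illustrative data: three zones spread across providers.
policy = RecoveryPolicy(rto_seconds=60, rpo_seconds=5)
zones = [
    CloudZone("centurylink-us-east", failover_seconds=30, replication_lag_seconds=2),
    CloudZone("aws-us-west", failover_seconds=45, replication_lag_seconds=10),  # misses RPO
    CloudZone("azure-eu", failover_seconds=120, replication_lag_seconds=1),     # misses RTO
]
targets = eligible_zones(policy, zones)
```

The point of the sketch is that the policy, not the provider, drives placement: any cloud whose measured failover time and replication lag satisfy the objectives is a valid migration target.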

The novelty of this approach is that it seems to put stateful and stateless application components on an equal footing and eliminates the need for cloud-native computing. Applications are treated as cloud agnostic and require no changes to the application or the operating system on which they run; no changes to infrastructure provisioning processes are required either. As long as the operating system is the same in the source and target execution venues (containers, VMs, or physical servers), applications can be deployed and moved across any cloud with the framework. Cloud providers are used as mere commodity infrastructure providers on a global scale, with application interoperability and QoS assurance layered on top.
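The compatibility rule described above - the cloud and the venue kind may differ, but the operating system must match - can be sketched as a simple predicate. The type names and OS identifiers here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Venue:
    cloud: str  # e.g. "aws", "centurylink", "azure"
    kind: str   # "container", "vm", or "physical"
    os: str     # e.g. "linux/ubuntu-20.04"

def can_migrate(source: Venue, target: Venue) -> bool:
    # Cloud provider and venue kind are free to differ;
    # only the operating system has to be the same.
    return source.os == target.os

# A VM in one cloud can move to a container in another,
# because both run the same operating system.
src = Venue("aws", "vm", "linux/ubuntu-20.04")
dst = Venue("centurylink", "container", "linux/ubuntu-20.04")
```

In this model the provider becomes an interchangeable detail of the venue, which is exactly the cloud-agnostic posture the article describes.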

Are Cloud Agnostic Computing and Interoperable Clouds Finally Here?

The message to cloud consumers seems to be "instead of managing clouds with myriad cloud management platforms, start managing your applications on any cloud using a policy-based cloud application management platform."

It's worth finding out for yourself.

Learn More

Blog: Migrate Your Apps to any Cloud With Zero Downtime Using C3DNA

Solutions: Transform Your Business with Hosted Applications in the Cloud

Getting Started Knowledge Base: Getting Started with C3DNA Appliance on CenturyLink Cloud