Providing our customers with the most reliable services on the most competitive technologies requires continuously evaluating and updating our product catalog. Over the past year, we’ve worked diligently to improve CenturyLink Cloud’s standard compute service so we can fulfill more of our customers’ cloud hosting needs and stay on par with the best of the best in the competitive cloud landscape.
Over the past year, we’ve standardized our storage infrastructure on all-flash platforms so customers can host distributed workloads on a common compute product and maximize performance. Based on the positive feedback we received about these improvements, we realized that customers no longer needed Hyperscale, our specialized all-flash storage cloud compute service.
When we launched Hyperscale in 2014, it was a new approach to enterprise storage. For businesses that needed to store and process massive amounts of data — in industries like banking, oil/gas, and healthcare — Hyperscale was an exciting opportunity to adopt more effective and nimble data collection strategies.
While a corporate data center might support hundreds of physical servers and thousands of virtual machines, Hyperscale could support thousands of physical servers and millions of virtual machines to quickly accommodate increased demands for internet-facing and back-end computing resources. It was a big deal. And our customers were crazy about it.
But times and the demand for storage continue to change. Four years ago, Hyperscale instances were available in only one data center and could support up to one terabyte (1,024 GB) of total storage. That was more than six times the previously available capacity for web-scale and distributed architectures and other big data jobs. But now our standard cloud servers can be configured with up to 4 TB of local storage. Moore’s Law at work!
After we announced in June that we would end availability of Hyperscale, we removed links from the CenturyLink Cloud control portal in July. Our infrastructure engineering teams then began a white-glove migration of existing Hyperscale servers to our standard virtual compute infrastructure. The goal was to ensure zero pain, zero downtime, and zero hassle for our customers. The migration was completed in August, with customers experiencing no gaps in service or performance.
Now, with their instances migrated to standard compute, these customers are seeing even better performance. In our testing of common storage-intensive workloads, we measured dramatic improvements in I/O performance.
“This was sort of a no-brainer. We realized that shifting Hyperscale customers over to standard compute was a win-win-win,” said Matt Schwabenbauer, Product Owner for the CenturyLink Cloud. “It means customers see improved performance and reduced costs, while CenturyLink is able to simplify our product catalog to focus our energy on delivering the best possible user experience.”
While we believe our standard compute product is a good fit for most cloud workloads, some use cases may still require specialized services. For customers with applications requiring the highest possible performance from their compute infrastructure, such as Big Data or Artificial Intelligence, our Bare Metal service offers the power of physical servers with the flexibility of virtual machines. Or, for customers who want to use a specialized database product and offload some of their application deployment and management tasks, we offer Relational Database as a Service.
Customers previously using Hyperscale virtual machines are also seeing immediate cost reductions. The per-gigabyte price for block storage attached to standard compute instances is 58 percent less than the per-gigabyte price for block storage attached to Hyperscale compute instances. Prices for regions outside the United States may vary; more information can be found in our public price catalog.
For any questions related to Hyperscale’s retirement, please reach out to our customer care team at firstname.lastname@example.org.