As hybrid cloud adoption grows, proper architecture and design of these solutions becomes critical. In the first part of this article series, we discussed the challenges any organization faces when linking public and private cloud environments. The second article outlined strategies for mitigating the network and security challenges of hybrid cloud. In this third of four articles, we will assess success strategies for application integration and system management in hybrid clouds.

Solutions to Hybrid Cloud Challenges

Data and Application Integration. Nearly every useful system is made up of data and business logic from multiple applications. Siloed, monolithic systems are fading in popularity as more dynamic systems take their place. But as you look to work with data and applications in a hybrid cloud, you need to keep a few things in mind.

  * Recognize the presence of data gravity. The concept of data gravity—a principle identified by Dave McCrory that holds that applications and services are drawn closer to large collections of data—comes into play in a hybrid cloud. Do you find yourself shuttling data back and forth over long distances? Would it make sense to move some of your large data repositories to whichever cloud most of the consuming applications are running in? Bulk data movement between on-premises and public cloud systems can get slow, so look for ways to optimize placement based on known integration points.

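To make the cost of fighting data gravity concrete, a back-of-the-envelope transfer estimate is often enough to settle the placement question. Below is a generic sketch, not tied to any provider; the 70% link-efficiency default is an assumption standing in for protocol overhead and contention, not a measured constant.

```python
def transfer_hours(data_gb, link_mbps, efficiency=0.7):
    """Rough wall-clock hours to move bulk data over a WAN link.

    `efficiency` discounts protocol overhead and link contention;
    0.7 is an assumed default, not a measured value.
    """
    gigabits = data_gb * 8                       # convert gigabytes to gigabits
    effective_gbps = (link_mbps / 1000.0) * efficiency
    return gigabits / effective_gbps / 3600.0    # seconds -> hours
```

Even at a full, uncontended gigabit per second, a 10 TB repository takes the better part of a day to move—one reason co-locating data with its consuming applications usually beats shuttling it around.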
  * Map secure integration paths. Some services in your hybrid cloud may be software-as-a-service (SaaS) products that don’t offer private network tunnels for communication. When creating a hybrid application integration strategy, consider tools—such as the Informatica Cloud or SnapLogic—that make it possible to securely transfer data from public SaaS platforms to systems behind your corporate firewall.

  * Know your technical constraints. The applications in your data center are probably only limited by the hardware they run on. However, most multi-tenant cloud systems apply resource governors to make sure that no single consumer can swamp the platform with requests. Make sure that you understand the constraints of each public cloud in your hybrid architecture and refactor any integration processes that would obviously violate these constraints.

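A common pattern for living within those resource governors is to wrap throttled API calls in exponential backoff rather than hammering the platform with immediate retries. A minimal sketch in Python, assuming the platform signals throttling with an error (the `ThrottledError` class here is hypothetical, standing in for whatever exception your cloud's SDK raises):

```python
import time

class ThrottledError(Exception):
    """Raised when the cloud platform's resource governor rejects a request."""

def call_with_backoff(operation, max_retries=5, base_delay=0.5, sleep=time.sleep):
    """Retry a throttled operation, doubling the wait after each rejection."""
    for attempt in range(max_retries):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            sleep(base_delay * (2 ** attempt))
```

The injectable `sleep` parameter keeps the sketch testable; in production you would let it default to `time.sleep`.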
  * Design for failure. When systems span environments in a hybrid scenario, the risk of localized failures goes up. Microservices and distributed components make for a more flexible architecture. The flipside, however, is that your system requires greater resilience. Work with your architects and developers to ensure that hybrid cloud applications can fail fast or apply circuit breakers to bypass failed components.

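The fail-fast and circuit-breaker ideas above can be sketched in a few lines. This is a minimal illustration of the pattern, not a production implementation (libraries such as Netflix's Hystrix provide the full version): after a run of consecutive failures, calls are rejected immediately instead of waiting on a broken dependency, and after a cool-down period one trial call is allowed through.

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, fail fast for
    `reset_timeout` seconds instead of calling the broken dependency."""

    def __init__(self, threshold=3, reset_timeout=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, operation):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: the timeout elapsed, so allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # any success resets the count
        return result
```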
System Management—Work Smarter, Not Harder. This seems to be one of those areas that doesn’t factor heavily into a company’s first assessment of cloud costs. Ongoing maintenance is a part of nearly every server environment, unless you’re among the few who successfully run immutable servers. How can you mitigate this challenge?

  * Invest in configuration management. Configuration management tools like Chef, Ansible, Puppet, and Salt are now mainstream and you can find plenty of expert material on how to use each platform. Why do those tools matter? It’s one thing to have inconsistencies in a small server environment where manual intervention is annoying, but not catastrophic. It’s another thing entirely to tolerate “configuration drift” at scale! If you set up configuration management across your hybrid environment, it becomes possible to manage a constantly growing fleet of servers without corresponding increases in administrator headcount.

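Under the hood, tools like Chef and Puppet reduce to the same idea: compare desired state with actual state and apply only the difference, so that running the same configuration twice is a no-op. A toy sketch of that reconciliation loop (the dictionary model below is a simplification for illustration, not any tool's real data format):

```python
def reconcile(desired, actual):
    """Return only the settings that have drifted from the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def converge(desired, actual):
    """Idempotently apply the drifted settings; a second run changes nothing."""
    actual.update(reconcile(desired, actual))
    return actual
```

Run `reconcile` across every server in the fleet and you have a drift report; run `converge` and the drift is gone—which is why these tools scale without matching growth in administrator headcount.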
  * Look for ways to perform management in bulk. Even if you do not have a full configuration management platform in place, aggressively pursue options that let you manage your assets in bulk instead of one at a time. Use scripting to programmatically interact with many servers at once, or leverage group-based management capabilities found in platforms like CenturyLink Cloud.

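Bulk management can be as simple as fanning a task out across the fleet instead of looping over servers one at a time. A sketch using Python's standard thread pool—the `task` callable here is a placeholder for whatever per-server work (an SSH command, an API call) you actually need to run:

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_fleet(servers, task, max_workers=10):
    """Run `task(server)` across every server concurrently and return
    a {server: result} map, instead of administering hosts one by one."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(servers, pool.map(task, servers)))
```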
  * Consider agent-based monitoring solutions that feed a centralized repository. In the public cloud, you will likely not have the same level of control that you have in a private environment. Don’t assume that you can tap into the underlying virtualization layer of the public cloud, but rather, use server-based agents that can provide granular machine-level statistics. If you want to apply a standardized alerting process across your hybrid cloud, collect all the monitoring data into a centralized repository where it can be analyzed and acted on.

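The agent-plus-repository idea can be modeled in a few lines: agents on each server emit samples, one central store ingests them, and the alerting rules run once against everything regardless of which environment a host lives in. The in-memory store and the 90% CPU threshold below are illustrative assumptions, not a real monitoring product's API:

```python
class MetricsRepository:
    """Central store that agents from every environment report into,
    so alerting logic lives in one place."""

    def __init__(self):
        self.samples = []

    def ingest(self, sample):
        self.samples.append(sample)

    def alerts(self, cpu_threshold=90.0):
        """Apply one standardized rule across public and private hosts."""
        return sorted({s["host"] for s in self.samples
                       if s["cpu_pct"] > cpu_threshold})

def agent_sample(host, cpu_pct):
    # A real agent would read /proc or a platform API; values are stubbed here.
    return {"host": host, "cpu_pct": cpu_pct}
```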
  * Make it easy to find cloud resources. Classic configuration management databases won’t survive in a hybrid environment. Clouds are defined by their elasticity, and servers will be created and torn down at will. Trying to manually keep a tracking system in place is a fool’s errand. Instead, figure out how to organize and find your dynamic compute resources in a way that helps your team. In the CenturyLink Cloud, you can use Server Groups to create collections of related servers, and leverage our Global Search to quickly find assets across any data center.

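Whatever platform you use, the underlying mechanism is usually a tag- or group-based query over a dynamic inventory rather than a hand-maintained record. A generic sketch of that lookup (the inventory shape is invented for illustration):

```python
def find_servers(inventory, **tags):
    """Filter a dynamic inventory by tag values; servers come and go,
    so we query for them rather than track them by hand."""
    return [s["name"] for s in inventory
            if all(s.get("tags", {}).get(k) == v for k, v in tags.items())]
```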
What’s Next?

System management can be an unexpected – but critical – new cost of hybrid cloud computing. Your focus should be on streamlining processes and managing at scale, not preserving every aspect of the current state. Data and application integration strategies for hybrid cloud help you place workloads where they make the most sense without sacrificing the benefits of each environment. In our final article of the series, we will look at how to succeed in the face of compatibility, portability, and tooling challenges.