Posts Tagged: Automation

Spin Up a Cloud Relational Database in Minutes

October 10, 2016
By Daniel Morton

Relational DB Logo

Reading Time: about 4 minutes

CenturyLink demonstrates its commitment to automation and productivity with our Hybrid IT Services Platform, which makes it easy for you to manage and control your virtualized application and network resources.

Relational DB is a MySQL-compatible Relational Database-as-a-Service (RDBaaS) designed to meet developers' need for rapid software development while providing accelerated, on-demand IT. For example, you can easily provision database environments without incurring any of the usual costs related to dedicated hardware and licensing. Developers can spin up high-performing, dynamic MySQL-compatible databases instantly to support their software delivery needs while minimizing infrastructure costs and reducing the time required to manage them.
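
Because the service is MySQL-compatible, any client that speaks the MySQL protocol can talk to a provisioned instance. As a minimal sketch -- the host, database, and credentials below are placeholders for the values you receive at provisioning time -- a developer could verify connectivity with a few lines of Python and the PyMySQL driver:

```python
# Minimal connectivity check against a MySQL-compatible Relational DB instance.
# Host, database, user, and password are placeholders -- substitute the values
# shown for your instance after provisioning.
import pymysql

connection = pymysql.connect(
    host="relationaldb.example.net",   # hypothetical endpoint from provisioning
    port=3306,
    user="appuser",
    password="s3cret",
    database="inventory",
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print("Connected, server version:", cursor.fetchone()[0])
finally:
    connection.close()
```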

Deployment Options

Companies look for an advantage in the marketplace while also looking to control costs. There are essentially two database deployment options facing enterprises both large and small. In either case, the choice begins with defining your need; the decision is determining which of the two makes the most sense for your business.

Traditional
• Staff the required database administrators
• Procure or rent the hardware
• Define infrastructure requirements
• Define maintenance and downtime process
• Manage security
• Implement load balancing
• Set up backup, failover, and failback procedures
• Database application installation, configuration, automation, tuning, ...

Self-Service Relational DB
• Define the budget
• Consume services on an as-needed basis
• Scale up or down as necessary

Read on...

Hadoop Add-On Components Made Easy: Click to Install

May 5, 2016
By Daniel Morton

Big Data Footprint

Hadoop is one of the hottest enterprise big data technologies in the cloud today. To make big data in the cloud easier for our customers, eight Cloudera Hadoop Add-on Services are now available for CenturyLink Cloud®. Through an ecosystem of open-source components, Hadoop Add-on Services fundamentally change the way enterprises store, process, and analyze data to solve all kinds of big data problems.

For many, Hadoop is too difficult to deploy and manage. Automating a four-node cluster can be a daunting task. But not with the Cloudera Blueprint for CenturyLink Cloud! We've taken the complexity out of Hadoop by automating the dozens of deployment steps associated with a new cluster environment. We reduced all these steps to just a few clicks of the mouse.

Benefits of Cloudera Blueprint for CenturyLink Cloud

CenturyLink's Cloudera Blueprint simplifies the process of installing, configuring, and adding components for an entire Hadoop cluster. The Blueprints come in one-server and four-server configurations. If you need more than four nodes, additional nodes can be added in minutes by upgrading the Cloudera Blueprint version.

  • Log into the Control Portal.
  • Search for Cloudera in the Blueprints library.
  • Click the Blueprint version and cluster configuration you want.
  • Fill in the appropriate details.
  • Select your Cloudera version and
  • ...

    Read on...

    Meet Runner, the Newest Multi-Cloud Automation and Orchestration Service

    May 2, 2016
    By Chris Kent, Product Owner

    CenturyLink Cloud Runner from CenturyLink Cloud on Vimeo.

    CenturyLink is excited to announce the launch of Runner, a configuration management and orchestration service that works across Hybrid IT architectures and diverse cloud environments. Runner addresses the potential time and resource drain confronting organizations that want their own private clouds. It reduces private cloud complexity and administrative workload by allowing for fast, easy automation of infrastructure in any cloud or data center -- on the CenturyLink Cloud® Platform, as well as on third-party cloud providers and on-premises infrastructure and devices. With Runner, you can quickly provision and modify resources in any environment.

    What is Runner?

    Runner is automation made simple! Runner exposes an open source automation and orchestration engine as a service. On top of the engine, we’ve created custom services and APIs to enhance job execution capabilities. Runner was created to enable users to quickly and efficiently manage their infrastructure, wherever it is. Runner securely connects customers to their infrastructure whether on the CenturyLink Platform, other clouds, or private data centers, allowing for both push and pull-based communication. Whether provisioning, configuring, or deploying, Runner makes it easy to quickly create and run jobs, report on the...

    Read on...

    Migration and DR to CLC: Lessons Learned from the Front Lines

    April 25, 2016
    By Gautam Thockchom

    Periodically, we turn over control of the CenturyLink Cloud® blog to members of our certified technology Ecosystem to share how they leverage our platform to enable customer success. This week’s guest author from the Cloud Marketplace Provider Program is Gautam Thockchom of Sureline Systems.

    Sureline is leading the way with a complete, easy-to-deploy application mobility solution: it is flexible, delivers the highest-quality recovery points and replicates them remotely for safety, provides zero-data-loss failover and failback, and makes it easy to test disaster recovery (DR) plans frequently without locking the customer into a specific cloud. We’re an industry leader in cloud migration, DR software, and business continuity and disaster recovery (BCDR) solutions. Sureline fundamentally solves the problem of any-to-any DR and migration, enabling seamless data migration and DR from any environment -- physical or virtual -- to CenturyLink Cloud.

    SureEdge migration diagram

    In our experience moving customers to the cloud and protecting thousands of machines on the cloud, we've noticed that customers are quite serious about beginning the conversion process. However, a lot of cloud projects are slow to start because of a simple reason -- the question of how to move to the cloud is typically not answered clearly.

    In order...

    Read on...

    How to Ansible with Runner

    January 25, 2016
    By Chris Kent - Product Owner, Runner

    how-ansible-logo

    Introduction

    Here at CenturyLink Cloud, we use a technology called Ansible pretty extensively throughout our platform. Ansible is an IT automation and orchestration engine that enables configuration management, provisioning, deployment, as well as many other IT needs. Runner wraps all of Ansible’s goodness into a Job Service, along with other micro services such as SSH, VPN, status, queuing, and scheduling. Next we'll look at what Runner is and how you would use it.

    What Is Runner?

    Runner is a new product from CenturyLink Cloud that enables fast, easy automation and orchestration on the CenturyLink Cloud Platform, as well as third-party cloud providers and on-premises infrastructure and devices. Runner provides the ability to quickly provision and modify resources on any environment, and gives users a true Hybrid IT solution, regardless of where their resources are.

    On a more granular level, Runner is an automation and orchestration engine that we expose as a service, coupled with the services mentioned below that enhance the Runner experience. Runner, at its core, is an Ansible engine. On top of that engine sit several custom services and APIs we’ve created, many of them built in tandem with the Runner job service to enhance its job execution capabilities.
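
    We won’t reproduce the Runner job API here, but conceptually, submitting a job is just an authenticated HTTP call that names a playbook and its targets. The Python sketch below is purely illustrative -- the endpoint, payload fields, and token are hypothetical, not Runner’s documented API:

```python
# Illustrative only: submits an automation "job" to a hypothetical REST endpoint.
# The URL, payload shape, and auth header are placeholders, not Runner's real API.
import requests

RUNNER_API = "https://runner.example.com/v1/jobs"   # hypothetical endpoint
TOKEN = "your-api-token"                            # hypothetical token

job = {
    "playbook": "deploy_web_tier.yml",   # an Ansible playbook to execute
    "targets": ["web01", "web02"],       # hosts or groups to run against
    "extra_vars": {"app_version": "1.4.2"},
}

resp = requests.post(
    RUNNER_API,
    json=job,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Job queued with id:", resp.json().get("id"))
```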

    Here is a...

    Read on...

    Managed OS Services -- Working Hard to be Better for You

    January 13, 2016
    By Ben Swoboda

    By design, clicking the “Make Managed” button while configuring a server is very simple. It may be hard to believe, but that effortless task activates thousands of trained technicians, worldwide, to help you with your server (and you’re only paying them pennies per hour!). Everything after that – the ongoing support - should be as effortless as clicking that button.

    Think of all the data a technician must consider to serve a customer: sales data, product data, customer data, network, monitoring, communication, change management, performance, and incident histories, just to name a few. The challenge is to funnel all that information to the technician supporting you in a way that makes sense. We’ve lived up to that challenge for years, but at CenturyLink, we are constantly striving to do a better job for our customers.

    Our Operations team creates and updates internal tools that allow them to serve our customers in the best ways possible. They utilize customizable tools which enable them to respond quicker, more efficiently, and with greater impact. Here are some recent updates that we made to our tools so that we can continue to improve our service.

    Monitoring Updates

    Recent improvements to our monitoring solution eliminate waste and streamline processes....

    Read on...

    How to Deliver Applications with High Availability

    December 17, 2015
    By

    High Availability

    How many times have you felt less-than-confident about the stability of your most critical applications in case of a disaster? We have all experienced the frustration of losing control of our systems, processes, applications and infrastructure resources while not being able to easily recover from downtime (planned or unplanned).

    With the latest release of Cloud Application Manager, those times have come to an end.

    Cloud Application Manager has expanded its support to a new type of deployment for scenarios where a failover is needed. Our appliance now supports an active/passive topology. So…how does it work?

    The recommended way to implement this capability is to run a two-node cluster: a main node and a backup node that shares its data. This active/passive configuration offers resiliency in the event of a failover and is fairly easy to operate, as the backup node can be activated and configured so you can continue working normally. Here are some important considerations to get you started.

    Make a snapshot of your Cloud Application Manager VM so you can recover to the previous instance before setting a new appliance as replica. Once a failure is detected, you can execute the failover via scripts and ensure that no data is lost or compromised.

    Let’s...

    Read on...

    Deliver Leading Hybrid Cloud Management for Business Critical Applications

    November 19, 2015
    By

    Visualization

    New application architecture visualization advances enterprise deployment and management.

    We’ve heard your feedback loud and clear. Today marks the formal release of Cloud Application Manager 3.5, which is filled with new capabilities that enable IT to drive value, productivity, and confidence throughout the enterprise. We continue to focus on enhancements that simplify the daily routine of IT experts by providing a highly adaptive model for standardizing deployments and environment provisioning.

    Company Feedback
    451 Research “Cloud Application Manager does a good job of supporting the application deployment process and infrastructure management with its self-service portal and box blueprints for popular and widely-used software components, including containers. This means application owners and IT operations can more effectively collaborate on applications and releases with visualization for traditional, production, mission-critical and multi-tiered applications.” ~ Jay Lyman, Research Manager, Cloud Management and Containers, 451 Research
    Brainshark “Cloud Application Manager allows us to create predictable and repeatable processes removing the need for manual infrastructure provisioning and configuration. The solution offers a very powerful IT workflow and provisioning automation platform combined with an intuitive user interface we are proud to expose to our internal customers. Adding Cloud Application Manager to our continuous delivery pipeline has
    ...

    Read on...

    Preview Application Boxes: Deploy Like Never Before

    November 4, 2015
    By

    Application

    A couple of months ago, we announced general availability of public boxes, a knowledge repository of sample deployments for popular application stacks.

    But, we couldn’t just leave it there because we know that customers want more. How about simplifying your application deployment processes? Cloud Application Manager is now enabling customers to deploy complete application stacks with Application Boxes which include all of the dependent components and infrastructure that can be deployed in a predictable manner.

    Application boxes are inspired by the need to reduce and simplify the deployment of complex applications with multiple tiers and multiple instances. To deploy most applications, you need several instances cooperating together in a logical way. Application boxes are a way to define and reuse several boxes that work together to run an application.

    App Box

    Application boxes are smart boxes that allow you to define your topology, add boxes and bindings, and choose the variables for each one: name, version, tags, policy, etc in only a few minutes. Modeling your applications in these easy-to-deploy boxes prevents errors and saves time. Isn’t it magical?

    Get a step closer to democratizing software automation configuration and infrastructure. Learn more about Application Boxes and how they work. Create one of your own here.

    Want

    ...

    Read on...

    Deploying Environments and Applications with Source Code: The Cloud-based Approach to Application Creation

    October 28, 2015
    By Chris Kent

    Runner

    Creating and deploying applications is like a carefully choreographed dance. One that requires balancing business goals with expectation setting. This feat includes steps like writing groundbreaking code, solving timing restrictions, and overcoming the technical obstacles between all of the teams involved. Ensuring that the dependencies and timelines between the development and testing teams are in sync and align with the hosting company and their assigned help desk engineers can be a daunting task in itself, and it often creates a bottleneck in the process. Even though this dance can be repeated countless times, it’s usually this last step that is clumsy and always results in a different outcome. What if there was a more streamlined, succinct approach to creating and duplicating environments?

    What Is Killing Your Deployment Timeline?

    Often, the environments in which applications are developed, tested, and deployed are one of the biggest variables that can impact the overall timeline. The elements of the environments are the same almost every time: a carefully constructed staging environment, which is a cleaner, more stable version of the development environment, both of which closely mimic the QA and production environments. When the time comes to deploy to a live environment, unless the environments closely align,...

    Read on...

    What Biotech Service Cytobank Did to Save $$$ and Boost Efficiency

    September 22, 2015
    By

    Cytobank

    Today, we are proud to highlight the success story of one of our biotech customers, Cytobank. Cytobank is a cloud biopharma service that supports research labs around the world. We spoke to Robin Lee Powell, Director of IT Operations at Cytobank who shared how they leverage Cloud Application Manager to save on infrastructure costs and increase team efficiency.

    Cytobank helps researchers study life-threatening diseases such as cancer. Scientists around the globe rely on Cytobank to organize, share, and analyze single-cell cytometry data on a massive scale. Visualizing single-cell data involves loads of data transfer and often heavy computation between the client and the backend. On top of that, each customer site is completely isolated from and unique compared to the others. Cytobank not only serves customer labs but also provides demo, development, and QA environments for their offshore team to test site changes.

    All this complexity requires high-capacity resources for many uniquely configured site environments. Since the requirements and resource demands of each site vary, live usage metrics help determine when to scale resources up or down. To fulfill the unique demands of customer, demo, and QA site environments, Cytobank leverages Cloud Application Manager as an integrated platform.

    Self-Service Catalog to Scale Efficiently

    At Cytobank,...

    Read on...

    How to Achieve the Top 3 IT Ops Objectives

    September 15, 2015
    By

    Jenkins Logo

    Back from an exciting week at the Jenkins User Conference in Santa Clara, I want to thank all of you who stopped by the Cloud Application Manager booth to share your thoughts and questions. From the 600+ attendees representing various enterprises, some key themes emerged that reflect their IT organizational challenges and objectives. I’ll go over the top three.

    A Desire to Increase Deployment Frequency

    Gene Kim, a keynote speaker at the conference, rightly said, “Deploy smaller changes more frequently.” While most enterprises deploy applications in some form or fashion, a huge number of manual tasks and steps slows down the process. It’s a combination of a process issue and a tools issue.

    Confidence in Successful Deployments

    Many IT and DevOps teams experience sleepless nights and are on pins and needles when it is time to deploy into production. Reducing errors and delivering predictable, stable applications in production are key. As Gene Kim said at the conference, “At the end of each sprint, we must have working and shippable code… demonstrated in an environment that resembles production.”

    Faster Lead-Time to Deployment

    Not only are frequent deployments a good thing, but reducing the overall time of a single deployment is a major IT goal. Deployment orchestration is the next step in driving...

    Read on...

    3 Steps to Connect Service Bindings of Complex Deployments

    September 9, 2015
    By

    Bindings

    Bindings make it easy to connect services together. They enable components of large-scale, multi-tier applications to interconnect in a virtual cloud deployment that can span hybrid clouds. Recently, Cloud Application Manager made bindings even more powerful. Now at deploy time, services can automatically detect dependencies — with the help of binding tags.

    Binding tags boost complex deployments in a couple of ways:

    • Dynamic bindings. Tagged bindings discover instance connectivity dynamically. They serve as an auto-discovery mechanism where instances with binding tags can automatically connect to other instances that match those tags.
    • One-to-many bindings. Bindings can connect one or many services together, again using tags. Previously, every connection required an exclusive binding. That’s no longer necessary.

    We’ll use an example to see how bindings work. Let’s suppose that an Nginx loadbalancer needs to detect freshly launched Node.js instances and automatically add them to its loadbalancing pool. We do three things to achieve this scenario. The first two are part of box automation:

  • Define binding variables

  • Configure bindings for your application

  • Tag bindings for instance connectivity

    Step 1. Define Binding Variables

    Bindings are defined as variables in box automation. In the Nginx loadbalancer box, for example, we defined that the binding can connect to instances of...
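
    To make the tag-matching idea concrete, here is a purely conceptual Python sketch -- not Cloud Application Manager’s binding syntax -- of how instances tagged nodejs could be discovered and folded into an Nginx upstream pool:

```python
# Conceptual illustration of tag-based binding discovery: collect the instances
# whose tags match a binding tag and render them as an Nginx upstream block.
# The instance list is hard-coded here; in a real deployment it would come from
# whatever inventory the automation layer exposes.

instances = [
    {"name": "node-1", "ip": "10.0.1.11", "port": 3000, "tags": {"nodejs"}},
    {"name": "node-2", "ip": "10.0.1.12", "port": 3000, "tags": {"nodejs"}},
    {"name": "db-1",   "ip": "10.0.2.21", "port": 5432, "tags": {"postgres"}},
]

def matching(binding_tag, pool):
    """Return instances whose tags include the binding tag."""
    return [i for i in pool if binding_tag in i["tags"]]

def render_upstream(name, members):
    servers = "\n".join(f"    server {m['ip']}:{m['port']};" for m in members)
    return f"upstream {name} {{\n{servers}\n}}"

print(render_upstream("node_app", matching("nodejs", instances)))
```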

    Read on...

    Why Is the Developer’s Dream an Operations Nightmare?

    August 19, 2015
    By

    Face Palm

    As a developer at Cloud Application Manager, I wear the IT Ops hat just as often as I spend time writing code. So I understand all too well the pain IT Ops endure to keep things running smoothly after a production update. In the continuous delivery world of devops where small, incremental code changes are deployed every day, chances are that IT Ops have to constantly put out fires. Luckily, in my own job, I’ve been able to leverage a combo of tools and workflows to catch the problems right where they originate — in the development environment.

    Defining Reusable Components for the Development Stack

    I believe that a developer’s dream should not turn into a nightmare for IT Ops. So I rely on a set of workflows and tools that help IT Ops folks like me sleep better at night. One such method is to provide a base-level environment for the development stack. This stack usually comprises a company-approved, base-level runtime that is tested and production certified. The idea is that when it’s time to push updates to staging or production, the code is more stable and less likely to contain bugs or errors.

    Let me give a simple example. At...

    Read on...

    How to Securely Hook Up a Cloud Management Platform in Your Private Datacenter

    July 30, 2015
    By

    Platform Security

    Most customers prefer cloud application lifecycle management as a SaaS service. But we’re conscious of companies whose high-security constraints -- like limited datacenter Internet access or fully controlled periodic backups -- require an on-premises solution. For those companies and DevOps users, Cloud Application Manager is available as a virtual appliance.

    Today, the Cloud Application Manager virtual appliance ships as an OVF package for vCenter vSphere and in QCOW2 format for OpenStack. To get all the same functionality as the SaaS solution, the only thing you have to do is install the virtual appliance on your virtualization platform and plug it into your datacenter network. At that point, you have Cloud Application Manager hosted on your infrastructure, with the same controls to manage, back up, and restore as you have for other systems in your datacenter.

    At Cloud Application Manager, we care deeply about security, and for this reason all communication for both the SaaS and virtual appliance solutions is encrypted. By default, we ship the virtual appliance with a certificate signed by Cloud Application Manager. But using the appliance setup console, you can set up a certificate signed by a trusted CA or install your own self-signed certificate.
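
    For background, a self-signed certificate is simply a locally generated key pair plus a certificate signed with that same key. As a rough sketch (using Python’s cryptography package; the common name is a placeholder for your appliance’s hostname), generating one looks like this:

```python
# Generate a private key and a matching self-signed certificate with the
# "cryptography" package. The common name is a placeholder for your appliance's
# hostname; adjust validity and key size to your own policy.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "cam.example.internal")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

with open("key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```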

    To create and install a self-signed...

    Read on...

    NTT Cloud Reality Check Reveals Global Enterprise Challenges to Cloud Adoption

    July 23, 2015
    By

    NTT Cloud Reality Check

    Earlier this year, NTT Communications commissioned a poll of 1,600 ICT decision makers -- including IT directors, CIOs, and CTOs -- across the USA and Europe. These are people in charge of setting tactical and strategic policies for development teams. The NTT poll asked them various questions, such as: Which applications best suit which infrastructure? Is there a link between application maturity and whether an application suits the cloud versus the corporate datacenter?

    The results from the poll and the analysis form the highlights of the NTT Cloud Reality Check report. The report shows key trends by country, industry, and company size. We asked Len Padilla, Vice President of Product Strategy at NTT Europe, about some of the key insights.

    NTT found that the ‘which cloud’ decision is not merely a technical one but a complex one. Can you explain?

    Let me give a little bit of background on the research first. What we were looking for with the research and the survey was to understand in which environment people were putting what kinds of applications.

    Remote IT is a spectrum with a lot of available options. We encountered everything from customers running applications in their datacenter and managing it themselves to having their datacenter and having...

    Read on...

    A Primer on Private, Hybrid, or Public Cloud Deployments in vCloud

    July 21, 2015
    By

    vCloud Blog

    If you are familiar with vCenter and vSphere, you’ve probably heard of vCloud Director and vCloud Air, or even plan to migrate your existing vSphere platform to vCloud. But for those new to vCloud, what is it?

    Here I’ll explain both vCloud Air and vCloud Director. vCloud Air is a hybrid cloud platform for high-performance production workloads. It provides virtual compute, storage, and networking infrastructure built on VMware vSphere. It offers services such as virtual private cloud, dedicated cloud, and disaster recovery, and is available both by subscription and on demand.

    vCloud Director, on the other hand, helps with building secure private clouds. Since it runs on the top of vCenter, it hides vCenter cluster resources from the vCloud users thereby providing a level of abstraction.

    Support for vCloud in Cloud Application Manager

    Cloud Application Manager integrates both vCloud Air and vCloud Director through the VMware vCloud Director API, which also works with vCloud Air. To start deploying workloads to either platform, register your vCloud Director or vCloud Air account as a provider in Cloud Application Manager. Cloud Application Manager identifies the organizations, virtual datacenters, and catalogs the user account can access and makes them available for automated deployments from Cloud Application Manager.

    Add vCloud Provider

    How vCloud

    ...

    Read on...

    Chef-Provisioning-vSphere driver now open sourced

    July 20, 2015
    By Matt Wrock

    I am happy to announce that we have recently open sourced our Chef provisioning driver for vSphere. This driver makes it easy to provision Chef nodes on VMware vSphere infrastructure.

    What is Chef-Provisioning?

    Chef-Provisioning (formerly known as Chef-Metal) is a fairly new offering from Chef that allows you to create Chef recipes to bootstrap machines. It extends the functionality of a recipe typically used for defining an individual node to potentially define all infrastructure for a distributed application or even an entire data center.

    Chef-Provisioning introduces a collection of new resources to your recipes and at the center of these is the machine resource. With the machine resource one describes:

    • Hypervisor or cloud-specific properties of a machine
    • Node attributes to associate with the machine
    • A runlist that the created machine will converge

    Chef Provisioning exposes a driver interface making it possible for any hypervisor, cloud or even some bare metal infrastructures to interact with these machine resources. There are currently several drivers available and today, CenturyLink introduces our own driver for vSphere.

    Chef Provisioning for the Enterprise

    The CenturyLink-released driver fills in a significant gap for provisioning Chef nodes in enterprise shops that use VMware for their core virtualization technology. We began working on this in the spring of...

    Read on...

    3 Fundamental Ways Cloud Application Manager Can Impact Your Bottom Line

    July 9, 2015
    By

    Soccer Net

    Watching the Women’s World Cup final last weekend was thrilling. Of course it was fun to see the USA women’s team win the soccer championship (football, for those living outside of the US). Having coached my son’s soccer team for many years, I was awed by the team’s mastery of soccer fundamentals and appreciated how much drilling and practice led them to the championship.

    In business, the win isn’t always so crystal clear. From the executive suite, the demand on application teams and DevOps is to deliver faster, with higher quality. That’s the win – the outcome. To achieve that outcome, it’s necessary to drill, practice, and master the fundamentals of agile delivery.

    At the heart of agile delivery, DevOps’ promise is “speed to value” for organizations. That is what businesses are seeking as they come under pressure to deliver value faster. There’s a lot of discussion swirling about how to do it and how to measure the ROI. In speaking with enterprise prospects and customers, it’s clear that while it would be awesome to have a sophisticated model to measure the ROI of DevOps tools and resources, most teams are seeking foundational progress.

    1. “Our deployment process is chaotic and not...

    Read on...

    The Cloud Application Cycle Is Broken — Can It Be Fixed?

    June 30, 2015
    By

    Open Source

    I’ve had the honor of sitting in the same room with many customers and prospects who share their business models and technology challenges. In the exact words of one of our customers, “Our cloud deployment is a horrendous process.” From others, “Our developers work in their own private cloud, and they send us the release code and it takes us days to weeks to get it running in production.”

    Doesn’t the cloud promise faster, more agile development with reduced costs? Why then isn’t this promise being fulfilled at most companies adopting DevOps and cloud?

    Why Is It Broken?

    I keep hearing a recurring theme: heavy use of open source tools, and development and production environments that don’t mirror each other. Developers have their favorite list of tools, and operations has theirs. Neither group understands how to use and configure the tools the other group is using. Throw in multiple versions of these tools, and it quickly becomes an unwieldy mess. Here’s a sample set of technologies and stacks we come across:

    We commonly see development teams using these technologies:

    • Python
    • MySQL
    • Git
    • Jenkins
    • MongoDB
    • Node.js

    We commonly see operations teams using these technologies:

    • Chef
    • Puppet
    • Ansible
    • Nginx
    • Splunk
    • New Relic
    • AppDynamics

    And we see development, test, QA, and production run on these clouds and platforms:

    • Amazon AWS
    • Google Compute
    • Microsoft Azure
    • VMware
    ...

    Read on...

    Automated Patching: Improving security and efficiency in the Cloud

    June 25, 2015
    By Navin Arora, Operating Systems Product Manager

    Cloud computing has automated the traditional IT world, reducing application development time while increasing speed and agility. Most of that automation has focused on short-term tasks, like spinning servers up and down as demand changes. However, when running critical applications, it’s just as important to keep servers patched and constantly up to date.

    Maintaining server patching is as crucial in the IT world as maintaining our cars is in day-to-day life. Patching keeps servers healthy so they can fight malicious viruses, repel hacker attacks, and perform like well-tuned cars. Most managed hosting customers have their servers patched manually by scheduling the work with their service provider. Self-managed customers, however, have to patch their own servers -- a tedious, time-consuming process, as they must manually check for updates and install them.

    CenturyLink Cloud now offers Patching as a Service to all our customers -- both those we manage and those that are self-managed -- providing an automated, self-service patching approach that is simple and delivers greater cloud security.

    CenturyLink customers can now patch their servers, whenever they want, through any of the following three methods:

  • Blueprint- simply run the appropriate blueprint for the OS -

  • ...

    Read on...

    Secrets to Win Over DevOps Buyers

    June 1, 2015
    By

    Buying Process

    In its recent report, Tech Go-to-Market: How to Win With DevOps Buyers, Gartner Research looks at the buying process of DevOps-centered organizations. And Gartner makes an important point. For technology providers to sell to organizations with a DevOps culture, traditional sales approaches don’t fly. In fact, developers and operations teams—whose synergy we collectively call DevOps—eschew traditional marketing and sales pitches. They are so technically discerning that they sniff out marketing lingo from a real product offering. The real danger is they can shun a product forever when it comes from marketing or sales channels. So what’s the best way to win over DevOps teams?

    Technology providers familiar with selling to traditional I&O (infrastructure and operations) teams find themselves on unfamiliar ground in a DevOps-driven culture. A big change Gartner notes is that workloads increasingly migrate from the traditional datacenter to public or multi-cloud infrastructure such as AWS, Azure, Google Cloud, and vSphere.

    Gartner found that migrating workloads to the public cloud shifts decision-making power away from the traditional IT buyers to DevOps and agile practitioners. These personas include developers, DevOps managers, release managers, build/automation managers, and architects who influence buying decisions bottom-up. Moreover, DevOps philosophies and practices vary so much from one organization...

    Read on...

    3 Golden Rules of Microservice Deployments

    May 7, 2015
    By

    Honey Comb

    As a developer, you value the principles of SOA. You aspire to build applications as a set of consumable services via endpoints. Remember how Amazon used SOA to build the AWS platform and how Google is emulating AWS? However, not all is hunky-dory in the SOA world.

    Developing is one thing but running, managing, and maintaining services is a whole other beast. When it comes to the latter, many enterprises still act monolithic. They run and manage applications services as a unit on one or many servers. This approach fails to scale when the services themselves scale or when you need to update and maintain them regularly. So do you lament over the spiking costs and time spent on these efforts or fix the problem?

    Recent trends point to microservices as the answer. By definition, microservices are much smaller than services; in fact, Wikipedia says a microservice performs a small task, often just one. There are many articles that go deep into microservice architecture, but we cover an important part here: deployment automation. In other words, our daily job.

    The self-contained, independent, and reusable principles of microservice architecture help solve the problem of scaling and maintaining application services.

    • Self-contained. microservices are
    ...

    Read on...

    What’s the Cost of Build Versus Buy DevOps?

    May 5, 2015
    By

    Honey Comb

    Nearly every business today relies on faster, innovative technology to succeed in the marketplace. How about the coffee you drink, or the movie you’re watching, or the phone on which you’re watching it? Name every business or walk of life. You can argue that technology serves to make it better. If DevOps helps deliver the best services and experiences to you, then should companies making the coffee, movies, and phones also make their own DevOps solutions?

    I completely empathize with the challenges the developers and DevOps teams face in transforming a team or company from traditional development and delivery to an agile lifecycle. I experienced this firsthand at Trend. I led a business unit that acquired a SaaS company for online backup and storage; this young company updated and deployed code several times a day. It was painful to integrate their processes into ours. I wish Cloud Application Manager had been an option then. I cringe when I consider the time and costs of lost productivity, wasted development time, and delays in getting to market.

    Faster Deployment Nirvana

    Every day, I speak to several technology companies. Speed is at the top of their mind. They all experience the pain of not deploying and...

    Read on...

    Automate Like Never Before with Perfect vCenter Placement

    April 16, 2015
    By

    vCenter Placement

    I regularly consume resources from the VMware vCenter private cloud setup here at Cloud Application Manager. I’m not alone in that. vCenter is the most commonly used private cloud out there. Enterprises and developers, you will see why the changes we recently made will help you scale automated deployments in vCenter.

    Those of you deploying to vCenter know about datacenters, templates, hosts, clusters, resource pools, compute resources, datastores, and such. Cloud Application Manager simplifies vCenter deployments by abstracting this infrastructure metadata from the application metadata in boxes. The infrastructure metadata lives in the deployment profile that you select right before deploying. Here we made it easy to visualize exactly where in vCenter you’d like to place your VM and consume resources.

    Placing Your VM

    Until recently, to deploy an instance in vCenter, you selected a datacenter and from it a resource pool and virtual network. If you had several clusters or standalone hosts not part of a resource pool, you did not see them. Now you do.

    Compute Resource

    Now you see even the clusters and standalone hosts not part of a resource pool in a datacenter. In effect, you can place your VM in a resource pool, a cluster, or a standalone host provided they...

    Read on...

    AWS GovCloud and Cloud Application Manager: A Complementary Union

    April 15, 2015
    By

    GovCloud

    AWS GovCloud is one of the several popular clouds where Cloud Application Manager orchestrates and automates the lifecycle of applications. AWS GovCloud (US) is an isolated AWS Region for US government agencies and businesses to move sensitive workloads primarily because of regulatory and compliance requirements. If you’re curious about the use-cases for AWS GovCloud and the value Cloud Application Manager adds, you’re at the right spot.

    Amazon GovCloud targets two kinds of usage:

    • Businesses that don’t have ITAR data but want to embrace the extra security layer in this region.
    • Government agencies or businesses with confidential data that must enforce regulatory compliance and security measures.

    AWS GovCloud Use Cases

    I’ll talk about some of the key scenarios where it makes sense to use AWS GovCloud:

    • High availability is important for mission critical apps in Oracle, SAP, and Windows. Such apps rely on fault-tolerant availability zones.
    • High-performance computing matters for apps that process big data. They need massive clusters to spin up and process large data loads in a very short time.
    • High data volume means higher primary and backup storage needs. Such storage should meet data security and compliance standards.
    • Web applications scale with user demand. Predictable workloads need reserved instances and in times of spikes, those payloads require
    ...

    Read on...

    Self-Serve Ganglia Monitoring

    April 8, 2015
    By

    Ganglia

    As an operations team engineer, I consider monitoring virtual infrastructure health a pretty big deal. You aim to catch and resolve issues with hardware, devices, storage, memory, network, hosts, and the like in computing grids and clusters as early as possible. Monitoring systems like Ganglia are a must-have for such purposes. I’ll show a quick setup of Ganglia monitoring via Cloud Application Manager that you can add to any deployment to track infrastructure performance.

    In Cloud Application Manager, built-in monitoring is available as a service in a self-serve catalog for your engineering and operations teams to launch to any cloud infrastructure on-demand. Cloud Application Manager supports a wide range of configuration management tools, orchestrates provisioning on popular cloud providers, and allows teams to collaborate on deployment assets. If you follow or use Cloud Application Manager, you know that complex deployments happen much faster in a few clicks versus long hours.

    Ganglia is a useful monitoring service for large-scale web applications. It provides distributed monitoring at scale for clusters and grids. It’s popular because it’s easy to set up and tracks a ton of metrics. It monitors computing systems including hardware, storage, network, and software. You can port metrics for alerting and visualization by integrating...
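
    Under the hood, each Ganglia gmond daemon publishes its cluster metrics as XML over TCP (port 8649 by default), so a quick health check doesn’t even need the web front end. A small Python sketch, assuming gmond is reachable on that default port:

```python
# Pull the raw metric XML from a Ganglia gmond daemon and print per-host
# load_one values. Assumes gmond's default xml_port (8649); adjust if yours differs.
import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host, port=8649):
    data = b""
    with socket.create_connection((host, port), timeout=10) as sock:
        while chunk := sock.recv(4096):
            data += chunk
    return data.decode("utf-8", errors="replace")

xml_doc = fetch_gmond_xml("monitoring.example.internal")  # placeholder host
root = ET.fromstring(xml_doc)

for host in root.iter("HOST"):
    for metric in host.iter("METRIC"):
        if metric.get("NAME") == "load_one":
            print(f"{host.get('NAME')}: load_one = {metric.get('VAL')}")
```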

    Read on...

    Three-in-One Benefits of LDAP Integration

    March 31, 2015
    By

    LDAP

    LDAP Groups is a new feature coming soon to Cloud Application Manager. It syncs LDAP groups in your organization with Cloud Application Manager workspaces. A team of users in your org can now sign in with their org credentials and right away start working in a team-assigned workspace in Cloud Application Manager.

    Consider how useful that is for your developers, operations engineers, or IT admins to access the same deployment assets and do their part in automating with necessary access levels.

    You already have LDAP single sign-on support in Cloud Application Manager today. Coupled with LDAP groups, you get advanced LDAP integration. You will be able to sync with groups of any Active Directory or OpenLDAP implementation in Cloud Application Manager. Through the Cloud Application Manager web or API interface, you can directly add LDAP groups as members of a workspace instead of searching and adding them one by one.
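
    As a rough illustration of what a group sync boils down to on the directory side -- this is not Cloud Application Manager’s own implementation -- here is a short Python sketch that uses the ldap3 library to list the members of an Active Directory group (server, credentials, and DNs are placeholders):

```python
# List the members of an LDAP/Active Directory group with the ldap3 library.
# Server address, bind credentials, and DNs below are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap://ad.example.internal", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_reader", password="s3cret", auto_bind=True)

conn.search(
    search_base="dc=example,dc=internal",
    search_filter="(&(objectClass=group)(cn=cloud-devops))",
    attributes=["member"],
)

for entry in conn.entries:
    print("Group:", entry.entry_dn)
    for member_dn in entry.member:
        print("  member:", member_dn)

conn.unbind()
```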

    If using the Cloud Application Manager Enterprise Edition with administrator access, you’ll find LDAP groups in the admin console. Once you sync with LDAP groups there, any user in your Cloud Application Manager organization can edit their team workspace and search or add the LDAP groups or a specific member of...

    Read on...

    Why Infrastructure Matters, But Also Doesn’t

    March 27, 2015
    By

    Secret Sauce

    Infrastructure is changing. It’s nearly impossible to design a product or application without carefully planning how it’ll deploy on specific infrastructure. I should know because in running a business that automates application deployments, I deal with infrastructure every day. So here’s my take on how I view infrastructure and its evolving changes.

    The notion of cloud computing and a dynamic datacenter has matured a lot in the past decade. First we moved from the monolithic IBM computers to datacenters on commodity servers. Now we’re on the journey of commoditizing and standardizing infrastructure.

    Cloud Reality Check

    A few years ago, when we envisioned what this journey would look like, we said infrastructure is a commodity, called the CIO defunct, and claimed platforms-as-a-service would fully abstract infrastructure. But look at the cloud reality now.

    • Infrastructure is a differentiating game. Though most providers offer the same things -- rich compute types, software-defined networking (SDN), and software-defined storage (SDS) -- some rate better in certain areas.
    • The CIO’s role is not redundant. She leads an important DevOps cultural transformation in the enterprise.
    • Platform as a service is only one part of the IaaS journey. Infrastructure is starting to become just like code in templates and containers.

    Cloud Provider Capabilities

    And the capabilities keep...

    Read on...

    Go Undercover with the Cloud Application Manager Agent

    March 24, 2015
    By

    Agent Architecture

    It’s no secret that Cloud Application Manager performs deployments on your remote virtual machines using an agent. But what goes on behind the scenes? What makes the agent tick? Join me for a deep dive.

    Though it’s not visibly apparent when you trigger deployments from the web or API, the agent is the software we install on every virtual machine you deploy from Cloud Application Manager. Its sole purpose is to handle box deployments on the VM or service. It executes event scripts and runs lifecycle operations triggered from the web or API calls. By itself, the agent does not contain any other logic: it executes whatever Cloud Application Manager tells it to do and sends back the logs of the output.

    Agent Architecture

    We built the agent based on three important principles of software architecture:

    • To be platform interoperable, that is, work on any OS or platform.
    • To be network interoperable, that is, communicate over any network configuration easily.
    • To cover a small footprint, that is, consume the least amount of machine resources.

    Platform interoperability is pretty important. The agent works across all platforms, on any OS and runtime libraries. It works cross-platform because it’s written in Python and doesn’t require any additional dependencies.
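
    As a purely conceptual sketch of what a small, dependency-light pull agent can look like -- this is not the actual Cloud Application Manager agent code, and the endpoint and payload shape are hypothetical -- the whole loop fits in a handful of lines of standard-library Python:

```python
# Conceptual pull-agent loop: ask a control plane for work, run the script it
# returns, and post the output back. Endpoint and payload shape are hypothetical;
# only standard-library modules are used, mirroring the "small footprint, no
# extra dependencies" idea described above.
import json
import subprocess
import time
import urllib.request

CONTROL_PLANE = "https://cam.example.internal/agent/v1"   # hypothetical

def fetch_job():
    with urllib.request.urlopen(f"{CONTROL_PLANE}/next-job", timeout=30) as resp:
        return json.load(resp)

def report(job_id, output):
    body = json.dumps({"job_id": job_id, "log": output}).encode()
    req = urllib.request.Request(
        f"{CONTROL_PLANE}/logs", data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)

while True:
    job = fetch_job()
    if job.get("script"):
        result = subprocess.run(
            ["/bin/sh", "-c", job["script"]],
            capture_output=True, text=True,
        )
        report(job["id"], result.stdout + result.stderr)
    time.sleep(10)   # poll interval
```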

    Agent Enhancements

    In recent times, we made...

    Read on...

    Semantic Versioning and Auto Updates: Your Backbone to Innovate

    March 19, 2015
    By

    Versioning

    Versioning is a critical part of software development. It grows in importance as team sizes and project complexity scale. Tools like Perforce, Subversion, and GitHub -- which attracts the most users -- address this challenge. Now imagine the challenge of managing version control for software automation. In the upcoming Cloud Application Manager release, we address versioning challenges in a big way.

    Versioning for Deployment Automation

    Versioning in the context of application deployment automation is huge. For one, with a solution like Cloud Application Manager, many people in different DevOps roles across an organization need access to a given deployment configuration. As technologies evolve, so must capabilities of the applications and virtual infrastructure. To keep current environments running and stable while you improve and innovate for the future, you need to collaborate, build on top of existing configuration and at the same time preserve the old and the new.

    Though features like collaboration in Cloud Application Manager help people participate in creating content, versioning is the built-in capability that helps DevOps users and organizations manage changes smoothly and track what changed, who did what. Versioning systematically controls the chaos of ever changing applications as well as the infrastructure on which they deploy.

    Semantics of Versioning

    In...

    Read on...

    More Than One Way to Do Cloud Bursting

    March 10, 2015
    By

    Cloud Bursting

    Cloud bursting is a hot topic in cloud computing today. It’s a model in which an organization uses internal resources to host services critical to its business and, during demand spikes, consumes resources from public clouds on a pay-as-you-go basis.

    Cloud Bursting Use Cases

    Today only a handful of businesses face real cloud bursting challenges because of their special use cases. They run the sort of applications where the burst is high due to compute-intensive processes -- image processing, scientific computing, monthly calculations, and such -- or they need to burst development and test environments. Since the latter typically don’t involve client data, they aren’t subject to strict regulatory compliance, which means you can run them on any infrastructure.

    Support Model

    To support a strong cloud bursting model, several parts must come together:

    • Shared network between the public clouds and the datacenter.
    • Automated and repeatable deployments to launch to the required clouds regardless of platform differences.
    • Single management console to consistently support and maintain all deployment artifacts.
    • Ability to specify the amount, ratio, and priority of cloud resources the applications can consume.
    • Ability to identify the load needs of an application and configure them into the tools that manage scaling.

    Most cloud providers offer the first piece today, either...

    Read on...

    Stay on Top of Scheduled Instances with Notifications

    February 24, 2015
    By

    Scheduler Notification

    I bet you already save on deployment costs with the Instance Scheduler. But did you know you can also keep track of when your scheduled instances expire?

    A couple of months ago, we introduced the Instance Scheduler that lets you set custom shutdown and termination policies in Cloud Application Manager when launching an instance. This type of scheduling is great to test a new deployment configuration or to spin up a test or development environment for a limited timespan.

    Scheduling is an effective way to control and manage resources and costs within an organization. However, there’s still a nagging question: how do you know when the instances you’re working on will shut down or terminate? As a developer or operations engineer, I’d like to be notified some time before the scheduled instances I have access to go offline or get decommissioned -- especially if I don’t control them, so I can ask the instance owner for more time or prioritize work that requires those instances.

    Email Notification Service

    To address this need, Cloud Application Manager added a feature to notify the instance owners or collaborators by email of scheduled instances that are going to expire soon. Each day, around 12 pm UTC (4 am PST), those...

    Read on...

    All the Storage You Need for Large-Scale Deployments

    February 20, 2015
    By

    Automate Storage

    Large-scale deployments are a pain when you think of the many things that can go wrong. That’s why we’re here to ease the pain with deployment automation. I want to focus this post on block storage specifically for EC2 and on how you can set it up in advance to scale automated deployments.

    Amazon Web Services provides block devices called Elastic Block Store (EBS) volumes that range from gigabytes to terabytes in size at a pay-as-you-use cost. This type of storage gives instances far greater storage flexibility. The default volume on an EC2 instance generally assumes the lifespan of the instance, which means the data disappears once the instance does. EBS volumes, on the other hand, can outlive the instance and make the data available for future use. You can take volume snapshots for backup or attach a volume to another instance, for example.
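
    Outside of Cloud Application Manager, the same create-and-attach pattern is a couple of API calls. Here is a quick boto3 sketch -- the region, availability zone, and instance ID are placeholders:

```python
# Create a 100 GiB General Purpose (SSD) EBS volume and attach it to an instance.
# Region, availability zone, and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB
    VolumeType="gp2",         # General Purpose (SSD)
)
volume_id = volume["VolumeId"]

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",   # placeholder instance
    Device="/dev/sdf",
)
print("Attached", volume_id)
```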

    EBS Volume Types

    Amazon offers three EBS types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD).

    • The Magnetic disk is the default volume of an EC2 instance and the lowest cost option if you don’t need high read performance and are okay with sequential I/O. It’s a good option to store log files (if you don’t use message
    ...

    Read on...

    How to Deploy Pain-Free on OS X and Solaris OS Platforms

    February 17, 2015
    By

    Solaris OSX

    OS X and Solaris are two of my favorite platforms for developing native Apple and Node.js applications. Thanks to the deployment support for OS X and Solaris in Cloud Application Manager, today I can easily solve a common problem many developers like me face when coding in these environments.

    Here’s the problem: How do I integrate with Jenkins to automatically build the lifecycle of the application I’m coding so I can test and integrate code into different environments smoothly?

    Deploying on OS X

    When I develop native applications for Apple platforms (OS X or iOS), I need an OS X machine and the Xcode tools to build and test my application. On top of this, I need VM instances to run builds and unit-test the code in my development branch. So a private cloud like vSphere is the best place to do both: it allows me to create and dispose of OS X machine instances in an agile and flexible way.

    The support for OS X in Cloud Application Manager makes things easy for me. I can quickly automate the way applications deploy on an Apple platform and share that process with other developers in the organization. As a result, all our development...

    Read on...

    Learn How to Configure OpenStack Block Storage the Easy Way

    February 6, 2015
    By

    OpenStack Logo

    Earlier this week we talked about OpenStack as a popular choice for deploying to both public and private clouds because of its unified platform. We explained that Cloud Application Manager auto-provisions your workloads predictably no matter where you deploy. Today, we talk about how to auto-configure storage for your OpenStack deployments.

    Storage for OpenStack

    First of all, to configure storage in OpenStack, you need to activate Cinder, the block storage service. A nice thing about automating deployments through Cloud Application Manager is that you don’t have to configure disk storage separately. Along with the rest of the deployment, specify the volume storage with a simple Add button and let Cloud Application Manager do the brunt of the work like clockwork.
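
    For comparison, creating the equivalent Cinder volume directly against the OpenStack API with the openstacksdk library is a one-call affair (the cloud name and size below are placeholders; Cloud Application Manager does this for you when you use the deployment profile):

```python
# Create a 20 GB Cinder (block storage) volume directly via the OpenStack SDK.
# The cloud name refers to an entry in clouds.yaml; all values are placeholders.
import openstack

conn = openstack.connect(cloud="my-openstack")   # placeholder cloud entry

volume = conn.block_storage.create_volume(
    name="app-data",
    size=20,   # GB
)
conn.block_storage.wait_for_status(volume, status="available", wait=300)
print("Volume ready:", volume.id)
```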

    We’ll show how to add volumes when you set up your deployment in your private OpenStack cloud or the publicly managed HP Cloud through the Cloud Application Manager deployment profile. There are two types of volumes you can add: an image or a hard disk volume.

    Hard Volumes

    Click Add in the Additional Volumes section and set the disk size for each volume. Extra volume for an instance means you can store information such as databases or logs. It means you can move data from one...

    Read on...

    Rackspace or HP Cloud? Host on Your Favorite OpenStack Flavor

    February 3, 2015
    By

    Pick Your Flavor

    Today, Rackspace and HP Cloud are popular public clouds based on OpenStack. To run your workloads on OpenStack, find out if Cloud Application Manager is the right DevOps platform.

    OpenStack is an open source infrastructure as a service (IaaS) cloud platform. While providers in the ecosystem like Rackspace and HP Cloud built public clouds, some organizations build their private clouds using OpenStack. All these cloud implementations share the same common OpenStack platform. That’s why those organizations find it easy to manage their deployment workloads in any OpenStack model, be it private, public, or hybrid.

    Auto-Provision and Orchestrate

    By all means, you can implement a private cloud using OpenStack, or go the managed hosting route with Rackspace or HP Cloud. In either case, you expend a lot of valuable time and resources to manually provision servers and set up machines to handle workloads. On top of infrastructure costs, you incur a lot of IT setup overhead.

    Wouldn’t you prefer to offload this work, speed up your deployments, and automate all the remote provisioning and orchestration? As you’ll see below, a few steps are all it takes in Cloud Application Manager to auto-provision and orchestrate application workloads remotely.

    Deploy to Rackspace or HP Cloud in 4 Easy

    ...

    Read on...

    SoftLayer in Cloud Application Manager: Deployments Made Easy

    January 22, 2015
    By

    SoftLayer Logo

    Isn’t freedom a beautiful thing? Think zero lock-in with IaaS and PaaS cloud vendors and cross-cloud workload compatibility. Well, that’s the kind of freedom you experience when you host on the IBM SoftLayer public cloud through Cloud Application Manager.

    Cloud Application Manager supports SoftLayer like many popular clouds. Besides SoftLayer, we automate applications and manage their lifecycle on clouds including AWS, Google Cloud, Azure, VMware vCenter, OpenStack, CloudStack, Rackspace, and HP Cloud.

    Developers Focus on Applications

    As a developer, you shouldn’t have to care about machine resources or infrastructure provisioning. You care about what matters most like getting your development environment set up fast. You care about deploying and testing complex application stacks quickly. You care about running Jenkins builds to deploy reliably and rapidly in staging and production.

    IT Operations Focus on Infrastructure

    On the infrastructure side, as IT operations you care about how many resources developer teams consume, how to provision specific services, and what ready-to-consume services to provide on top of OS platforms.

    Whether you automate through a user interface or do it through API calls, in Cloud Application Manager you provision infrastructure resources through settings that are common to all cloud providers.

    Deploying to SoftLayer

    SoftLayer deployments are no exception. When launching workloads, developers pick...

    Read on...

    Turn IT into a Service Catalog

    January 16, 2015
    By

    Service Catalog

    Unless you live under a rock in the world of cloud computing, you’re probably aware of a growing generational gap between public clouds and enterprise IT.

    Public Clouds Versus Enterprise IT

    Consider the size of the budgets public cloud companies invest in R&D. They are astronomically huge compared to the budget of a typical enterprise IT department. This economy of scale is only going to widen the gap over time. Most enterprises will be unable to match the scale or technical expertise of public clouds in their private datacenters.

    This gap forces innovative developers in enterprises to play maverick. They bypass traditional IT departments and operate outside enterprise control in search of next-generation services and infrastructure. To close this gap, IT should focus not on provisioning infrastructure, but on providing infrastructure and application components as a service that empowers developers.

    IT Service Catalog

    Service catalogs are the wave of the future for IT teams. They set teams up to serve as true internal service providers to their customers, who are mainly developers, QA, and the like.

    A service catalog provides the components to build your app. It is a collection of services that organizes the available technology resources within an organization. Just as you combine Legos to...

    Read on...

    3 Steps to Launch a RabbitMQ Docker Container

    January 12, 2015
    By

    RabbitMQ and Docker

    Do you want to deploy RabbitMQ as micro services in multiple virtual machines? Do you want the freedom to launch a RabbitMQ Docker container in any cloud, any infrastructure? Then my friend, you’re in the right place. Read on to find out how.

    In Cloud Application Manager, you can deploy a RabbitMQ Docker container out of the box.

    RabbitMQ, as you may already know, is an open source message queuing system based on the AMQP standard. RabbitMQ allows application components and services to talk to each other over a variety of protocols. You may also be aware that you can configure RabbitMQ as a cluster or as a federation to queue and route messages.

    In this post, I’m going to show you how I define and deploy a RabbitMQ Docker container using Cloud Application Manager.

    Defining a RabbitMQ Docker Container

  • On the Boxes page in Cloud Application Manager, I select a pre-defined Docker RabbitMQ box. To get this box, contact me.

    Select RabbitMQ Docker Box

  • In the Docker box, I edit the Dockerfile to customize it. To deploy the RabbitMQ container, I configure the upstart command in the Dockerfile so that RabbitMQ does not start immediately after installation, add instructions to install RabbitMQ, and define an endpoint that starts the RabbitMQ server. (A minimal sketch of running the resulting container appears after this list.)

  • ...
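    For readers who want to poke at the pieces outside of Cloud Application Manager, here is a minimal sketch of launching a RabbitMQ container with the Docker SDK for Python, using the public rabbitmq image rather than our pre-defined box; the container name and port mappings are illustrative.

```python
import docker

# Talk to the local Docker daemon.
client = docker.from_env()

# Run the official RabbitMQ image (management UI included) in the background,
# publishing the AMQP and management ports to the host.
container = client.containers.run(
    "rabbitmq:3-management",
    name="rabbitmq",
    ports={"5672/tcp": 5672, "15672/tcp": 15672},
    detach=True,
)

print(container.name, container.status)
```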

    Read on...

    Bird’s Eye View of IT as a Service

    January 7, 2015
    By

    ITaaS Diagram

    In working closely with large enterprises for several years, we consistently hear businesses talk about the IT capabilities they need to achieve critical, strategic goals. These large enterprises commonly span many verticals, like technology, online media, finance, and banking.

    At the heart of their business goals lie technology hurdles they need to overcome to scale as enterprises of the next decade.

    Visualizing IT as a Service

    So we captured in this mind map the high-level business goals that require a broad set of IT services, spanning infrastructure, applications, management, integration, service providers, and architecture.

    ITaaS Thumb

    If you look to the left, you see needs centered around applications. These detail all the services required to provision, deploy, and manage applications, including application architecture models for designing automated deployments and serving the right infrastructure resources.

    On the right, you see needs that revolve around compliance, reporting, and integrating with various service providers to supply virtual infrastructure and a slew of related services to keep the infrastructure running smoothly and reliably.

    We find that this visual captures the IT-as-a-service industry as a whole, and the exercise helps us understand where customers' needs are and where in that spectrum we assist them.

    See if you can pinpoint the areas where...

    Read on...

    Create your ElasticSearch Cluster in Four Steps

    December 12, 2014
    By

    ElasticSearch

    In this blog post, we show how to install an ElasticSearch Cluster on any cloud in four easy steps. If you’re new to ElasticSearch, it is a powerful open source search and analytics engine that makes it easy to explore, query, and manipulate big data. It’s built on top of Apache Lucene, a Java search engine library. To process large amounts of data, it helps to configure ElasticSearch as a cluster on which different ElasticSearch nodes process data in parallel.

    To install an ElasticSearch cluster on a Linux distribution using Cloud Application Manager, you just need our ElasticSearch box where you can adjust configuration parameters to suit your scenario.

    Get to Know Some ElasticSearch Cluster Concepts

    First, let’s go over some of the concepts in our cluster configuration:

    • Shards are the Lucene instances that store your documents (data); the original copies are known as primary shards.
    • Number of replicas is the number of additional copies kept of each primary shard.
    • Data nodes are the nodes across which ElasticSearch distributes primary and replica shards.
    • The master node is in charge of managing cluster operations. Although not recommended, you can configure a master node to also act as a data node. (A short configuration sketch follows this list.)
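    To make those concepts concrete, here is a minimal sketch (not the ElasticSearch box itself) that creates an index with explicit shard and replica counts using the official elasticsearch Python client; the endpoint, index name, and counts are illustrative, and the body-style call assumes an older client version.

```python
from elasticsearch import Elasticsearch

# Point the client at any node in the cluster; it will discover the rest.
es = Elasticsearch("http://localhost:9200")

# Create an index with 3 primary shards and 1 replica of each,
# so the data nodes can spread primaries and replicas across the cluster.
es.indices.create(
    index="logs",
    body={"settings": {"number_of_shards": 3, "number_of_replicas": 1}},
)

# Quick sanity check on cluster state (green means all shards are allocated).
print(es.cluster.health()["status"])
```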

    Deploy an ElasticSearch Cluster in Cloud Application Manager

  • Click New Instance and select

  • ...

    Read on...

    Control Resources and Costs with Instance Scheduler

    December 2, 2014
    By

    Instance Scheduler

    In infrastructure-centric organizations, the IT operations team centrally manages infrastructure needs to efficiently utilize resources and streamline costs. To that end, IT defines policies and plans resources based on expected workload, budget, and general business requirements. Developers depend on IT for new machines to deploy applications and services, and again to dispose of or recycle them after use. From a developer’s point of view, this dependency slows down their development cycle.

    But what happens when IT is delivered and consumed as a service, and when developers can deploy their applications on the organization’s infrastructure at will, at the click of a button?

    IT as a Service with Cloud Application Manager

    In application-centric organizations, IT operations remove as much friction as possible. Infrastructure is provisioned from the top down, based on the resources needed to serve the application.

    IT as a Service empowers both the IT and development teams. It empowers IT because they no longer focus on low-level infrastructure constraints, but on high-value activities around applications. It empowers developers because they can shape their ideas into production reality faster, focusing 100% on writing code rather than tinkering with the underlying infrastructure.

    Developers now have access to infrastructure on demand and consume resources without process...

    Read on...

    Announcing Cloud Reports – A Simple and Easy Way to Track and Optimize Your Cloud Spend

    November 11, 2014
    By

    Today, Cloud Application Manager announces the launch of Cloud Reports to help companies manage and control their cloud footprint and resulting costs across cloud service providers. This new offering provides IT managers with comprehensive tools they need to understand, track and optimize spend and usage across all their cloud resources on AWS, Google Cloud, and Microsoft Azure.

    We've found that more than 50 percent of our customers today use more than one cloud provider. As more companies disperse their resources across multiple cloud services, it's increasingly difficult for them to fully understand the return on those investments.

    Enter Cloud Application Manager Cloud Reports, a new product that aims to help companies manage their spend and monitor usage across multiple cloud platforms all in one place. Beyond spend-tracking or reporting, it provides insights into how teams or applications are using resources so companies can optimize costs for each provider and deploy resources where they make the most sense.

    With Cloud Application Manager Cloud Reports companies can:

    Track spending on cloud providers such as AWS, Google Cloud, and Microsoft Azure

    In addition to tracking overall spend on each cloud provider, customers can also track spend for different provider accounts, services, instance sizes, and datacenter locations.

    Spending Tracker

    Compare...

    Read on...

    Connect Your Private Network to Google Cloud

    October 29, 2014
    By

    Google Cloud

    Google is a relatively new player in cloud computing. They were one of the first clouds to provide a PaaS solution with Google App Engine, but one of the last to provide IaaS options.

    One of the biggest differentiators and draws for Google Cloud is Google Compute Engine's global software defined networking (SDN) capability. Unlike other providers, every network in Google Compute Engine (GCE) is a global network, that is, a network that spans every region and availability zone. Further, latency and throughput between regions almost defy the laws of physics. These network capabilities dramatically simplify deploying applications that run on a global scale.

    If you want to take advantage of one of the fastest growing IaaS technologies in the market, how do you go about moving all your data and infrastructure to GCE? In this post, we’ll show you how to connect your private network with GCE using Cloud Application Manager.

    For this purpose, we created an IPSEC box that creates a tunnel between the two networks using Linux and Openswan.

    IPSEC Tunnel

    The first instance of this box is deployed in the GCE network. It is assigned an ephemeral IP and supports IP Forwarding:

    New Instance

    The box is deployed without bindings, which sets it...

    Read on...

    Announcing Our Partnership with Couchbase

    October 23, 2014
    By

    Couchbase Logo

    A bit of awesome sauce for today — we are thrilled to announce our partnership with Couchbase, the only enterprise grade NoSQL database in the market.

    Together with Couchbase, we have created Couchbase Server and Couchbase Sync Gateway Boxes for Cloud Application Manager, allowing our users to quickly deploy and run web and mobile applications powered by Couchbase technology in the cloud.

    The new Couchbase Boxes are bringing unprecedented opportunities for developers to build complex NoSQL applications in-house. The Boxes make it easy to build and deploy complex multi-tiered applications powered by Couchbase Server and Couchbase Mobile.

    The best part? Those boxes are already right there for you on Cloud Application Manager. You just need to log on to Cloud Application Manager or sign up, if you don’t have an account already.

    Benefits of Couchbase Boxes on Cloud Application Manager:

    • Stick to best practices: Have you tried deploying Couchbase Server or Couchbase Sync Gateway yourself? By using the Couchbase Boxes, you can be sure you are following best practices when deploying Couchbase technology.
    • Scale with ease: You and your IT teams can now quickly launch new Couchbase Server and Couchbase Sync Gateway instances and respond to rapidly changing business requirements or increasing demand. Use
    ...

    Read on...

    Surprised by Your Cloud Bill Every Month? Try Cross-Cloud Tagging

    October 23, 2014
    By

    Cloud Services

    If you have recently received the bill from your cloud provider, you may still be wondering how you managed to spend such a significant amount of money on cloud resources. And if you're using multiple cloud providers (like we do at Cloud Application Manager), the problem is even more complicated. Clearly, you need much better insight into where your expenses are going. Resource tagging is a crucial technique for improving cost efficiency and reducing your infrastructure bill.

    Without tags, you won’t be able to confidently know what each of your instances is doing, which instances have been provisioned and which you can power off. This technique is even more important when using autoscaling systems that automatically add and release resources based on the workload of your application.

    Tagging your cloud resources allows you to add metadata to your resources in the form of key-value pairs. Your tag key should be a meaningful value that represents how you want to report on a resource, while the tag value should give you insight into what you want to report on. These tag keys will appear as columns in your cloud providers' reports, so they should be descriptive enough to help you break...
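    As a concrete illustration on one provider, here is a minimal boto3 sketch that applies key-value tags to EC2 instances and then filters instances by tag; the instance IDs and tag values are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach meaningful key-value tags to a couple of instances.
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "i-0fedcba9876543210"],
    Tags=[
        {"Key": "team", "Value": "web"},
        {"Key": "environment", "Value": "production"},
        {"Key": "cost-center", "Value": "marketing"},
    ],
)

# Later, find everything a given team is running before deciding what to power off.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:team", "Values": ["web"]}]
)["Reservations"]
print(len(reservations))
```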

    Read on...

    Three Reasons Why You Should Use Cloud Application Manager in Your CloudStack Environment

    October 16, 2014
    By

    CloudStack

    Cloud Application Manager can help you manage application delivery on a number of different clouds. Today, let’s focus on CloudStack, the stack with the $$ behind it.

    CloudStack vs. OpenStack

    OpenStack gets a lot more attention than CloudStack. The main reason is the strategy behind CloudStack: its parent company, Citrix, has executed a clear roadmap under the umbrella of the Apache Foundation. As a result, compared to OpenStack, its community is much smaller and its vision more targeted.

    The results are more focused and, to some extent, more practical to implement. We have seen many successful implementations of CloudStack that leverage Cloud Application Manager to fully cover all aspects of a modern IT Organization.

    Why Use Cloud Application Manager to Manage a CloudStack Deployment

  • Give your developers a ready-to-use, fully configured application service catalog.

  • Streamline the application delivery process by bringing real-time DevOps collaboration to your organization.

  • Pave the way for a future where your applications not only work with CloudStack, but with any cloud provider you choose.

    To get started, register your CloudStack in Cloud Application Manager. You can do this with either the SaaS version or by installing the Cloud Application Manager Virtual Appliance in your own environment.

    Once registered, CloudStack will show...

    Read on...

    Calling All AWS CloudFormation Power Users

    October 13, 2014
    By

    AWS CloudFormation is a very useful deployment mechanism provided by AWS and fully supported by Cloud Application Manager. We’ve recently made some changes to our product and one of the results is a very interesting AWS CloudFormation use case – splitting up gigantic and monolithic AWS CloudFormation templates into smaller, more manageable templates.

    First, A Little Background

    AWS CloudFormation is essentially a way to programmatically define and provision cloud infrastructure, via a JSON template. CloudFormation templates can be used for tasks such as setting up VPCs, creating autoscaling groups and launching EC2 instances into different network configurations.
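    If you haven't used it before, here is a minimal, hypothetical sketch of driving CloudFormation programmatically with boto3; a real template would of course define far more than a single S3 bucket.

```python
import json
import boto3

# A deliberately tiny template: one S3 bucket, nothing else.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch the template as a stack and wait for creation to finish.
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
```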

    Several enterprises are using CloudFormation templates to deploy increasingly complex infrastructure configurations. As a result, power users are rapidly discovering that their templates have become large, monolithic and extremely difficult to maintain.

    Why Not Just Split Them Up?

    In theory, splitting up a monolithic CloudFormation template into a set of smaller, manageable modules seems straightforward, but there are a few challenges.

    • Since these modules contribute to a larger, overarching infrastructure deployment, users need a way to specify dependencies and ensure that the modules are deployed in the right order.
    • These modules need a way to communicate with each other and share information such that one can take over where the
    ...

    Read on...

    Can’t Survive Without AWS S3, DynamoDB, and RDS? See How You Can Use Them in Cloud Application Manager

    September 25, 2014
    By

    Configuration

    In my last blog post, I talked about how we support AWS EC2, EBS, Elastic IP Address, and ELB. In this post, I’ll cover S3, DynamoDB, and RDS.

    S3

    S3, or Simple Storage Service, is Amazon Web Services' highly durable data store; it can serve as your primary data store and can be accessed from anywhere. In Cloud Application Manager, you gain access to S3 through the S3 Box, which lets users select a deployment region and access ports. The S3 Box provisions and returns endpoints which, together with the port, other applications can use to read and write data in S3.

    S3 is one of Amazon’s most popular services. Cloud Application Manager’s own appliance OVF is stored in an S3 bucket.
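    For comparison with the S3 Box, the equivalent raw calls with boto3 look roughly like this; the bucket name, region, and object key are placeholders (bucket names must be globally unique in practice).

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Create a bucket in the chosen region.
s3.create_bucket(
    Bucket="my-app-data-bucket",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Any application holding the right credentials and endpoint can now read/write.
s3.put_object(Bucket="my-app-data-bucket", Key="hello.txt", Body=b"hello from S3")
obj = s3.get_object(Bucket="my-app-data-bucket", Key="hello.txt")
print(obj["Body"].read())
```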

    DynamoDB

    DynamoDB is a NoSQL database that is deployed automatically in AWS. It is cost-effective and reliable, with single-digit-millisecond latency, making it a great fit for gaming, ad tech, mobile, and many other applications.

    Like S3, Cloud Application Manager supports a DynamoDB Box that you can add to your application stack. When ready to deploy, you can select parameters such as the region, port, and read/write capacity in the Deployment Profile.
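    For comparison, creating a table directly against the DynamoDB API with boto3, including the read/write capacity you would otherwise pick in a Deployment Profile, looks roughly like this; the table name, key schema, and capacity numbers are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Create a table with explicit read/write capacity.
dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[{"AttributeName": "PlayerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "PlayerId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait until the table is ready before writing to it.
dynamodb.get_waiter("table_exists").wait(TableName="GameScores")
```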

    New Instance

    Cloud Application Manager also enables...

    Read on...

    Cloud Application Manager Support for AWS EC2, EBS, ELB, and IP Addresses

    September 17, 2014
    By

    Cloud Application Manager supports delivery of applications on a number of private and public clouds including AWS, GCE, Azure, VMware, CloudStack, OpenStack, and HP Cloud. Just supporting compute, however, is not unique. Several players in the market support compute. What's great about Cloud Application Manager is that it also gives you access to a large number of cloud provider-specific services such as EBS, Route 53, SQL Services, App Engine, etc.

    Why Cloud Provider-Specific Services?

    Every cloud offers a unique set of services that are targeted to certain use cases and integrate really well with other services provided by that particular cloud provider. It would be a shame if you couldn’t access these innovative services in combination with cloud and application management platforms.

    Some great examples and use cases of these integrations include:

  • AWS EC2 integration with EBS, Elastic IP Address, and ELB

  • AWS Route 53 support for adding new domains that can be bought on AWS

  • Azure integration of Visual Studio, Cloud Services, Websites, and SQL Services

  • GCE integration of Compute Engine, App Engine, and Big Query for Google Projects

    Support for cloud provider-specific services poses a risk as well. Cloud providers (read: AWS) are constantly adding new services. How do we support them while still focusing on...

    Read on...

    Introducing the Next Version of the Admin Console

    September 11, 2014
    By

    Over the past few months, we have added great features such as Expanded Collaboration and the Lifecycle Editor to make it easier than ever for Developers and Operations to work together in small teams, on large projects, and across large organizations. As Cloud Application Manager empowers bigger and bigger teams at larger and larger companies to freely deploy their business-critical applications and consume infrastructure, IT teams increasingly want visibility, control, and insight into how Cloud Application Manager is being used within their organization.

    That is why, today, we are launching a brand new version of the Cloud Application Manager Administration Console. It comes loaded with improvements and new features that enable IT to set up, manage, and support their users and organization on Cloud Application Manager.

    Customize and Integrate

    We know it’s important for our enterprise users to reinforce their brand with their employees and to integrate with their existing tools. With the Cloud Application Manager admin console, you will be able to set up your organization in a way that makes sense for your company. You can pick a logo and custom domain, enable signup and login options via email, Google/GitHub authentication, or integrate with your LDAP.

    You can...

    Read on...

    Announcing the Cloud Application Manager Virtual Appliance

    September 2, 2014
    By

    Exciting news! We announced the release of our Virtual Appliance free trial at VMworld San Francisco this past week and are kicking things off with an awesome giveaway.

    Download our free Virtual Appliance trial (10 min setup) and you'll be entered to win one of the new Apple products being released next week (hint: our money is on an iPhone 6).

    To enter, please complete the following steps by Friday, September 12th:

    • Follow the steps to download the appliance.
    • Deploy the Virtual Appliance in your vSphere environment.
    • Set up the Virtual Appliance and create an account!

    Free Trial

    Today, I am extremely excited to announce the general availability of the Cloud Application Manager Virtual Appliance. Cloud Application Manager built a Virtual Appliance to make it easier for you to rapidly deploy applications and provision your private cloud resources without risking security.

    Our engineering team has been working really hard during the weeks leading up to VMworld to create a virtual appliance that is easy to install, easy to upgrade, and gives you access to the full Cloud Application Manager feature set.

    Why the Appliance?

    In short, it’s about giving our customers as much control as they want.

    • You have full control over your data with everything hosted in your private datacenter
    • You control
    ...

    Read on...

    SafeHaven Run Book Automation – A Small Change with a Big Impact

    August 30, 2014
    By Scott Good, Senior Product Manager

    In an earlier post, we discussed the SafeHaven for CenturyLink Cloud Disaster Recovery-as-a-Service (DRaaS) solution and the benefits it offers IT Administrators. As we noted, failing over a multi-tiered application correctly when executing a disaster recovery plan is critical. However, it's not always as easy as it seems. In order for a multi-tiered application to recover correctly, the VMs upon which it depends must start up according to a prescribed "recovery plan." For instance, it is usually necessary for the database to be running before the application servers boot and, similarly, necessary that application services be running before the webservers boot.

    SafeHaven for CenturyLink Cloud's latest feature enhancement, Run Book Automation, allows end users to configure custom shut-down and bring-up plans for each group of IT systems that receives disaster protection. For example, delivering web services often involves a set of interdependent workloads that need to start in a specific order, with time intervals between applications taken into account.

    For each group of IT systems within the CenturyLink Cloud, users can pre-configure and test recovery plans in the SafeHaven Console that identify bring-up and shut-down order, actions, delays, and any custom scripts to be executed as part of the recovery operation. ...
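    To illustrate the idea (and only the idea; this is not the SafeHaven Console or its API), a bring-up plan is essentially an ordered list of tiers with delays between them, something like this toy Python sketch:

```python
import time

# A toy model of an ordered bring-up plan: each step names a tier and a delay
# to wait before starting the next one. Tier names and delays are hypothetical.
bring_up_plan = [
    {"tier": "database", "delay_seconds": 120},
    {"tier": "application-servers", "delay_seconds": 60},
    {"tier": "web-servers", "delay_seconds": 0},
]

def start_tier(tier_name):
    # Placeholder for whatever actually powers on the VMs in that tier.
    print(f"starting {tier_name} ...")

for step in bring_up_plan:
    start_tier(step["tier"])
    time.sleep(step["delay_seconds"])  # honor the interval before the next tier
```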

    Read on...

    Using AWS CloudFormation in Cloud Application Manager

    August 28, 2014
    By

    CloudFormation

    Recently, we have been experimenting with ways to support all the APIs and services that cloud providers like Amazon offer, such as CloudFormation. As I mentioned before in a blog post, cloud providers bring innovation to market in the form of infrastructure APIs.

    CloudFormation Service Box

    Today, Cloud Application Manager announces support for Amazon CloudFormation with the CloudFormation service box, which lets you use a template to quickly launch any Amazon service as a single stack or unit. With this new box service that has API support, you can consume all of Amazon’s services and capabilities. As IT operations, you can configure infrastructure that developers can directly consume.

    Cloud Application Manager provides tools that enable your organization to exploit the full potential of Cloud computing. With CloudFormation support, IT operations can collaborate with developers at an unprecedented level of flexibility within the same product.

    At Cloud Application Manager, we believe that being agile – being able to adapt to changes and adopt innovation – is the new tenet of IT operations stability. We are committed to building the tools that resolve the complexities of software and cloud operations, be they straight or curved.

    Features

    Integrated CloudFormation Application Deployment

    With Cloud Application Manager, the instantiation of CloudFormation and the deployment...

    Read on...

    Configure Virtual Machines Faster with Cloud Application Manager Admin Boxes

    July 30, 2014
    By

    Have you ever asked a colleague for a favor, like provisioning or configuring a virtual machine, only to get the response: “Have you submitted a ticket for that?” It is no secret that in the world of development and IT operations, the traditional protocol for getting things done is through submitting tickets.

    In many cases, especially at large enterprises, developers often wait weeks, or even months, for the IT department to provision and configure virtual machines or instances. Cloud Application Manager's goal is to make that experience disappear forever! We have developed a solution that helps enterprises reduce the amount of repetitive, manual work involved in provisioning a virtual machine.

    It is easy to blame the IT department for the lag, but the truth is, they have the very serious responsibility of allocating resources and then properly configuring hundreds or even thousands of virtual machines. What causes a delay in provisioning and configuring a virtual machine that developers can use?

    One is configuring the machine to comply with the company’s standards and policies. Another is the manual steps that have to be executed on that machine for it to be prepared for use. For instance, IT admins will...

    Read on...

    AWS Auto Scaling and Load Balancing Made Easy

    July 28, 2014
    By

    AWS Load Balancing

    Take advantage of automatically scaling and load balancing instances when you deploy applications using Cloud Application Manager in AWS EC2 or VPCs. Load balancing evenly distributes load to application instances in all availability zones in a region while auto scaling makes sure instances scale up or down depending on the load.

    Why Load Balance and Auto Scale at the Same Time?

    Paired together, auto scaling and load balancing provide useful benefits. Say you want to smoothly handle traffic surges to your website. When load increases, you want the website infrastructure to have enough capacity to serve the traffic. During bouts of low activity, naturally you want to reduce capacity.

    With load balancing alone, you'll have to know ahead of time how much capacity you need, so you can keep additional instances running and registered with the load balancer to serve higher loads. Or you could stop worrying about it and auto scale based on, say, CPU usage, so that instances increase or decrease dynamically with the load. This should give you a good idea of why it makes sense to have both.
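    For a sense of the raw AWS plumbing involved, here is a boto3 sketch that creates an Auto Scaling group attached to a classic load balancer and adds a CPU-based scaling policy; the AMI ID, names, and thresholds are hypothetical, and the target-tracking policy type postdates this post.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration for the web tier (AMI ID is a placeholder).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.small",
)

# Group that keeps 2-6 instances registered with a classic load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=6,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["web-lb"],
)

# Scale on average CPU: add instances above ~60% utilization, remove below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```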

    AWS Auto Scaling

    How to Easily Set Them Up in Cloud Application Manager?

    If you were to set this up directly in AWS,...

    Read on...

    Adding Google Compute Engine Provider In Cloud Application Manager

    July 18, 2014
    By

    Nowadays, a majority of cloud service providers offer an API that allows users to interact with their infrastructure to create or delete resources such as instances, volumes, and images. To use these APIs, users first have to authenticate using mechanisms based on key-password pairs.

    However, these mechanisms are quite cumbersome, as users often have to search for their credentials on cloud provider websites or in their file systems. Moreover, these key-password pairs are long, unwieldy strings of alphanumeric characters, which makes them difficult or impossible to remember. Even though the use of these key pairs is justified for security reasons, it clearly hurts the user experience of accessing these APIs. In Cloud Application Manager, as in many other cloud platforms, users have to specify their credentials in order to interact with cloud vendors such as AWS, Google Compute Engine (GCE), Microsoft Azure, VMware, OpenStack, or CloudStack, among others.

    Our philosophy at Cloud Application Manager is to alleviate and minimize the tedious management operations that affect the user experience, as long as doing so doesn't present security issues. In what follows, we will focus on Google Compute Engine's API, which supports...

    Read on...

    Securely Connect to Your AWS Cloud Resources

    July 10, 2014
    By

    When using Cloud Application Manager, you bring your own cloud. To deliver the absolute best experience of deploying applications on any cloud, we are working very closely with all the cloud providers that we support – Google Cloud Platform, Amazon Web Services, Microsoft Azure, OpenStack, CloudStack, and VMware. One of the topics that often comes up is security. Today, we’re adding enhanced security for our AWS support.

    Our friends at Amazon have built comprehensive Identity and Access Management (AWS IAM) features, which enable enterprises to grant and control secure access to specific AWS resources. For instance, with AWS IAM, cloud administrators can set up password policies for user groups, delegate user and application rights with roles instead of sharing credentials, and even enable multi-factor authentication for more privileged users. AWS IAM helps cloud administrators narrow down user rights and grant the least privileges needed for users and applications.
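    As a small, hypothetical example of the least-privilege idea, here is a boto3 sketch that creates an IAM policy allowing only a narrow set of EC2 actions; the policy name and action list are illustrative.

```python
import json
import boto3

iam = boto3.client("iam")

# A least-privilege policy that only allows describing and starting/stopping
# EC2 instances.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*", "ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="deployer-ec2-limited",
    PolicyDocument=json.dumps(policy_document),
)
```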

    At Cloud Application Manager we are putting a lot of emphasis on security and hence we are proud to take advantage of the AWS IAM features. It is essential for us that we always comply with the industry standards and best practices of security and risk management. Starting July 11th,...

    Read on...

    Deploying a MongoDB Cluster with Cloud Application Manager

    June 20, 2014
    By

    With Cloud Application Manager, you can easily deploy a self-replicating MongoDB cluster in just a few minutes. In order to accommodate Cloud Application Manager's data needs, we rely on MongoDB clusters that run on two public clouds and one private data center.

    This way we can provide redundancy, high availability and excellent read and write performance around the world. Using Cloud Application Manager, and our concept of Boxes, which are application or infrastructure components made available as a service, we can consistently deploy a MongoDB cluster in just a matter of minutes on any of our Cloud Providers.

    To get started deploying MongoDB clusters on Cloud Application Manager, sign up today for our free account! If you’re interested in other resources on MongoDB, check out how you can easily use Splunk to monitor MongoDB using Cloud Application Manager.

    The Basics

    What is MongoDB? MongoDB is an open-source document store database.

    Why would you want to cluster MongoDB? To provide redundancy and high-availability for production deployments.

    How We Use MongoDB

    In our case, MongoDB uses a replica set model. A replica set is a group of MongoDB instances that host the same data set. One instance, the primary, receives all write operations. All other instances, the secondaries,...
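    Once a replica set like this is running, client code connects to the set rather than to a single server. Here is a minimal pymongo sketch, with hypothetical hostnames and set name:

```python
from pymongo import MongoClient

# Connect to the replica set by listing a few seed nodes; the driver discovers
# the current primary and routes writes to it automatically.
client = MongoClient(
    "mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017",
    replicaSet="rs0",
    readPreference="secondaryPreferred",  # allow reads from secondaries
)

db = client.myapp
db.events.insert_one({"type": "signup", "user": "alice"})  # goes to the primary
print(db.events.count_documents({"type": "signup"}))       # may read a secondary
```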

    Read on...

    Mind the Gap: Innovating in the Cloud with APIs

    June 16, 2014
    By

    If you've ever traveled around London on the Underground, you've likely heard the station speakers announce "mind the gap." They're warning you of a gap between the train and the platform. But why build a curved train platform in the first place? Were the engineers not aware of its dangers?

    Mind the Gap

    Bridging Decades of Software Gap

    Now, London is an old city. Generations of engineers and architects have added their vision to the city’s foundation. Our software industry is not as old, but in comparison has seen such fast paced innovation and reinvention that it has generated decades of software on which our modern civilization is built. As a result, the only way to bridge the gap between decades of generated software is to build the equivalent of a curved station.

    It's hard to look both forward and back at the same time. For years, as they combined old with new technology – ranging from hardware to application runtimes – IT tried to stabilize operations with very little room for error. That stabilizing process has been arduous and expensive.

    On the other hand, the software development process is marked by continual change. Most organizations reconciled the operations and development worlds by introducing release cycles measured in years, even naming...

    Read on...

    Multi-Tier Applications Done Right

    June 11, 2014
    By

    One of the biggest problems facing software engineers since the dawn of the multi-tier application is, well, how to make it multi-tier.

    It’s more than just having several supporting applications – it is about connecting the layers correctly and allowing them to communicate with each other. All this is to create a scalable and responsive deployment that can be easily updated and adapted to changing business needs.

    It is about ensuring that your infrastructure can co-ordinate the order in which your application tiers are spun up, even when the apps themselves have not been designed to perform these critical dependency checks.

    What Are You Looking For?

    Intelligently handling your solution’s dependencies is an inherent problem in multi-tier deployments – at whatever scale you are operating. For instance,

    As a developer working on a project:

    “I want to ensure that my database is up and running before my web-server is deployed.”

    As the CTO of a rapidly growing startup:

    “I want to bootstrap on basic AWS services (such as managed cache, load balancing, and managed databases), but as the product evolves, I want to give myself the freedom to evolve the services I connect to and consume – experiencing as little downtime as...

    Read on...

    Cloud Application Manager and Infoblox – Say that Five Times Fast!

    May 12, 2014
    By

    We at Cloud Application Manager are really excited about our partnership with Infoblox to integrate “Network Control” into your process for developing and deploying applications in a cloud environment. To align with the Infoblox press release today, I wanted to provide a little more detail on how Infoblox and Cloud Application Manager work together.

    First let’s define the partnership at a high level, and from a conceptual point of view. Cloud Application Manager is a DevOps Platform that enables IT operations to deliver IT as a Service and also provides a collaboration mechanism for operations and developers to define and deploy applications in a modular process across any cloud environment – private, public, and hybrid. Infoblox, on the other hand, provides a powerful solution to centralize and automate network provisioning and control. So together, Cloud Application Manager and Infoblox ensure that when you are developing, orchestrating, and deploying your applications in the cloud, everything – including the network – will work, automatically.

    OK, I am being told that I should probably provide some more detail…

    Cloud Application Manager uses webhooks to provide a high level of integration to Infoblox. A quick example of how Cloud Application Manager uses webhooks to integrate with Infoblox...

    Read on...

    Let’s Start with Boxes

    April 30, 2014
    By

    What’s a Box? Is it Like a Container?

    I joined Cloud Application Manager in March and this was one of my biggest questions. So what better topic to kick off my blogging career than what a Box is…

    Think of a Box as a set of instructions, a DNA, or a blueprint that tells your application components where to go and what to do.

    The formal definition: A Box is a reusable, shareable, and portable layer of an application architecture. To create a multi-tier application architecture, you simply stack these Boxes.

    Here’s some examples of Boxes and what they do:

    • A Java Box contains the necessary files/scripts to install Java onto a generic Linux image.
    • A MongoDB Box makes your database portable and modular. You can also add other variables, like database permissions, to the Box.
    • An NGINX Box allows you to encapsulate your HTTP web server configurations and settings, making them reusable for more than one app.
    • A Chef Solo Box deploys Chef Solo on your instance and lets you run a Chef cookbook.
    • A Git Box gives your instance an integration with your source code repository, which can be used for continuous integration, for example.

    So really a Box can be an OS layer, an app...

    Read on...

    Cloud Application Manager Supports AWS Elastic Load Balancer

    April 23, 2014
    By

    Today, we officially pushed to production our newest supported cloud service: AWS Elastic Load Balancer.

    The AWS Elastic Load Balancer, which automatically distributes incoming application traffic across the right set of EC2 instances, plays a vital role in maintaining business continuity. Enterprises that often experience sudden surges in traffic, such as in media and marketing, rely on AWS Elastic Load Balancing to ensure greater levels of application fault tolerance.

    With Cloud Application Manager, users compose their applications by stacking together Boxes. At the time of deploying to a cloud provider such as AWS, the user can select from a set of provider-specific services that enhance the deployment.

    Users now have the option to add AWS Elastic Load Balancer capabilities at the time of deploying their Box. We support the load balancing of applications using HTTP, HTTPS, TCP and SSL protocols and provide the ability to specify the certificates necessary for secure protocols. In addition to creating brand new load balancers at the time of deployment, we also support the reuse of existing load balancers that are associated with the user’s AWS account. This allows businesses to repurpose their existing infrastructure configurations.
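    For reference, the underlying AWS calls that create a classic load balancer with an HTTP listener and register instances with it look roughly like this boto3 sketch; the names, zones, and instance ID are hypothetical.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a classic load balancer with an HTTP listener.
elb.create_load_balancer(
    LoadBalancerName="web-lb",
    Listeners=[
        {
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }
    ],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Register the instances that should receive traffic.
elb.register_instances_with_load_balancer(
    LoadBalancerName="web-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```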

    With the addition of Elastic Load Balancer, we have expanded our current list of...

    Read on...

    Someone’s Infrastructure is Someone Else’s Service

    January 10, 2014
    By

    These days, when everything is offered up "as a service," we run the risk of turning "as a service" into a meaningless marketing tag. Almost every day, someone out there comes up with a new "as a service" offering, forcing even the government to publish official guidelines for IaaS, PaaS, and SaaS. In this blog, I'd like to explore the true meaning of Infrastructure as a Service (IaaS) and see what it means for us as enterprises and developers.

    What Does IaaS Mean?

    As defined by the government, Infrastructure as a Service allows consumers to provision processing, storage, network, and other fundamental computing resources on demand. To these provisioned resources, consumers can deploy and run arbitrary software, including operating systems and applications. Without having to manage or worry about the underlying cloud infrastructure, consumers control operating systems, storage, and their deployed applications, and even have limited control over networking components like host firewalls.

    Before Amazon’s EC2 offering, many hosting companies like Rackspace had already offered compute resources on demand. But what’s so different about the AWS offering that triggered a whole IT revolution?

    Why’s Amazon IaaS Strategy Successful?

    I believe the difference is rooted in Amazon’s service centered culture as revealed in Steve Yegge’s post and...

    Read on...

    Five Ways To Enhance IT Ops With CenturyLink Cloud

    November 12, 2013
    By Jared Ruckle, Product Marketing, CenturyLink Cloud

    Today at Dell World, Dell announced that CenturyLink has joined the Dell Cloud Partner Program.

    So what does this news mean for Dell customers?  Simple: you now have easy access to a high performance, highly resilient public cloud, with extensive self-service capabilities.  And you will be supported by Dell and the CenturyLink Cloud team every step of the way.

    autoscale

    Here are five key benefits you can take advantage of immediately on this platform:

  • Deploy on virtual servers with resiliency and redundancy.  When it comes to public cloud, you hear the phrase ‘build for failure.’  That’s a critical design pattern for cloud-native applications.  But many of the apps running in your data center today – including many that are candidates to move to the public cloud – are designed with reliable infrastructure in mind.  Dell Cloud On Demand with CenturyLink offers built-in resiliency and redundancy, so many of your legacy apps – homegrown, from boutique ISVs, or Microsoft – will run smoothly ‘out of the box’ on CenturyLink Cloud.
  • Simplify DR and backups.  These tedious activities should be immediately automated.  Savvy IT departments – and those that will thrive in the future as a strategic enabler of the business – are already on this
  • ...

    Read on...

    ITaaS: The Innovative CIO’s Recipe to Curb Shadow IT

    October 31, 2013
    By

    Early this month I was at the CIO Executive Leadership Summit in San Diego, which attracted about 900 people; among them were CIOs from big enterprises, influencers from the press, and portfolio companies sponsored by Intel.

    At this popular networking event, I had a chance to meet several C-level executives from enterprises that turn over upwards of a billion dollars in annual revenue. It was great to connect with these folks because you get to hear of challenges from a whole organization’s perspective thanks to their bird’s eye view.

    Wary of the Shadow

    In our meetings, the CIOs talked about shadow IT problems that affect departments today. Shadow IT happens typically when groups inside your organization quickly start experimenting with or using SaaS and cloud services without waiting around for IT and organizational approval.

    Shadow IT problems spring up in large enterprises when IT departments are slow to respond to pressing business demands. IT is often too busy processing a flood of requests related to production, post production, dev, and test. They’re held back from delivering services fast because of workflow processes and the amount of manual setup involved.

    Let me give an example. Company A, a publicly traded Fortune 500 company, has a policy...

    Read on...

    Four Quick Steps to Deploying a Red Hat Enterprise Linux Server in the CenturyLink Technology Soluti

    October 6, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    We just announced that we are a Red Hat Certified Cloud Provider, which means that you can now confidently deploy Red Hat Enterprise Linux servers in the CenturyLink Cloud. But enough talking about it; let's show you how it's done! In this post, I'll walk through the short steps for getting a Red Hat Enterprise Linux box up and running.

    Step 1: Build the Server in the CenturyLink Cloud

    Our customers have two primary ways to build up server environments in the cloud. First off, servers can be included as part of a blueprint. Our customers leverage blueprints when they want to build reusable templates for single or multi-server environments. You can now include Red Hat Enterprise Linux servers as part of sophisticated blueprints. In addition to using blueprints, customers can build servers through a dedicated “create server” workflow. In this flow, users can provision Red Hat Enterprise Linux servers with any resource combination (CPU+memory+storage) and install any private software packages onto the new server.

    After completing the workflow, users will see their new server come online in a matter of minutes.

     

    Step 2: Update the Server with the Latest Patches

    Recall that CenturyLink Cloud servers are private by default, and...

    Read on...

    SaaS Your App: Providing Support to Customers (Part V)

    September 12, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    Throughout this series of articles, we have looked at the architectural considerations and solution components that are necessary for delivering software as a service (SaaS). We have seen that upfront design is critical when building software that can be successfully used by customers with unique needs. A full-featured cloud service provider like CenturyLink Cloud offers many of the infrastructure automation and management services that make it possible to efficiently deliver such software at scale. In this final article, we take a look at the choices that a SaaS provider needs to consider when deciding upon a support strategy for their customers.

    One overarching consideration that any SaaS provider has to make is whether they plan on providing consumer-oriented, personalized service, or something with a more mass-market flavor. Each approach has merit, but each would result in different implementations of the four suggestions below.

    Standardize wherever possible

    One of the only ways that any software provider, SaaS or otherwise, can sell at scale is to standardize their offering and avoid per-customer customization. While everyone loves the idea of “I want it my way”, that concept quickly falls apart when the software provider is maintaining unique code bases, support instructions, and pricing.

    The most...

    Read on...

    SaaS Your App: Building a Customer Sign-Up and Management Portal (Part IV)

    August 26, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    So far, we have reviewed many considerations for designing, hosting and managing a SaaS application on a cloud platform. One of the hallmarks of cloud computing is the notion of “self service”, and for SaaS providers, it’s the only way that they can efficiently scale to hundreds or thousands of customers. In this article, we will look at how to use the CenturyLink Cloud web API to create a self-service sign up and management portal that lets SaaS customers administer their applications without requiring help from the software provider.

    Solution Overview

    We have been working through a scenario with a fictitious SaaS application that acts as a public face for candidates running for elected office. The application’s developer chose to deploy unique server clusters for each customer in order to isolate their sensitive donor and donation data. The management database, which holds account details and application configuration data, was shared.

    In previous articles, we walked through the steps of creating a blueprint for the server clusters, and now need a way to automatically provision these clusters and enable a self-service management experience.

    Adding New Customers

    The first thing that our developer provided was the ability to add new SaaS customers. For...

    Read on...

    SaaS Your App: Establishing Operational Support (Part III)

    August 12, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    So far in this series of articles, we've looked at how a software provider can deliver their product in a Software-as-a-Service (SaaS) manner using the CenturyLink Cloud Enterprise Cloud Platform. While provisioning and deployment of solutions is an exciting topic, the majority of an application's life will be spent in maintenance mode. In this article, we will look at how a CenturyLink Cloud user can efficiently manage and monitor their SaaS environment.

    Defining Customer Capacity Thresholds

    You may recall from the last article that our fictitious SaaS application is targeted at candidates for political office. In this scenario, the application developer chose to create individual pods of servers for each customer instead of co-locating the customers on the same application or database server.

    Each pod of servers goes into a CenturyLink Cloud Group, which creates a logical segmentation of servers. Each Group can have its own permissions, maintenance schedule, performance monitors, and much more. From the CenturyLink Cloud Control Portal, we can browse the individual server groups and have at-a-glance visibility into the resources being used by each server.

    In an upcoming article we will look at how to allow SaaS customers to increase server resources...

    Read on...

    SaaS Your App: "Blueprinting" Your Application (Part II)

    August 5, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    In the first article of this series, we discussed the major things to consider when looking to create a software-as-a-service version hosted on a cloud platform. One major factor called out in that article was the need for a solid hosting environment. In this article, we will look at how to use the CenturyLink Cloud Enterprise Cloud Platform to package a web application for SaaS provisioning.

    Solution Overview

    To provide some real-life applicability to this article series, let us work with a fictitious, but realistic, use case. Elections to government posts are a regular part of most societies and it’s becoming increasingly critical for candidates to have a robust web presence. Let’s imagine that a web developer successfully built a web site for a local candidate and has realized that this site template could be reused by multiple candidates. Recall from the previous article that an application can be multi-tenant (and thus easier to maintain for multiple customers) in multiple ways:

  • All customers could reside on the same instance of the web application and database.
  • Customers can share a web application but maintain unique databases.
  • Each customer gets their own web application and database instance and MAY share underlying infrastructure.

    There are benefits and risks

  • ...

    Read on...

    SaaS Your App: Building for Software as a Service (Part I)

    July 27, 2012
    By Richard Seroter, Senior Product Manager. Find Richard on Twitter

    It will surprise no one to say that Software-as-a-Service (SaaS) is a hot topic. Really hot. In 2010, Gartner reported that 95% of organizations are planning to grow or maintain their SaaS investment. According to the influential technology blog GigaOm, the valuation of SaaS companies is skyrocketing compared to more traditional enterprise software vendors. While most organizations are increasing their use of SaaS products, some are looking for ways to offer their own software in a SaaS delivery model. What does it mean to “SaaS your app”? This series of articles will walk through the considerations and techniques for creating (or converting) an application for a SaaS offering. In this first article, we will lay the foundation for the series by identifying the critical aspects of SaaS and what you should look for when planning and architecting your software.

    Comparing Application Hosting vs. Software as a Service

    Isn't SaaS just a rebranding of the products and services offered by Application Service Providers (ASPs)? The answer is a resounding NO, but it's easy to become confused when you find so many products with "cloud!" slapped on their label. To be fair, SaaS is an extension of the ideas introduced by ASPs, but there...

    Read on...

    Making it Even Easier for Our Customers to Deploy Hybrid Cloud

    July 16, 2012
    By

    Manual environment deployments can be time-consuming and expensive. Over the years we've felt our customers' frustrations: enterprise IT departments trying to be more agile in the face of business demands; ISVs that need faster time-to-money; Systems Integrators that are bogged down in repetitive work.

    That's why we're thrilled to announce the launch of Environment Engine, a toolset that automates environment and application deployments to the enterprise cloud using "Blueprints." Blueprints contain the DNA of an environment – from host configurations, to firewall and load balancing rules, to any applications running on top. (And yes, before you ask, these tools are completely free to use for all CenturyLink Cloud customers.)

    With Environment Engine, the elusive IT-as-a-Service is no longer a myth. Now IT pros can create best practice-optimized Blueprints that others can use later to deploy complex applications and environments on-demand. Rollout times drop from days or weeks to hours or minutes, and because deployments are automated across the whole technology stack, build-outs are consistent and leave little room for pesky human errors.

    So how exactly does all of this work? Let's get into the nitty-gritty…

    1. Using the Blueprint Designer, a technical expert can create Blueprints that include host and network configurations;...

    Read on...

    When Provisioning Breaks, Blueprints Shine

    July 6, 2012
    By

    There is nothing worse than getting that email or error warning you that provisioning has failed. And inevitably this happens at the very last stages of getting an environment automated.

    You ask yourself: What happened? Why did it fail? And then: Did I build my own logging? Did the person who built the orchestration provide logging (and, if so, where is it)? After asking yourself all of these questions, you realize that the orchestration/provisioning layer only shows you a simple "failure" message, with no details of where it failed or why. Frustrating.

    With Blueprints, even failure is amazing…

    "Blueprints" is CenturyLink Cloud's environment templating engine. It allows you to combine virtual machine templates, infrastructure tasks, script packages, and software packages to create a fully deployable environment – such as building a Microsoft Exchange server environment with just the click of a button.

    The other Blueprint feature that is simply incredible is the debugging detail that you have access to when deploying a Blueprint. This includes access to not only the status of the deployment, but also the tasks being executed. If something fails, you know where and why.

    Deployment Failure

    Here is an example of a blueprint being executed that fails on step...

    Read on...

