Panamax is a Docker GUI that makes deploying multi-container apps as easy as one click. The project recently introduced support for remote deployments.

Panamax Remote Deployment Support

When composing applications in the Panamax UI, all of the containers are executed within a single, local CoreOS instance which is set up as part of the Panamax installation process. The remote deployment feature allows the user to take an application template from their local Panamax installation and send it to a remote machine (or cluster of machines) for execution. Remote deployments are handled by a small, containerized agent which runs in the remote environment and listens for requests from the Panamax client.

For remote deployments, we didn't want to assume that everyone would be using CoreOS/Fleet, so we had to find a way to generalize the communication between the Panamax client and the remote agent while still allowing the agent to communicate with whichever Docker deployment technology was being used in that environment.

Pluggable Adapters

We settled on a pluggable adapter model whereby end-users can choose whichever implementation meets their needs (or even build their own), and it is guaranteed to work with the Panamax remote agent. At the time of publication, there are three adapters available: Fleet, Marathon, and Kubernetes.

The Fleet adapter behaves much like the local Panamax installation does -- containers are scheduled to a cluster of CoreOS machines via Fleet's distributed init system (where "cluster" may simply be a single machine).

The Kubernetes adapter interacts with Google's Kubernetes container orchestrator. Kubernetes is designed to manage the deployment of containerized applications across a cluster of hosts, which makes it a perfect match for Panamax's remote deployment feature.

The remainder of this article describes how the containers in a Panamax application template are translated by the Kubernetes adapter for deployment into a Kubernetes-managed cluster.

Example Application

To see how the Panamax Kubernetes adapter translates a template into Kubernetes-specific artifacts, let's look at an example application template:

name: Wordpress with MySQL
images:
- name: WP
  source: centurylink/wordpress:3.9.1
  environment:
    - variable: DB_PASSWORD
      value: p@ssw0rd
    - variable: DB_NAME
      value: wordpress
  links:
  - service: MYSQL
    alias: DB
  ports:
  - host_port: 8000
    container_port: 80
- name: MYSQL
  source: centurylink/mysql:5.5
  environment:
    - variable: MYSQL_ROOT_PASSWORD
      value: p@ssw0rd
  ports:
  - host_port: 3306
    container_port: 3306

This template describes an application composed of a WordPress container which is linked to a MySQL container. Both containers receive some configuration data via environment variables and establish port mappings.

Essential Kubernetes Concepts

When using Kubernetes to deploy an application, there are three different entities that can be created: Pods, Services, and Replication Controllers. The following sections describe how the application above maps to these concepts.

Pods

A Pod represents a group of containers that are deployed together on a host. Typically, if you are deploying an application into a cluster, you don't necessarily want all your containers to land on the same host unless they need to share a local resource or have some other colocation requirement. So, for our example application, we'll end up with two separate Pods.

The Kubernetes adapter will do a pretty straightforward translation of every container defined in the application template into a Pod. Here is the JSON-encoded Pod description that will be sent to Kubernetes for the WordPress container:

{
  "id": "wp-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "id": "wp-pod",
      "version": "v1beta1",
      "containers": [{
        "name": "wp",
        "image": "centurylink/wordpress:3.9.1",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8000
        }],
        "env": [{
          "name": "DB_PASSWORD",
          "value": "[email protected]"
        }, {
          "name": "DB_NAME",
          "value": "wordpress"
        }]
      }]
    }
  },
  "labels": {
    "name": "wp"
  }
}

Most of the configuration data for the WordPress container in our application template translates nicely into the Pod definition -- the image name, environment variables, and port mappings are copied pretty much as-is. The only thing that appears to be missing is the link data, but we'll get to that in the next section.

Note that the name value from the application template was used in a few different places in the Pod definition: in the ID of the Pod, as the name of the container, and as a label that is applied to the Pod. The Pod ID is what you reference if you want to manage the Pod via the kubecfg command-line tool:

kubecfg get /pods/wp-pod

The label is used by Kubernetes as a way to select groups of Pods and becomes important when we look at Services in the next section.
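
Labels also give you a way to work with groups of Pods from the command line. As a quick sketch (assuming kubecfg's -l label-selector flag), you could list every Pod carrying the name=wp label like this:

kubecfg -l name=wp list pods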

The Pod description created for the MySQL container would look like the following:

{
  "id": "mysql-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "id": "mysql-pod",
      "version": "v1beta1",
      "containers": [{
        "name": "mysql",
        "image": "centurylink/mysql:5.5",
        "ports": [{
          "containerPort": 3306,
          "hostPort": 3306
        }],
        "env": [{
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "[email protected]"
        }]
      }]
    }
  },
  "labels": {
    "name": "mysql"
  }
}
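
The adapter takes care of submitting these definitions to the Kubernetes API, but the same Pods could also be created by hand. As a sketch (the file names here are hypothetical), you would save each JSON description to a file and pass it to kubecfg:

kubecfg -c wp-pod.json create pods
kubecfg -c mysql-pod.json create pods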

Services

When using Docker, it's very common to have a container that needs to communicate with some other container. One of the trickiest parts of setting up an architecture with multiple dependent containers is making sure that each container knows how to find its dependencies. Mostly, this boils down to knowing the IP address and port number that you need to use in order to connect to a dependent service.

When using Docker directly, this sort of service discovery is facilitated by creating links between containers. If you were to use the Docker client to start the two containers from our sample application you might do something like this:

docker run --name MYSQL centurylink/mysql:5.5
docker run --name WP --link MYSQL:DB centurylink/wordpress:3.9.1

The MySQL container is started and assigned the name MYSQL. The WordPress container is started with a link to the MySQL container, and that link is assigned an alias of DB. All the --link flag does here is cause the Docker runtime to inject some environment variables into the WordPress container with connection info (IP address and port number) for the MySQL container.

The environment variables injected into the WP container would look something like this:

DB_PORT=tcp://10.1.2.160:3306
DB_PORT_3306_TCP=tcp://10.1.2.160:3306
DB_PORT_3306_TCP_ADDR=10.1.2.160
DB_PORT_3306_TCP_PORT=3306
DB_PORT_3306_TCP_PROTO=tcp

Each of the variable names is prefixed with the link alias that was specified with the --link flag (the use of the alias provides a level of indirection which prevents the application code from being tightly coupled to the container's assigned name). Docker already knows the IP address for the MYSQL container, and the port number is determined by looking at the list of ports exposed by the MySQL image.

With this information in hand, the process running in the WordPress container should have everything it needs to connect to the service running in the MySQL container.

To support this sort of service discovery, the Panamax application template also provides a way to model container dependencies. Much like the docker run command, a link in the Panamax template is specified with the name of the container being linked to and the alias to be applied to the link.

When the Kubernetes adapter sees that a link has been specified between two containers, it will automatically create a Kubernetes Service. One of the many things that the Service entity in Kubernetes handles is service discovery. A Service can be associated with a Pod, and any subsequently started Pods will automatically receive connection information for that Service.

In the case of our example template, the Kubernetes adapter would see the link between the WordPress container and the MySQL container and configure the following Kubernetes Service:

{  
  "id":"db",
  "apiVersion":"v1beta1",
  "kind":"Service",
  "port":3306,
  "labels":{  
    "name":"mysql"
  },
  "selector":{  
    "name":"mysql"
  },
  "containerPort":3306
}

The ID of the Service is assigned the same value as the alias specified in the link definition. This ID serves as the prefix for the environment variables injected into the containers when they are started. This ensures that the names of the Kubernetes environment variables match those that would be created when using the --link flag with docker run.

The selector field in the Service description is used to select the Pod (or set of Pods) that this Service is representing. Note that the "name":"mysql" selector shown here matches exactly the label that was assigned to the MySQL Pod. The label is how Kubernetes establishes the relationship between the Service and the Pod it is fronting.

Once the Service shown above has been created, any subsequent Pods which are started will receive the following environment variables:

SERVICE_HOST=127.0.0.1
DB_SERVICE_HOST=127.0.0.1
DB_SERVICE_PORT=3306
DB_PORT=tcp://127.0.0.1:3306
DB_PORT_3306_TCP=tcp://127.0.0.1:3306
DB_PORT_3306_TCP_ADDR=127.0.0.1
DB_PORT_3306_TCP_PORT=3306
DB_PORT_3306_TCP_PROTO=tcp

The first three variables above are Kubernetes-specific, but the rest of them follow exactly the naming conventions that Docker uses when linking containers running on the same host. By using these environment variables in your application for service discovery, you should be able to take containers you're running locally with Docker links and run them in a Kubernetes cluster without any changes.
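
As a quick illustration of that portability, a start-up script inside the WordPress container might use the injected variables to point WordPress at the database. This is only a sketch -- the config file path and sed expression are hypothetical, and the actual centurylink/wordpress image may wire this up differently:

#!/bin/bash
# These variable names are injected both by Docker links and by the
# Kubernetes Service, so this script runs unchanged in either environment.
db_host="${DB_PORT_3306_TCP_ADDR}"
db_port="${DB_PORT_3306_TCP_PORT}"

# Write the discovered connection info into the WordPress config
# (the path below is hypothetical).
sed -i "s/define('DB_HOST'.*/define('DB_HOST', '${db_host}:${db_port}');/" /var/www/wp-config.php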

Kubernetes Services have additional uses beyond service discovery, but we'll cover those features in the next section.

Replication Controllers

In our example application template, we have a single WordPress container connecting to a single MySQL container. This is fine for small-scale deployments, but what if you were expecting a lot of traffic on your WordPress site and wanted to run a bunch of WordPress instances in order to handle the load?

The application template doesn't yet provide for scaling container instances, but when you use the Panamax UI to initiate a remote deployment you have the option of specifying a deploy count for each container.

[Screenshot: Panamax WP Service Options]

For any container configured with a deploy count greater than 1, the Kubernetes adapter will create a Replication Controller instead of a Pod (in fact, a future release of the Kubernetes adapter may simply default to using Replication Controllers for all services and skip the Pods altogether -- that way you have the option to scale up later if you want). A Replication Controller is essentially a Pod definition with an associated instance count; Kubernetes will do its best to ensure that the specified number of instances of that Pod is always running.

If we were to deploy our sample application via the Panamax UI and set the deploy count to 3 for the WordPress container, the Kubernetes adapter would create the following Replication Controller:

{  
  "id":"wp-replication-controller",
  "apiVersion":"v1beta1",
  "kind":"ReplicationController",
  "desiredState":{  
    "replicas":3,
    "replicaSelector":{  
      "name":"wp"
    },
    "podTemplate":{  
      "desiredState":{  
        "manifest":{  
          "id":"wp-replication-controller",
          "version":"v1beta1",
          "containers":[{  
            "name":"wp",
            "image":"centurylink/wordpress:3.9.1",
            "ports":[{  
              "hostPort":8000,
              "containerPort":80,
              "protocol":"TCP"
            }],
            "env":[{  
              "name":"DB_PASSWORD",
              "value":"[email protected]"
            }, {  
              "name":"DB_NAME",
              "value":"wordpress"
            }]
          }]
        }
      },
      "labels":{  
        "name":"wp"
      }
    }
  },
  "labels":{  
    "name":"wp"
  }
}

The only things that are really new here are the replicas and replicaSelector fields. The value of the replicas field indicates how many instances of this Pod should be running. The replicaSelector field works much like the selector we saw in the Service example above and creates the association between the Replication Controller and the Pods it is managing.

Note that the Replication Controller defined above has the Pod template embedded within it -- there is no need for a separate Pod to be configured. For any container defined in the application template, the Kubernetes adapter will create either a Pod or a Replication Controller (but never both).

Obviously, having three instances of WordPress running doesn't do you much good unless you have some way to load-balance your incoming requests across those instances. This is one of the other features provided by the Kubernetes Service: if you define a Service and associate it with a Replication Controller, it will offer clients an endpoint which, when accessed, will load-balance across the Pods managed by the Replication Controller.

Whenever the Kubernetes adapter creates a Replication Controller it will also create a Service to act as the proxy/load-balancer for the replicated Pods. For the Replication Controller shown above, the configured Service would look like this:

{  
  "id":"wp",
  "apiVersion":"v1beta1",
  "kind":"Service",
  "port":8000,
  "labels":{  
    "name":"wp"
  },
  "selector":{  
    "name":"wp"
  },
  "containerPort":80
}

The service will expose an endpoint on port 8000 which will load-balance all requests to port 80 on the replicated WordPress Pods.
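
Because the WordPress Pods are managed by a Replication Controller, the instance count can also be adjusted after the initial deployment. As a sketch (assuming the kubecfg resize command), scaling from three instances to five would look like this:

kubecfg resize wp-replication-controller 5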

Caveats

The Panamax Kubernetes adapter tries to make the deployment of an application template to a Kubernetes cluster as similar as possible to its execution in the local Panamax environment. However, there are some places where Kubernetes-specific rules and features prevent a perfect translation of the application template. Some of the things to look out for are documented below:

  • Any volume configuration (either volumes mounted from the host or from other containers) defined in your application template will not translate to Kubernetes.
  • Kubernetes allows only a narrow set of valid characters when naming Pods, Services, and Replication Controllers: names may only contain lowercase letters, numbers, and the - (dash) character. In order to comply with the Kubernetes naming restrictions, the adapter will automatically alter the passed-in service names -- all uppercase letters will be downcased and any other disallowed characters will be substituted with a - (dash) character (see the sketch after this list).
  • In order for container links to work, you must explicitly expose any ports that the linked-to container is listening on. When using container links locally, Docker has the ability to inspect the image and see any exposed ports which were defined in the Dockerfile. With Kubernetes, the linked containers may be on different nodes in the cluster, so the exposed ports must be explicitly defined in the application template so that the Kubernetes Service can be properly configured.
  • When using container links locally, Docker will inject service discovery environment variables into the parent container for each of the ports exposed by the child container. When using Kubernetes Services for service discovery, only a single port can be specified. If the child container in the link relationship exposes more than one port (see the point above), only the lowest-numbered port will be used for the Kubernetes Service configuration.
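
To make the naming rule from the second bullet concrete, here is a rough shell equivalent of the transformation the adapter performs (illustrative only -- this is not the adapter's actual code):

echo "My WP_Service" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9-]/-/g'
# => my-wp-service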

Contributing to Panamax

The work we did on the Panamax Kubernetes Adapter represents our first pass at translating Panamax templates for deployment with Kubernetes (and really our first experience with Kubernetes). Undoubtedly, there are things that we could have done differently or improved. If you're a Panamax user and are at all interested in Kubernetes as a deployment target, we encourage you to look at the code, submit issues, or make pull requests.

If you're interested in creating adapters for other orchestration systems, we've also created a Panamax Adapter Development Guide which explains how to implement an adapter that will work with the Panamax Remote Agent. If you end up building your own adapter, please drop us a line -- we'd love to hear about it!