How do you do databases with Docker? Or, for that matter, how do you do anything that stores persistent data in Docker? Sure, you can attach a volume container to your application container, but that doesn't scale, and it makes your container dependent on its host, which is not ideal if you have a multi-server Docker cluster.

Flocker is one of the few projects trying to tackle this problem (along with Flynn). It uses cool technology like ZFS to snapshot your container's underlying filesystem and make it transferable to any Docker host under your control.

This week, we talk to Flocker's founder, Luke Marsden, about his company ClusterHQ, which is positioning itself to solve this big problem.


Luke Marsden, CTO and Founder of ClusterHQ, creator of Flocker

Luke has 12 years of experience building web apps and running a web hosting company. Inspired by the practical operational problems of running web apps at scale, he started ClusterHQ. Luke holds a Computer Science degree from Oxford University. He is now solving the data problem for Docker at ClusterHQ.

What is ClusterHQ and Flocker?

ClusterHQ is "The Data People for Docker". They believe that support for portable and resilient data volumes is a major missing piece of the Docker puzzle. Without support for data migration, cloning, and failover, Docker cannot easily capture entire applications (including databases, queues, and key-value stores) in production in a way that ops can manage.

What is the difference between Flocker and ClusterHQ?

ClusterHQ as a company is 100% focused on building Flocker, which is an open source project. The mission for Flocker is to solve the data problem for Docker. The big problem at the moment is that as soon as you put a data service (anything that has state) inside a Docker container, best practice dictates that you attach a volume from outside the container to inside the container. The database will be continuously writing to that volume, which means the container gets stuck on that physical host.
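As a concrete illustration of the problem (image name and host path here are just examples), running a database with a host-mounted volume ties its data to that one machine:

```shell
# Start a Postgres container whose data directory is a volume
# bind-mounted from the host (paths and image are illustrative):
docker run -d --name db \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres

# The data now lives in /srv/pgdata on *this* host. If the
# container is rescheduled onto another machine, the volume does
# not follow it -- which is exactly the problem Flocker targets.
```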

And how does Flocker work?

Building on technology from previous projects, and leveraging ZFS on Linux, each Docker container that needs to have state is backed by its own ZFS filesystem. ZFS is a beautiful storage analog for containers, because ZFS filesystems are light-weight, portable filesystems, which is what you need for your light-weight, portable containers.
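Flocker builds this on standard ZFS primitives. A hand-rolled sketch of the underlying idea, using the stock `zfs snapshot` and `zfs send`/`zfs recv` commands (the pool, dataset, and host names are illustrative), looks roughly like this:

```shell
# Take a point-in-time snapshot of the dataset backing a
# container's volume (pool/dataset names are illustrative):
zfs snapshot tank/flocker/db-volume@migrate

# Stream that snapshot to another Docker host and receive it
# there as a dataset with the same layout:
zfs send tank/flocker/db-volume@migrate | \
  ssh host2 zfs recv tank/flocker/db-volume
```

Because snapshots are cheap and incremental sends (`zfs send -i`) ship only the changed blocks, the same mechanism supports repeated, low-cost synchronization between hosts.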

What’s a good metaphor?

Think of Flocker as a "container sandwich": containers in the middle, a network proxy on top, and a storage layer underneath. You can have multiple sandwiches across multiple hosts, and Flocker can route any request to any container while also controlling the Flocker volumes that are attached. This enables container movement: the network and storage layers are synchronized together in real time, and the network proxy is switched to the new location.

What is the difference between using Flocker with ZFS versus a more traditional NFS or underlying persistence layer?

This architectural choice removes the need for a SAN in the system. With today's cloud platforms this matters even more, since you don't get a SAN, and a SAN would be a single point of failure in any case. Cloud application design is predicated on the fact that "things will fail", so having a SPOF is a terrible thing.

For the data part, does it sync automatically or, when you need to move, is it an atomic transition?

You need to remember that Flocker 0.1 is, at the moment, a command line tool that reaches into a set of servers and applies a static configuration. Currently, containers are moved in response to a manual request. Using "application manifests" and "deployment manifests", Flocker 0.1 lets you change the deployment manifest for an application, re-run Flocker, and have the container and its data moved to the new target location based on the updated deployment manifest. While this is manual (with some attendant downtime), the project is moving toward seamless, atomic migration.
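In Flocker 0.1 those manifests are YAML files handed to the `flocker-deploy` command line tool. A minimal pair might look roughly like this (the image, mountpoint, and node IPs are illustrative):

```yaml
# application.yml -- what to run
version: 1
applications:
  mysql-server:
    image: "mysql:5.6"
    volume:
      mountpoint: "/var/lib/mysql"
```

```yaml
# deployment.yml -- where to run it
version: 1
nodes:
  "192.0.2.10": ["mysql-server"]
  "192.0.2.11": []
```

Moving `mysql-server` to the other node's list and re-running `flocker-deploy deployment.yml application.yml` is what triggers the migration of both the container and its ZFS-backed volume.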

Can you tell us more about the HybridCluster, what is that heritage?

With HybridCluster (the former name for ClusterHQ) there was a large feature set, including built-in HA (with local or remote DC fail-over) and auto-scaling (or "auto-juggling", as they called it), which implements the notion that stateful things scale vertically while stateless things scale horizontally. See Data-focused Docker Clustering on the ClusterHQ blog for more details.

Did HybridCluster use Docker too, or just containers?

HybridCluster was based on FreeBSD and so used FreeBSD jails.

So a difference between the HybridCluster legacy and ClusterHQ is that ClusterHQ is more Docker-centric?

Absolutely. They noticed a huge opportunity to solve some of the big problems that the Docker community hasn't quite figured out yet. ClusterHQ is becoming more engaged with the Docker community and trying to help solve some of those problems. To that end, ClusterHQ is not providing yet another orchestration layer, but rather targeting integration with the orchestration projects already in existence. They want to integrate with CoreOS, Kubernetes, Mesosphere, etc.

What are the biggest challenges for people adopting Docker and containers?

Docker did a great job in getting to 1.0, and it has clearly won the hearts and minds of developers in a big way, opening up a huge set of opportunities around containers. However, customers want and need to be able to deploy production applications to more than one machine, for performance, for scaling, and for resilience. This drives two big themes when moving from a single-host deployment to a multi-host deployment: storage and networking. This is where the Docker community still has a lot to figure out. We are just at the beginning of that road.

What do you think about what the Flynn guys are doing, as they are also working on some persistent state issues?

It's great to see the Flynn guys tackling this head-on. The difference in approach is that ClusterHQ is taking a generic approach to being able to replicate arbitrary filesystem state (utilizing ZFS) while the Flynn guys are building on top of the built-in replication methods that exist in some, but not all, of the data services. There is room for both approaches.

Do you think the state problem is fundamentally different in virtual machines than it is in containers?

Yes, it is. The reason is that when handling state for containers, you are exposing a POSIX filesystem rather than a block device, as is typical for VMs. You could allocate and attach a block device for each container, but think of a large VM with thousands of containers, each requiring its own block device. Current systems (such as EC2) are not designed to work this way.

What container projects are you most excited about, what's on your radar for things to watch?

Weave just launched, and it is a really interesting project by the founders of RabbitMQ. They are building, effectively, a layer-2 Ethernet switch into a cluster of Docker containers. This allows on-prem and cloud containers to join together and feel like they are on the same network segment. Combined with Flocker, this could allow live migration of stateful applications between on-prem and cloud services. He is also excited to see the start of "agreement" around the concept of orchestration. People are starting to take Kubernetes seriously, and it will be a big part of our future.

What do you think is next for containers, where do you think it is all going?

It's interesting to see how containers are being used for the installation of things (as opposed to Chef, Puppet, etc.) and also how they are encroaching on traditional VM functionality. That makes for a very powerful combination: providing a consistent way to deploy stuff is a big win, and containers will start to encroach into both of those markets.

Can you tell us how ClusterHQ will monetize in the future?

This is an important question for any open source company. You need to walk a fine line between building an open source technology that is really powerful and really production-ready, while at the same time being able to draw a hard line of differentiation. This differentiation is what gets companies to pay for the software as opposed to just running the free version. We are still trying to figure out what to open source and what makes sense as a proprietary add-on in the future. Over the next 12 months we are solely focused on building the Flocker open source community.

Where else can people find out more about Flocker?

Go to the Flocker site and check out the docs section. The source is on GitHub under an Apache 2 license, the Google Group is flocker-users, and on IRC go to freenode and check out #clusterhq.

Quick Links