Docker is not the only Linux container manager available. Recently, CoreOS launched their own project, named Rocket, which takes some architecturally different approaches to managing containers. This week, we interview the brains behind Rocket to ask what Rocket is and why they built it.
Brandon Philips is the Co-founder and CTO of CoreOS
Prior to starting CoreOS, Brandon spent some time at Rackspace, where he worked on monitoring products. There he worked on a platform called Lubbock, a Node.js-style Lua system. Before that he did kernel infrastructure and kernel hacking at SuSE, in the SuSE Labs group.
Instead of starting with the controversy, let's talk about the tech. What does Rocket do?
Rocket is a container runtime, which means a few things. The first is that it downloads images of an application over the Internet (and in future versions it will verify those images), and it runs those images inside of a container. A lot of people have a good sense of what a container is, but in simple terms a container is a process that runs on your host, isolated from other processes.
So how is Rocket different from Docker?
There are a few things we have done a little bit differently. The first is that Rocket does not have a daemon, so when you run "rocket run coreos/etcd", that executes directly under the process you started it from. So if your bash is PID 400, then "rocket run" is going to be PID 401. This is a pretty substantial design difference; we wanted Rocket to essentially be a static binary. Much in the same way, when you compile a Go program and run it, it executes directly underneath your initial process.
So, with Ruby you have an interpreter that runs your code, the Ruby interpreter and your code is running. With Go you compile it and there's no "Go interpreter" running, you're just running your code. Is that kind of the difference between Docker and Rocket?
In a way, in that Docker runs these things as a separate process. You can think of it another way: you already have an init system, like upstart or systemd or daemontools or runit, and when you do "rocket run" the container actually gets monitored and run under that init system.
This can be a tricky point if you're using Docker for these sorts of use cases, because when you do "docker run", the container ends up underneath the PID namespace of the Docker daemon.
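A rough way to see this process-tree difference, with no Rocket involved at all: in plain shell, a directly-executed child reports your shell as its parent, which is the relationship "rocket run" preserves (under Docker, the container's parent would be the Docker daemon instead).

```shell
# Print the invoking shell's PID.
echo "shell PID: $$"

# A directly-exec'd child sees that same shell as its parent,
# just as a "rocket run" container sits directly under the
# process that started it.
sh -c 'echo "child PID: $$, parent PID: $PPID"'
```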
Before we get into why you did this, let's understand the technology a little bit better. Could you take some of the Docker images in the Docker hub and run them under Rocket, in theory?
Yes. There is a community project, announced yesterday, that converts a Docker image into an App Container image. Another thing is that we have worked with a number of people and groups within this container ecosystem, gotten some feedback, and created a separate spec outside of Rocket, which defines what we are calling an App Container, an App Container image, and an App Container runtime.
Those projects now live at github.com/appc and so there is a tool that converts from a Docker container to an App Container image.
Does it do that through the Dockerfile or through the image itself?
Through the image itself.
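As a sketch of that image-level conversion (using the docker2aci community tool; exact flags may vary by version, and the image name here is just an example):

```shell
# Export an image you already have locally, then convert the
# saved tarball into an App Container image (.aci):
docker save -o etcd.tar coreos/etcd
docker2aci etcd.tar

# docker2aci can also pull straight from a registry:
docker2aci docker://coreos/etcd
```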
Would you be able to create an application image in Rocket from a Dockerfile?
Yeah, you can imagine doing that; there's no direct tool right now. Part of what Docker is, is the Dockerfile and the Docker build process. I think that process is fine, and it is a pretty reasonable way of putting together a container.
Once the Rocket system gets a container up and running, does Rocket itself go away?
With the App Container runtime, another difference from Docker is that Rocket runs multiple processes inside the container. So you have an active, monitoring init system that gives you the ability to restart processes, or, say, to have two or three processes sharing resources inside of a single container.
So you can think of it as an outside container, the root container, and each app runs in its own individual container, and you can put constraints on those things. This fixes a problem a lot of people have, where they install daemontools or runit inside their Docker container.
Instead you can do that as a first class process and compose different images together in that single root container.
What is the underlying operating system within the application containers with Rocket? Is there an operating system in these Rocket containers?
Yeah, it's identical to how you have a root filesystem (in Docker), and that can come from wherever you want; you can build it yourself, or use a Debian or a Fedora or whatever.
Is there the ability to have something similar to a scratch image within Rocket?
So, right now there are build tools that are emerging, like tools that create a Go ACI (App Container Image) directly from Go source code, or convert from Docker images to ACIs.
That sort of whole eco-system is developing.
Right now you can't run Rocket on just any Linux or BSD system; what do you need to run Rocket?
Any modern Linux kernel should work, so a modern Ubuntu or Fedora or CoreOS or whatever; it's a single static binary.
Now that we understand the technology a little better: why did you build this?
There are a few technical reasons why we wanted to build this. One of the major ones was this daemon-less mode, which enables us to do things like integrate well with socket activation: the idea that I pass file descriptors from one process to another, and those file descriptors may be open, listening sockets.
This has a lot of nice security properties that come out of it, and also dependency management benefits. So you can imagine that, on a systemd system, I can say: listen on port 443, and if someone comes in on that socket, hand that file descriptor to my Rocket container that is running my web server.
The cool thing about that is that if you architect your application correctly, the web server doesn't have to have any networking at all.
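A sketch of that setup as a pair of systemd units; the unit names, port, and image name here are illustrative, not from the interview:

```ini
# mywebserver.socket -- systemd owns the listening socket.
[Socket]
ListenStream=443

[Install]
WantedBy=sockets.target

# mywebserver.service -- started on the first connection;
# systemd hands the already-open file descriptor for port 443
# to the Rocket-launched web server.
[Service]
ExecStart=/usr/bin/rocket run example.com/mywebserver
```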
It just has this one file descriptor, so it doesn't have to have any outbound networking, and it doesn't have to know anything about routable Internet addresses. This is a nice little feature, and it means Rocket can play well with existing init systems, whether systemd or upstart or whatever. We have also hit a few bootstrapping problems (with Docker) that we talk about on our mailing list; for example, we have an overlay networking system called Flannel, and we'd like to run that in a container.
We ended up having a bootstrapping problem: how do we run Flannel in a container, when it then needs to configure the outside Docker daemon and start that daemon with the correct network configuration? So these sorts of advantages were what we had in mind for daemon-less mode. Also, the App Container image format allows for a few interesting things.
The first is that images can be signed with regular GPG keys, and there is a discovery mechanism. So your registry for these images can be a simple HTTP server, or you can do more interesting things: if you run "coreos.com/etcd" at version 0.5.0 on Rocket, that actually uses a mechanism that looks at the HTTP pages for that name, finds out where the binary image for etcd lives, downloads it, and launches it.
You don't have to know where that image is; it's actually hosted on GitHub, but we own that domain (coreos.com/etcd), so people can spread out their namespaces for where containers live over the Internet. This is how a lot of protocols on the Internet work.
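That discovery mechanism, in the App Container spec drafts, works roughly like this: an HTTPS page served at the image's name carries a meta tag mapping the name prefix to a download template (the GitHub URL template below is illustrative):

```html
<!-- Served from https://coreos.com/etcd: Rocket matches the name
     prefix and fills in the template variables to locate the image. -->
<meta name="ac-discovery"
      content="coreos.com/etcd https://github.com/coreos/etcd/releases/download/{version}/etcd-{version}-{os}-{arch}.{ext}">
```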
Is CoreOS going to support Docker and Rocket in the future, or is it going to go to one or the other? How does the CoreOS system interact with Rocket?
Yes, so Docker isn't going anywhere; we have a ton of users using Docker, and we think the Docker platform is just fine. What Rocket will be used for is some of the problems I've talked about, for example being able to do this Flannel overlay stuff. There are a number of places where we continue to add more stuff to the CoreOS system, but it would be really nice if we could run those things in containers. Helping to solve those problems is where we will start to see Rocket more.
Do you envision people who use CoreOS in the future mixing and matching some Rocket containers and some Docker containers, depending on the needs of those containers? Or do you foresee it being one or the other?
I can certainly see a use case for both things depending on what you are trying to do with your host. I think that is reasonable.
Do you think this is like back in the day when distributed version control was taking off: there was git, and there was Mercurial, and there were all these different version control systems out there doing different versions of branching and merging? Do you think we are at that kind of moment in time with Linux containers right now?
From that time period when git was coming out, what we found was that there is plumbing: git has this concept of plumbing and porcelain. The plumbing is the stuff that hopefully everyone can agree on, but there are going to be different workflows on top of that plumbing.
What I would like to see is the App Container spec become the plumbing of what we think of as these "application containers": we agree on file formats, on how images are found, how they are downloaded, and how they are assembled on disk. Then it is perfectly fine, and I actually want to encourage this idea, that lots of different runtimes might exist.
Because of POSIX and the way people architect their systems, things are complicated, and a lot of people have very strict or interesting needs, so I think it is OK if we end up in a place where there are lots of different runtimes.
For example, Mesos may have a different runtime than Cloud Foundry, and so on, and I think that is a reasonable place to end up. But we need to make sure we can agree on that plumbing piece; that is the thing we are trying to do with the App Container spec.
Why didn't you try, or did you try, to contribute this functionality back into Docker? Why did you create a separate project?
That's a great question. Early on we tried to get a stand-alone mode into Docker, and Red Hat contributed a bunch of things that go down that path, and it kept getting delayed. That was one of the reasons.
It's not a fork, right? You didn't take the Docker source. Did you build it from scratch, or did you start from some other projects? What's the heritage of the source code?
As for the heritage of the code, the first commit was about two or three weeks before we put it up in our blog post. It uses a bunch of libraries that we had already written for Fleet and for some of our other tools. We put together the prototype and the specification over that time period. It's definitely a brand new, raw open source project, and it is progressing pretty well.
When we last talked you had just announced the beta channel (it was alpha before), and now you have a stable channel for CoreOS; you've made a lot of progress with CoreOS. Where are you at with Rocket? Is it alpha or beta, and when do you think it will be ready to use in production?
Right now, we are definitely on the alpha side of the development process. What we are doing with Rocket is fairly straightforward and well scoped; we're not trying to create a lot of software here, and we want to leverage as much existing stuff as we can. So hopefully early next year (2015) we can get to the point where it is beta and start testing it out with some of the use cases we have in mind. We'll see how it goes; you always underestimate how hard software is.
Would you ever see Rocket, the code or stand-alone mode, coming back into Docker? Do you see it folding in or do you think it is fundamentally going to stay a separate project?
The first place we can definitely work together is on the App Container spec and the image format. Over time, the goals of the projects will look less similar, which is perfectly reasonable. We want this tiny little package manager plus container runtime, and the Docker project is moving more towards a platform, and that's a reasonable thing too. With CoreOS we want to try to ship as little stuff as possible, so it's a slight difference in end goals.
Are there any updates that you would like to share about any of the other technologies, either CoreOS, etcd, Fleet since we last talked?
The big exciting one is that etcd has been maturing quite nicely; we have projects like Flynn and Deis that rely on etcd in a pretty big way. We have made a big investment in the stability of etcd, and not just the stability but the operationability (if that's a word).
We have had all this experience with users interacting with the system, and for the most part it has worked as designed, but there are definitely some sharp corners that people have been stubbing their toes on and cutting themselves on.
Etcd now has a lot of nice tooling in these 0.5 alpha releases for adding and removing hosts, and for making sure that configuration errors don't actually end up causing cluster failures.
We added all these unique UUIDs all over the place so we can detect "hey, you're adding a member from an old cluster, or from this other cluster, and you can't bridge two clusters" and reject those sorts of membership changes. We also detect faults from hard disks and that sort of thing: we found people would have a log file with a random bit flip in it, which is very hard to debug, so we added checksums and that sort of stuff. So we are really focusing on making sure people can detect and recover from mistakes.