We've all been there: You just wanted to try a command line utility, but the install process was as tedious as processing a mortgage application. If you weren't forced to complete too many steps, you were required to clutter up your development environment with libraries you'd never use again. Naturally, when Docker came along, you were thrilled that trying out a new tool became much easier. But is it too easy? Maybe it's time to ask yourself: In your quest for a simple command line tool, are you accidentally giving Docker too much control of your system?

After I'd been using Docker containers for a while, I started to run across containerized utilities, and I've been thrilled ever since. While not all that different from running a normal Docker app, these containers are very short-lived. The big gain for me is that I don't have to set up an environment to use the tool; all I need is Docker. I'll show you how easy it is with a quick example. In the Dockerfile below, I start with a small Linux distro, install some requirements, and then install the tool, xml2json. I don't need Node.js on my system, and the tool is portable to any other system that already has Docker installed. As an added bonus, clean-up is a snap: delete the Docker image, and you know you are getting rid of every last trace of the tool.

FROM alpine:latest
MAINTAINER Matthew Close <[email protected]>
# bash alias
# alias dxml2json='docker run -i --rm mclose/xml2json'
# usage example
# cat somefile.xml | dxml2json > somefile.json

RUN apk --update add nodejs && rm /var/cache/apk/*
RUN npm install -g xml2json-command

ENTRYPOINT ["/usr/bin/xml2json"]
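
To try it out, build the image and pipe some XML through it. Here's a quick sketch; the tag matches the alias in the comments above, and the sample XML is just an illustration:

# Build the utility image from the Dockerfile above.
docker build -t mclose/xml2json .

# -i keeps stdin open so the tool can read the piped XML; --rm
# removes the container as soon as it exits.
echo '<note><to>you</to></note>' | docker run -i --rm mclose/xml2json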

However, while I was looking to quench my thirst for easier tool installation, I also ran across a few containers that you'd start up like this, docker run -v /var/run/docker.sock:/var/run/docker.sock, or with variations on the same theme, such as -v /var/run:/var/run or -v /var:/var. Running a container this way shares the host's Docker socket, /var/run/docker.sock, with the container.

Naturally, I wanted to know:

What is /var/run/docker.sock?

What does it mean to share that volume with a container?

And is it always safe?

Here's what I've learned. /var/run/docker.sock is a Unix domain socket. Sockets are used in your favorite Linux distro to allow different processes to communicate with one another. Like everything else in Unix, sockets are files, too. In the case of Docker, /var/run/docker.sock is how clients talk to the Docker daemon and, because it's a file, we can bind-mount it into containers.
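
You can see that relationship for yourself by talking to the daemon over the socket directly. A minimal sketch, assuming your curl is 7.40 or newer (the first release with --unix-socket support):

# Ask the Docker daemon for its version, straight over the socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# The same API call that `docker ps` makes under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json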

So what do you gain by sharing the socket with a container? Quite a bit, as it turns out. When you start a container and share the socket with it, you are giving that container the ability to do Docker-like things on the Docker host. Your container can now start or stop other containers, pull or create images on the Docker host, and even write to the host file system by mounting host volumes. Sharing the Docker socket is also what makes a whole class of really useful tools possible. Beyond standalone tools, it facilitates things like continuous integration of Docker apps. Think of a Jenkins Docker container controlling your Docker development pipeline. Allowing containers access to the socket is very beneficial in some situations.
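
To make the CI case concrete, here's a hedged sketch of handing the socket to a Jenkins container (the image name and port are the stock defaults; adjust for your own setup):

# Jenkins can now start sibling containers on the host as part of
# a build pipeline, because it holds the host's Docker socket.
docker run -d -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins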

But what's at risk? Possibly the security of the Docker host system. This isn't a new revelation, but it's definitely worth noting that all that power I just described can pose a threat.

For example, say I'm going to install Docker inside a Docker container. I don't want to run a second Docker daemon; I just want the docker command line client available to me. I could do this with a single command: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/bin/docker ubuntu:latest /bin/bash. But I'm going to use a Dockerfile for now; more on why I made that choice later. Here's the Dockerfile.

FROM ubuntu:vivid

RUN apt-get update && \
    apt-get install -y apt-transport-https

# Add Docker's apt repository, then pin and install the 1.8.3 client;
# socat comes along for poking at sockets by hand.
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
    echo "deb https://apt.dockerproject.org/repo ubuntu-vivid main" > /etc/apt/sources.list.d/docker.list && \
    apt-get update && \
    apt-get install -y docker-engine=1.8.3-0~vivid \
        socat && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

CMD ["/bin/bash"]
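
Building it is a one-liner, assuming the Dockerfile sits in the current directory; the tag matches the image name used in the run commands below:

docker build -t mclose/docker-1.8.3 .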

After building the image, let's run it and take a quick look around: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock mclose/docker-1.8.3. Note that when I run docker ps inside the container, I'm seeing what is running on the Docker host outside of the container.

root@fa6e2e60fd1a:/# docker ps
CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS               NAMES
fa6e2e60fd1a        mclose/docker-1.8.3   "/bin/bash"         17 seconds ago      Up 16 seconds                           adoring_bell
root@fa6e2e60fd1a:/# ls /
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@fa6e2e60fd1a:/#

Now for the fun part. From the running container, let's use the docker command line to start another container: docker run -it --rm -v /:/host busybox.

/ # ls /host
Users    dev      home     lib      linuxrc  opt      root     sbin     usr
bin      etc      init     lib64    mnt      proc     run      sys      var
/ # cat /host/etc/motd
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|

So this is really interesting. I've started yet another container and shared the root file system of the Docker host. See why it might be important to know who can and can't run Docker on your systems? I could create a SUID root shell on the Docker host. I could then run things as root on the Docker host and there wouldn't be any logs of my activity.
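
To make that concrete, here's a sketch of the attack from inside the busybox container above (paths assume the /:/host mount shown earlier; please don't run this on a host you care about):

/ # cp /host/bin/sh /host/tmp/rootsh      # copy the host's shell
/ # chmod 4755 /host/tmp/rootsh           # owned by root, now SUID

Any local user on the host could then execute /tmp/rootsh and come out with root privileges, and nothing would show up in sudo's logs.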

There are probably a few folks out there thinking, "But I run this on OS X. Docker Machine totally isolates me." Think again. Take a look at that first directory listing above. Recognize anything? Yup: /Users. Your OS X home directory is right there. Keep in mind that Docker Machine mounts the /Users directory in every VM it creates. So this container not only has read/write access on the Docker host, it also has read/write access to your home directory on OS X.

If you're starting to panic right now, don't. The rewards of using Docker are still great overall; you simply need to take some precautions.

  1. Start by looking at which users have the ability to run the Docker command; a quick way to check is sketched after this list. You should consider them full system admins. Why? Because, as in the example above, users who can run Docker are able to circumvent access control tools like sudo.

  2. When it comes to running containers with access to /var/run/docker.sock, be vigilant with the Dockerfile for the container. You probably got the Dockerfile from someone's repo on github.com. Look for signs that this is a GitHub project you can trust: active development, good documentation, and stars.

  3. Next, take a look at the Dockerfile itself. Is the base system, the FROM line, one that you would trust? Would you install all the applications in the RUN lines of the Dockerfile on your own system? If you see something like my Dockerfile above, where Docker itself gets installed, know why. (That's why I used a Dockerfile earlier instead of a single command.) Do you fully understand each and every line in the Dockerfile? If not, take a look at the command reference. There might even be a few helper shell scripts that get installed, too, and it's a good idea to read through them as well. Finally, make sure you understand the command line arguments being passed to docker run. I've covered the -v option here, but you should know what else is going on when you fire up a container. If you see anything mysterious, get a better understanding from the documentation.
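
As a starting point for item 1, this quick sketch shows how to audit socket access on a typical Linux host (the docker group is the stock setup; distros vary):

# Members of the docker group can use the socket without sudo.
getent group docker

# Check the socket's owner, group, and permissions directly.
ls -l /var/run/docker.sock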

To be clear, I am not suggesting that you go back to the source code for the project and review it. No one has that kind of time. At some point after reviewing the Dockerfile, you have to trust the container you are about to run.

When you finally do run it, a container that uses a shared Docker socket should be considered a high-value asset, right alongside your databases and management systems. This means using network access restrictions, such as firewalls and VPNs, if the container is exposed beyond the Docker network.

Not-too-distant releases of Docker will probably alleviate some of the risk involved in sharing /var/run/docker.sock with containers. One very promising solution, user namespaces, is in the Docker 1.9.0 experimental build: with it, root inside a container can be mapped to an unprivileged user on the Docker host. If you want to know more, the Docker documentation on namespaces is a great read.
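
As the feature later shipped in stable releases, turning it on looks something like the sketch below (the --userns-remap flag is from the stable feature as it eventually landed; the experimental build's syntax may differ):

# Start the daemon with user namespace remapping: root inside a
# container maps to an unprivileged subordinate uid on the host.
docker daemon --userns-remap=default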

Docker is an exceptionally flexible tool, but you do need to be careful how you use it. If you aren't, you could be giving away more access than you bargained for. At this point, though, I hope you have a better understanding of what sharing /var/run/docker.sock means when you run containers. If you review and understand a container's Dockerfile, as well as how it gets run, you will be in a much better position to weigh the security risks involved and take appropriate action.