If you've built any multi-container applications, chances are you've had to define some networking rules to allow traffic between your containers. There are several ways to do this: you can expose a port via the --expose flag at runtime, or include an EXPOSE instruction in the Dockerfile. You can also publish ports with the -p or -P flags of docker run, or link containers via --link. While these approaches achieve mostly the same results, they work differently. So which one should you use?

TL;DR

For reliability, use -p or -P to create specific port binding rules; treat EXPOSE as a documentation mechanism, and approach --link with caution.

Before comparing the different approaches, let's learn about each individually.


Exposing a port via EXPOSE or --expose

You can expose a port in two ways: either in the Dockerfile with the EXPOSE instruction, or in the docker run string with --expose=1234. These are equivalent, though --expose will also accept a range of ports as an argument, such as --expose=2000-3000. Neither EXPOSE nor --expose depends on the host in any way; on their own, these rules don't make any ports accessible from the host.

Given that limitation, a Dockerfile author will often include an EXPOSE instruction only as a hint about which ports will provide services, leaving it to the operator of the container to specify further networking rules. Used in conjunction with the -P flag, which I'll get to a bit later in this article, this strategy of documenting ports via EXPOSE can be very useful. Essentially, EXPOSE or --expose is just metadata: information to be consumed by another command, or to inform choices made by the container operator.

In practice, there is no difference between exposing a port at runtime and exposing it via an instruction in the Dockerfile. A container configured either way will show the same networking configuration in the docker inspect $container_id (or $container_name) output:

"NetworkSettings": {
    "PortMapping": null,
    "Ports": {
        "1234/tcp": null
    }
},
"Config": {
    "ExposedPorts": {
        "1234/tcp": {}
    }
}

We can see that the port is noted as exposed, but there are no mappings defined. Keep this in mind as we look at publishing ports.

ProTip: Using the runtime flag --expose is additive, so it will expose additional ports alongside whatever EXPOSE instructions were specified in the Dockerfile.
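As a quick sketch of how this looks in practice (my_image and extra-expose are placeholder names, and the commands assume a running Docker daemon):

```shell
# Expose an extra range of ports at runtime, on top of whatever EXPOSE
# instructions my_image's Dockerfile already contains:
docker run -d --name extra-expose --expose=2000-2005 my_image

# Read back just the exposed ports with a Go template (the json helper
# is available in reasonably recent Docker versions), instead of
# scrolling through the full docker inspect output:
docker inspect --format '{{json .Config.ExposedPorts}}' extra-expose
```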

Publishing a specific port with -p

Instead of merely offering up a port, you can explicitly bind a port or group of ports from container to host using the -p flag. Note this is the lowercase p, not uppercase. Because this configuration does depend on the host, there is no equivalent instruction allowed in the Dockerfile; it is a runtime configuration only. The -p flag can take a few different formats:

ip:hostPort:containerPort
ip::containerPort
hostPort:containerPort
containerPort

Essentially, you can omit either ip or hostPort, but you must always specify a containerPort. Docker will automatically provide an ip and hostPort if they are omitted. Additionally, all of these publishing rules default to tcp; if you need udp, simply tack it on to the end, as in -p 1234:1234/udp.

If I run a simple application with docker run -p 8080:3000 my_image, whatever service is running on port 3000 in my container will be available on port 8080 on my host. The ports need not match, but you must be careful to avoid port conflicts when publishing ports from multiple containers. The best way to avoid a conflict is to let Docker assign the hostPort itself: in the same example as above, I could run docker run -p 3000 my_image instead of passing in a host port, and Docker would select one on my behalf. I can see which port was selected by running docker port $container_id (or $container_name).

Aside from docker port -- which will only display ports bound to the host while the container is running -- we can also see networking information by running docker inspect on the container and browsing around in the config. This is usually only interesting if there are port mappings defined; they appear under Config, HostConfig, and NetworkSettings. We'll use the information here in a bit to compare and contrast a few different styles of setting up networking between containers.

ProTip: You can specify as many port mappings with -p as you fancy.
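Here's a sketch of the "let Docker choose the host port" approach (my_image is a placeholder image name; the docker commands assume a running daemon, so they're shown as comments):

```shell
# Publish container port 3000 on an ephemeral host port, then ask
# Docker which port it picked:
#
#   cid=$(docker run -d -p 3000 my_image)
#   docker port "$cid" 3000      # prints something like 0.0.0.0:49153
#
# A deploy script can then strip off the address to get just the port:
mapping="0.0.0.0:49153"          # sample line from `docker port`
hostport="${mapping##*:}"        # drop everything up to the last colon
echo "$hostport"                 # prints 49153
```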

Comparing --expose/EXPOSE and -p

To better understand the differences, let's run containers from images with different port settings. We'll start with a very bare-bones application that simply echoes 'hello world' when you curl it. We'll call the image no-exposed-ports:

FROM ubuntu:trusty
MAINTAINER Laura Frank <[email protected]>
CMD while true; do echo 'hello world' | nc -l -p 8888; done

If you're playing along at home, make sure you're on the Docker host, and not using an intermediary like boot2docker. If you are using boot2docker, run boot2docker ssh before running any example commands.

Note that we'll run this container with the -d flag so that it stays running in the background. (It's worth noting again that port mapping rules only apply to running containers.)

$ docker run -d --name no-exposed-ports no-exposed-ports
e18a76da06b3af7708792765745466ed485a69afaedfd7e561cf3645d1aa7149

There's really not much going on here, other than the echo of the container's ID to let us know the service started successfully. As expected, there is no information to see in docker port no-exposed-ports or via docker inspect no-exposed-ports, because we neither defined a port mapping rule nor published any ports.

So what happens if we do publish a port, and what's different between the -p flag and EXPOSE? Taking the no-exposed-ports image above, we'll add a -p flag at runtime, but NOT add any expose rule. Recall that the result of either an --expose flag or an EXPOSE instruction is data in Config.ExposedPorts.

$ docker run -d --name no-exposed-ports-with-p-flag -p 8888:8888 no-exposed-ports
c876e590cfafa734f42a42872881e68479387dc2039b55bceba3a11afd8f17ca
$ docker port no-exposed-ports-with-p-flag
8888/tcp -> 0.0.0.0:8888

Awesome! We see the available port, and note that it defaulted to tcp. Let's snoop around in the network settings to see what else is up. There are a couple of interesting things in here:

"Config": {
    [...]
    "ExposedPorts": {
        "8888/tcp": {}
    }
},
"HostConfig": {
    [...]
    "PortBindings": {
        "8888/tcp": [
            {
                "HostIp": "",
                "HostPort": "8888"
            }
        ]
    }
},
"NetworkSettings": {
    [...]
    "Ports": {
        "8888/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "8888"
            }
        ]
    }
}

Notice the entry in "Config" for the exposed port. It's exactly the same as when we exposed a port via EXPOSE or --expose: Docker implicitly exposes any port that is published. The difference between an exposed port and a published port is that a published port is available on the host, and we can see that in both "HostConfig" and "NetworkSettings".

All published (-p or -P) ports are exposed, but not all exposed (EXPOSE or --expose) ports are published.
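A quick way to convince yourself of this, using the no-exposed-ports image from above (exposed-not-published and published are placeholder container names, and the commands assume a running daemon):

```shell
# Exposed only: the port shows up in Config.ExposedPorts, but
# docker port prints nothing because no host binding exists:
docker run -d --name exposed-not-published --expose=8888 no-exposed-ports
docker port exposed-not-published

# Published: -p both exposes the port and binds it on the host:
docker run -d --name published -p 8888:8888 no-exposed-ports
docker port published            # 8888/tcp -> 0.0.0.0:8888
```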

Publishing ports with -P and EXPOSE

Ready for a little Docker sugar? Because EXPOSE is often used as a documentation mechanism -- that is, just to signal to the user which ports will be providing services -- Docker makes it super easy to translate the EXPOSE instructions from the Dockerfile into specific port binding rules. Just add -P at runtime, and Docker will automatically create port mapping rules for you; because the host ports are dynamically allocated, you're also guaranteed to avoid port mapping conflicts. I've added the following lines to the Dockerfile we used in the above examples:

EXPOSE 1000
EXPOSE 2000
EXPOSE 3000

We'll build this image and tag it as exposed-ports.

docker build -t exposed-ports .

Then let's run it with the -P flag, without passing in any specific -p rules. We can see that Docker maps each of the ports associated with an EXPOSE instruction to a port on the host:

$ docker run -d -P --name exposed-ports-in-dockerfile exposed-ports
63264dae9db85c5d667a37dac77e0da7c8d2d699f49b69ba992485242160ad3a
$ docker port exposed-ports-in-dockerfile
1000/tcp -> 0.0.0.0:49156
2000/tcp -> 0.0.0.0:49157
3000/tcp -> 0.0.0.0:49158

Handy, right?

What about --link?

You may have come across the runtime flag --link name:alias for specifying a relationship in a multi-container application. While --link is a convenient flag, you can approximate nearly all of its functionality with port mapping rules and environment variables, if needed. Think of --link as a mechanism for service discovery rather than a gatekeeper for network traffic.

The only additional thing --link provides is that it updates the /etc/hosts file of the consumer container (i.e., the one the --link flag is passed to) with the source container's alias and ID. Docker also sets a standard set of environment variables when you use --link, and you can find them in the docs if you're curious.

While --link can be handy for smaller projects with an isolated scope, it functions mostly as a service discovery tool. If you are using any orchestration service in your project, like Fleet, there will probably be some other service discovery tool managing relationships, and that orchestration service may throw out your Docker links altogether in favor of whatever discovery mechanism it includes. In fact, many of the remote deployment adapters used in the Panamax project do just that!
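As a rough sketch of what --link gives the consumer container (db, web, my_db_image, and my_web_image are placeholder names; the exact variable list is in the Docker links documentation, and the docker commands assume a running daemon):

```shell
# A hypothetical service container and a consumer linked to it:
#
#   docker run -d --name db my_db_image
#   docker run -d --name web --link db:db my_web_image
#
# Inside "web", Docker's links set environment variables whose prefix
# is the uppercased alias (for example DB_PORT), and add a "db" entry
# to /etc/hosts. The prefix is derived roughly like this:
alias_name="db"
prefix=$(printf '%s' "$alias_name" | tr 'a-z' 'A-Z')
echo "$prefix"      # prints DB
```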

Striking a Balance

Depending on who (or which other containers) consumes the services you run with Docker, one networking option may be highly preferable. Remember that you can never know how someone might use your image if you publish it on the Docker Hub, so try to keep your images as flexible as possible. If you generally consume images from the Docker Hub, running containers with the -P flag is an easy and fast way to create port mapping rules based on the author's suggestions. Remember that every published port is an exposed port, but the inverse is not true.

Quick Reference

EXPOSE -- Documents where a service is available, but does not create any mapping to the host.
--expose -- Exposes a port at runtime, but does not create any mapping to the host.
-p -- Creates a port mapping rule, as in -p ip:hostPort:containerPort. containerPort is required; if no hostPort is specified, Docker automatically allocates one.
-P -- Maps a dynamically allocated host port to every container port exposed by the Dockerfile.
--link -- Creates a link between a consumer and a service container, as in --link name:alias. This sets a group of environment variables and adds entries to the consumer container's /etc/hosts file; you must still expose or publish ports to allow traffic.