Many developers choose Rails because it allows them to be productive very quickly, but that can come with the trade off of dependency spaghetti, frustration with databases and version management, and file size bloat. Using Docker can alleviate some of these issues by isolating processes from one another, limiting the scope of dependencies, and allowing you to use optimized images. Of course, another layer of abstraction from your code can introduce problems of its own. Below, we'll guide you through the process of starting a new Rails project with containerization in mind, and explain some of the sticking points.
Step 1: Select a Rails Base Image
A good base image is your best starting point. It should give you all the dependencies you need to build an application in development and run in production. In the case of Rails development, you want the Ruby language itself along with system libraries needed to build gems like nokogiri, pg, or mysql2.
We've created a couple versions of popular operating system images for this very purpose. On the Docker Hub, along with the official Rails image, you can find our Rails base images that derive from Alpine and Ubuntu.
Step 2: Set up an In-Container Development Environment
To use the base image, you can certainly run a container and just SSH right into it and use it like a virtual machine, but that's not the best way to go about using Docker in development. During development, you'll want to use tools such as an editor or IDE, but these have no place in your container at runtime. Instead, you should volume mount your code into a container running the base image and rely on editing normally in your local OS and building and testing in-container. The benefit of this is that you can continue to rely on tools you're familiar with, like debuggers and code completion, without having to compromise the parity of your development and production environments.
Using a Dockerfile
It's possible to run a container using the base image and pass the volume mount instructions on the command line, but it's easier to orchestrate and automate running your application if you make use of a Dockerfile. In some cases, it's absolutely necessary. For example, when using the official Rails image, in order to control the firing of the ONBUILD instructions, they make this recommendation:

adjust your Dockerfile to inherit from a non-onbuild variant and copy the commands from the onbuild variant Dockerfile (moving the ONBUILD lines to the end and removing the ONBUILD keywords) into your own file so that you have tighter control over them and more transparency for yourself and others looking at your Dockerfile as to what it does.
With the CenturyLink Labs images, the Dockerfile is as simple as a line or two. Here it is when using the alpine-rails image with PostgreSQL:

FROM centurylink/alpine-rails
RUN apk --update add libpq postgresql-dev
Once the Dockerfile is in place, the image can be built and tagged. Apart from the code you'll inject via volume mount, this image is identical to the image you'll run during testing, staging, and production.
docker build -t myrailsapp .
A slight modification to the Dockerfile to ADD your code during the image build, plus running with RAILS_ENV=production, is all that's needed to create the perfect image for your CI/CD process. This is best handled with a separate production version of your Dockerfile (e.g. Dockerfile.prod), passing --file="Dockerfile.prod" to the Docker CLI build command.
Step 3: Decomposing the Monolith
You're using Docker, so you might as well embrace microservice architecture. You don't necessarily need to include your Rails application's database in the container that houses the Rails code. If you're using sqlite3, you can and should, but if you're using PostgreSQL or MySQL, there are already official images that are simple to configure and run containerized with Docker. There's a bit of work up front to use them with a Dockerized Rails app, but it's relatively simple.
Persistent Data Storage with a Containerized Database
If you're working with MySQL or Postgres, you want to create a data volume container to house your data. There are plenty of good reasons to use a data volume container alongside a containerized database, but suffice it to say that they are designed to persist data. With a data volume in place, you can recreate the database container during Rails development without losing your data, and you can follow the conventional mode of migrating your app's database.
Here's an example of the Docker CLI command to run a data volume intended to house PostgreSQL data:
docker create --name data -v /var/lib/postgresql busybox /bin/true
This creates the data volume container and sets up the /var/lib/postgresql directory within the container as a volume to be mounted by other Docker containers. Although specific to PostgreSQL, this same concept works for other databases. You simply need to create the volume using the path expected by the database.
The Database Container
Once you have a data volume container, mounting it from a database container is not difficult. You simply pass the name of the data container as the value for the --volumes-from flag of the run command. Below is an example of a container mounting the volume created above.
docker run -d --volumes-from data --name db -e POSTGRES_PASSWORD=mysecretpassword postgres
One important thing to note: when you do decide to get rid of the containers and corresponding data volume containers, you want to remove the last container that references the volume with the -v flag, or you could leave orphaned volumes on the system just taking up valuable space.
docker rm -v db
The Rails Service Container
With an image, a running database container, and a data volume container all in place, we can fire up our Rails application container.
Getting the code into the container is done by volume mounting the directory containing the code into a directory in the container. This is done with the -v flag of the Docker run command.
Next, to make the Rails server reachable from the outside, you'll need to do two things. First, you pass the -p flag to the run command to map the container's port 3000 to a port on the host. Second, you pass the -b flag to the rails server command to bind Rails to any IP address of the resulting container:

rails s -b '0.0.0.0'
Finally, we need to connect to the database container. One of the really nice features of Rails is that the database configuration in config/database.yml can be overridden by setting the connection information in the special DATABASE_URL environment variable. When we pair this with Docker container linking and the ease of injecting environment variables into Docker containers at runtime, we get a very simple way of connecting our Rails app to our containerized database.
With the following command, we can run our Rails app in a container, mapped to the localhost on port 3000, and connected to the database service.
docker run --name webapp --link db:db -e DATABASE_URL=postgresql://postgres:mysecretpassword@db/ -p 3000:3000 -v "$PWD":/usr/src/app myrailsapp bundle exec rails s -b '0.0.0.0'
If you're using boot2docker, you'll need to forward port 3000 from the VM to a local port to see the application in the browser, or simply access the app directly at $docker_host_ip:3000.
Step 4: Executing Rails Commands and Rake Tasks
The last thing we need is to be able to execute the usual rails and rake commands. Thankfully, Docker has the exec command, which is tailor-made for this very purpose. With it, we can execute arbitrary commands on our container. For example, if we make a change to the Gemfile locally, we can run bundler in the container simply by executing the following command, which tells Docker to execute the bundle command on the container named 'webapp':
docker exec webapp bundle
Likewise, we can migrate the database with Rake:
docker exec webapp rake db:migrate
We can even execute our tests:
docker exec webapp rake spec
In each case, Docker executes the command inside the running webapp container. As long as the container is up, it's there to do our bidding. Changes we make locally are reflected in the container because we've mounted the code with the -v flag. And with that, we've achieved the goal of a containerized development environment.
Step 5: Orchestrating Containers with Docker Compose
Before we stick a fork in it, however, there's one last optimization available to us. Orchestrating the startup, shutdown, and linking between these containers can be a bit tedious. Docker Compose can alleviate much of the pain. With a single YAML file, we can reduce the coordinated control of this multi-container application to a simple start and stop.
Here's the docker-compose.yml modeling the three containers.
---
webapp:
  build: "."
  ports:
    - 3000:3000
  volumes:
    - ".:/usr/src/app"
  working_dir: "/usr/src/app"
  command: bundle exec rails s -b '0.0.0.0'
  environment:
    - DATABASE_URL=postgresql://postgres:mysecretpassword@db/
  links:
    - db:db
db:
  image: postgres
  volumes_from:
    - data
  environment:
    - POSTGRES_PASSWORD=mysecretpassword
data:
  image: busybox
  volumes:
    - "/var/lib/postgresql"
To start the multi-container application, we simply call docker-compose up. We can still execute our typical Rails and Rake commands, as well; instead of docker exec, we just use docker-compose run webapp followed by the command. If that's just too much typing for you, create a simple alias that calls the Compose run command with the name of the Rails app container, and pass it the command.
Some developers advocate using containers while developing locally as if they were virtual machines. However, this has the potential to reduce parity between their Dev and Prod environments, while simultaneously limiting their ability to use familiar tools such as an IDE or debugger. The right way to use Docker in development is for running and testing the application and for standing up dependent services.
In this post, we've shown how to set up a Rails development environment that allows you to code locally using the toolset for your environment while also making use of containerization for running and testing the application and a database alongside it.