Running Ruby on Rails on Docker

I have been developing Ruby on Rails apps in Docker for several years now. I couldn't imagine not using Docker at this point!

An Introduction to Docker

Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run in the cloud or on premises. Docker is also the name of the company behind the technology. The underlying technology that powers Docker has been part of Linux for many years; Docker's contribution is tooling that makes working with it seamless and easy.

A container is a sandboxed process on your machine, isolated from all other processes on the host. That isolation is built on long-standing Linux kernel features: namespaces and cgroups.

You build a Docker image from a file called a Dockerfile, which contains all of the commands required to build the image. An image is essentially a custom file system for a container: it packages everything needed to run an application, including dependencies, configuration, scripts, and binaries. The image determines how a container is instantiated and run.
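If you want a feel for the image/container distinction before we get to Rails, you can play with the official Ruby image (the same base image used in the Dockerfile below). A quick sketch:

# Download an image: a read-only snapshot of a file system plus metadata
docker pull ruby:3.1.3

# Start a container from that image and run a single command in isolation
docker run --rm ruby:3.1.3 ruby -e 'puts RUBY_VERSION'

# The image itself is unchanged; every run starts from a fresh container
docker images
docker ps -a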

Docker is not a virtual machine, nor is it a replacement for one. It's an entirely different thing, though some concepts overlap. With a VM you run an entire operating system, with resources allocated from the host through a layer called the hypervisor. With Docker you share the host's operating system and run just your app over a thin layer called the Docker Engine. VMs are heavy in terms of resource allocation, while Docker containers are lightweight. Docker images are meant to be immutable and containers are ephemeral: you "compile" (i.e. build) your image and run it, and when you want to make changes you build a new image and run a new container. Deploying an app to a VM, by contrast, usually entails copying your app's files onto the machine.

Building and Running a Docker Image For Your Rails App

What you want to do in your Dockerfile is install any system dependencies your app requires, install Bundler and your Ruby gems, run any asset compilation, expose any ports you wish to open, and then tell the Docker Engine how to start your services.

My Dockerfile is a little different from what is proposed for Rails, but it's tailored to my apps and to how I run Rails in production. I think that's the point: how you run your application may be different from how I do it, or DHH does it, or anyone else, so you need to tailor your tooling to support that. I slice "microservices" from a single Docker image of my Rails app: a web instance, separate Sidekiq instances based on queues and workload, and some one-off tasks such as database migrations. I control which services are started with environment variables I pass into the container when I start it, as sketched below. I don't use Procfiles for this, but you might tailor your startup command accordingly.
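To make that concrete, here is roughly how one image becomes several services. The ENABLE_WEB and ENABLE_WORKERS switches are read by the startup script shown later; the one-off migration simply overrides the container's default command, which is one common way to handle that kind of task (other environment variables, such as RAILS_MASTER_KEY, are omitted for brevity):

# Web instance: Nginx + Puma
docker run --rm -p 80:80 --env ENABLE_WEB=1 my-rails-app

# Worker instance: Sidekiq only
docker run --rm --env ENABLE_WORKERS=1 my-rails-app

# One-off task, e.g. database migrations, by overriding the default command
docker run --rm my-rails-app bin/rails db:migrate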

Generally, in all of my Rails projects I run Nginx in front of Puma because I want static assets served by Nginx. It's faster overall, and it leaves Puma's threads free to do real work. In the Dockerfile I install Nginx and copy in a config for the server.

Dockerfile


FROM ruby:3.1.3

ARG RAILS_ENV=production
ENV RAILS_MASTER_KEY=

# Install core packages
RUN apt-get update -qq && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends imagemagick nginx libvips libvips-dev libvips-tools libpq-dev && \
    rm -rf /var/lib/apt/lists/* /var/cache/apt

# Install Bundler and update RubyGems
ARG BUNDLER_VERSION=2.3.26
RUN gem install "bundler:${BUNDLER_VERSION}" --no-document && \
    gem update --system && \
    gem cleanup

# Install NodeJS (https://github.com/nodejs/docker-node/blob/main/14/bullseye/Dockerfile)
ARG NODE_VERSION=16.14.2
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
  && case "${dpkgArch##*-}" in \
    amd64) ARCH='x64';; \
    ppc64el) ARCH='ppc64le';; \
    s390x) ARCH='s390x';; \
    arm64) ARCH='arm64';; \
    armhf) ARCH='armv7l';; \
    i386) ARCH='x86';; \
    *) echo "unsupported architecture"; exit 1 ;; \
  esac \
  # gpg keys listed at https://github.com/nodejs/node#release-keys
  && set -ex \
  && for key in \
    4ED778F539E3634C779C87C6D7062848A1AB005C \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    74F12602B6F1C4E913FAA37AD3A89613643B6201 \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    A48C2BEE680E841632CD4E44F07496B3EB3C1762 \
    108F52B48DB57BB0CC439B2997B01419BD92F80A \
    B9E2F5981AA6E0CD28160D9FF13993A75599653C \
  ; do \
      gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || \
      gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; \
  done \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
  && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
  && ln -s /usr/local/bin/node /usr/local/bin/nodejs \
  # smoke tests
  && node --version \
  && npm --version

# Install Yarn Package Manager
ARG YARNPCKG_VERSION=1.22.18
RUN set -ex \
  && for key in \
    6A010C5166006599AA17F08146C2130DFD2497F5 \
  ; do \
    gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || \
    gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; \
  done \
  && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARNPCKG_VERSION/yarn-v$YARNPCKG_VERSION.tar.gz" \
  && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARNPCKG_VERSION/yarn-v$YARNPCKG_VERSION.tar.gz.asc" \
  && gpg --batch --verify yarn-v$YARNPCKG_VERSION.tar.gz.asc yarn-v$YARNPCKG_VERSION.tar.gz \
  && mkdir -p /opt \
  && tar -xzf yarn-v$YARNPCKG_VERSION.tar.gz -C /opt/ \
  && ln -s /opt/yarn-v$YARNPCKG_VERSION/bin/yarn /usr/local/bin/yarn \
  && ln -s /opt/yarn-v$YARNPCKG_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
  && rm yarn-v$YARNPCKG_VERSION.tar.gz.asc yarn-v$YARNPCKG_VERSION.tar.gz \
  # smoke test
  && yarn --version

# Copy and Build Rails App
WORKDIR /app

# Install the Gems before copying the app code in
# This will speed up future builds by ensuring that regular code changes do not require a full bundle install 
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs "$(nproc)"

# Similar logic for npm packages. We install before copying code over. 
COPY package.json yarn.lock ./
RUN yarn install

COPY . .

RUN if [ "$RAILS_ENV" = "production" ] ; then bin/rails assets:precompile ; fi

# I run Nginx in front of Puma. I want Nginx to serve static assets. 
RUN ln -sf /dev/stderr /var/log/nginx/error.log
RUN ln -sf /dev/stdout /var/log/nginx/access.log
COPY config/nginx.conf /etc/nginx/.

# The docker/run script starts up Nginx and Rails
CMD ["docker/run"]

EXPOSE 3000
EXPOSE 80

The external files I am referencing in the Dockerfile:

config/nginx.conf


worker_processes auto;
daemon off;
pcre_jit on;
error_log /var/log/nginx/error.log warn;
pid /app/tmp/pids/nginx.pid;
include /etc/nginx/modules/*.conf;

events { worker_connections 1024; }

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  server_tokens off;
  client_max_body_size 10m;
  keepalive_timeout 120;
  sendfile on;
  tcp_nodelay on;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:2m;
  gzip on;
  gzip_vary on;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  upstream app_server { server 127.0.0.1:3000 fail_timeout=0; }

  server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /app/public;

    add_header X-Frame-Options "DENY";
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Xss-Protection "1; mode=block" always;
    add_header Referrer-Policy "origin-when-cross-origin" always;

    location = /favicon.ico {
      log_not_found off;
      access_log off;
    }

    location = /robots.txt {
      allow all;
      log_not_found off;
      access_log off;
    }

    location ~ ^/(assets|packs) {
      expires max;
      gzip_static on;
      log_not_found off;
    }

    location ~ /\.ht {
      deny all;
    }

    location / {
      proxy_set_header X-Forwarded-Ssl on;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Server $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_read_timeout 300s;
      if (!-f $request_filename) {
        proxy_pass http://app_server;
        break;
      }
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /app/public;
    }
  }
}

docker/run

I control which services to start with the environment variables I pass when running the Docker container. This file, `docker/run`, executes a Ruby script that does the heavy lifting of starting and supervising services. It's a flexible system that lets me use a single Docker image to run a Rails web instance, separate Sidekiq processes, and a service that performs database migrations, simply by changing a few environment variables.


#!/bin/sh
exec bundle exec ruby config/docker.rb

config/docker.rb


###############################################################################
# Docker
###############################################################################

STDOUT.sync = true

require 'dotenv'
require 'logger'
require 'json'

class Docker
  def initialize
    @config_url = false
    @pids = []
    @pid_commands = {}
    @quit = false

    Signal.trap('QUIT') do
      quit_all
    end

    Signal.trap('TERM') do
      quit_all
    end
  end

  def call
    logger.info "Starting container (rails_env: #{rails_env}, deploy_env: #{deploy_env})"

    if ENV['ENABLE_WEB'].to_i == 1
      start_webpack_dev_server if rails_env == 'development'
      start_rails_s
      start_nginx
    end

    if ENV['ENABLE_WORKERS'].to_i == 1
      start_worker
    end

    wait_all
  end

  private

  def deploy_env
    @deploy_env ||= ENV.fetch('DEPLOY_ENV') { File.file?('/.dockerenv') ? 'docker' : 'local' }
  end

  def logger
    @logger ||= Logger.new(STDOUT)
  end

  def load_environment(config_file)
    unless File.file?(config_file)
      logger.info 'Using local environment'
      return
    end

    logger.info "Loading environment from `#{config_file}`"
    Dotenv::Parser.new(IO.read(config_file)).call.each_pair do |key, value|
      ENV[key] = value.to_s
    end
  end

  def rails_env
    @rails_env ||= ENV.fetch('RAILS_ENV', 'development')
  end

  def spawn_(command, env = {})
    @pids << Process.spawn(env, command)

    command = "#{command} `#{env.to_json}`"
    logger.info "Started[#{@pids.last}] #{command}"
    @pid_commands[@pids.last] = command
  end

  def system_(command, env = {})
    logger.info "Running #{command}"
    system(env, command)
  end

  def start_nginx
    spawn_ 'nginx'
  end

  def start_webpack_dev_server
    spawn_ 'bin/webpack-dev-server'
  end

  def start_rails_s
    system 'rm -f /app/tmp/pids/server.pid'
    spawn_ 'bin/rails s -b 0.0.0.0 -p 3000'
  end

  def start_worker
    spawn_ "bundle exec sidekiq -C config/sidekiq.yml -t 60 -c #{ENV.fetch('RAILS_MAX_THREADS', 20)}"
  end

  def quit_all
    @quit = true

    @pids.each do |pid|
      Process.kill 'TERM', pid
    end
  end

  def wait_all
    until @pids.empty?
      pid = Process.wait
      logger.info "Exited[#{pid}] #{$?.exitstatus} - #{@pid_commands[pid]}"
      @pids -= [pid]

      if !@quit
        logger.info 'Shutting down container due to component failure'
        quit_all
      end
    end
  end
end

Docker.new.call

When you put this all together, you can build and run your Docker image with the following commands:


docker build -t my-rails-app .
docker run --rm -it -p80:80 --env RAILS_MASTER_KEY=[see config/master.key] --env ENABLE_WEB=1 --env ENABLE_WORKERS=1 my-rails-app

What these commands say is: build a Docker image named "my-rails-app" from the current directory, then run that image, mapping local port 80 to port 80 in the container and setting the given environment variables inside it. The "--rm" flag deletes the container when you stop it, and "-it" allocates a pseudo-TTY connected to the container's stdin, keeping the session interactive, which is useful if you wish to connect to the container to do "things". If all went well, you can visit http://localhost to see your app running.
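Once the container is up, you can attach to it from another terminal to poke around or run one-off commands; use whatever container ID `docker ps` reports (the base image is Debian-based, so bash is available):

# Find the running container
docker ps

# Open a shell inside it
docker exec -it <container-id> bash

# Or run a single command, e.g. the Rails console
docker exec -it <container-id> bin/rails console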

At this point you have an image you can run in a production environment. How you get it there is another exercise, and the Docker ecosystem is complicated. You could adopt mrsk and give "Capistrano for Docker" a go. Personally, I run my own Docker Swarm cluster because I am a masochist, or I use AWS ECS when I am doing work for clients.
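Whichever route you take, the first step is usually getting the image into a registry your servers can pull from. A minimal sketch, assuming a Docker Hub account (or any registry) named myaccount, which you would replace with your own:

# Tag the local image for the registry (registry/repository:tag)
docker tag my-rails-app myaccount/my-rails-app:1.0.0

# Authenticate and push
docker login
docker push myaccount/my-rails-app:1.0.0

# On the server, or from your orchestrator, pull and run that exact tag
docker pull myaccount/my-rails-app:1.0.0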

Wrap It Up..

I recommend you check out dockerfile-rails for a jump start on building Docker images for your Rails project. If you have a Rails project you want to containerize and have questions about how to get there, I would be happy to chat with you! I think Docker is a lot of fun and extremely useful, but I'm not blind to the complexity it may present at first glance. Once you start down the path of using containers for your projects, I can promise you won't look back!