Running a docker registry in my homelab
Home labs are a great place to learn and tinker with systems. I love mine because I get to wear my systems administrator hat. I’ve been doing a lot of application development lately, as well as tinkering with various build & deployment tools for those applications. The best way to package and ship those applications, in my opinion, is docker. It’s just so good: you can package up all of the tools and configuration into a distributable unit, using an open standard.
Those packaged units are called images in docker. And in order to distribute them, you need a place to store them. Enter the registry.
The Registry is a stateless, highly scalable server side application that stores and lets you distribute container images and other content. Learn more about it here -> https://distribution.github.io/distribution/
When you pull an image from the docker hub, you are pulling from an image registry. You can run this registry yourself, which is exactly what I want to do.
There are hosted registries you can use, such as hub.docker.com or GitHub Packages. There are also paid options like AWS ECR or DigitalOcean’s container registry.
I want my own because I don’t want hobby apps and experiments on a public registry. I also do not want to pay for a private registry when I am perfectly capable of running my own. I do not require authentication, since the registry is only reachable from my home network.
I do not want this to be a tutorial on how to set up and run the image registry. There are a lot of good resources for that. Instead I want to chat about 3 different ways to run the registry, and the headaches that come with each.
For a few months I have been running the registry on my home network and deploying some apps from it, but it has felt fragile. I have been tweaking and turning far more knobs and dials than I had expected. It took a lot of effort just to get communication between docker and the registry happening over TLS.
There are 3 ways to run the registry: insecurely, securely with self-signed certificates, and securely with “proper” certificates.
Insecurely
This is easy enough. You run the registry without creating any certs, then configure each docker installation on your network to allow insecure registries. It mostly just works. What I ran into was that some applications that build docker images using buildkit would fail to push the image to the registry. Getting past that required customizing the buildkit configuration, but I simply could not get it working correctly (especially with Kamal). Going this route meant fiddling with way too many knobs that were not the registry itself.
If I were only running the registry and building images on my workstation, it would be fine.
To customize docker runtime settings, create the file /etc/docker/daemon.json if it does not already exist, add the following, and then restart docker.
{
  "insecure-registries": ["registry.lindauerlab.net:5000"]
}
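On a systemd-based host, the restart is just:

sudo systemctl restart docker

As for the buildkit failures: standalone buildkit builders have their own version of this knob in buildkitd.toml. This is roughly the sort of thing I was fiddling with, not a recipe I can vouch for, since I never got it fully working with Kamal (the hostname is mine, and for a buildx builder you pass this file at builder creation time):

# buildkitd.toml: mark the registry as plain HTTP so pushes skip TLS
[registry."registry.lindauerlab.net:5000"]
  http = true
  insecure = true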
Securely with a self-signed certificate
Doable, but annoying, to be real blunt. Again, so many knobs and dials to fiddle with that are not the registry or immediately related to getting work done. Now I am managing the certificates and distributing them to the different machines that need them. I had trouble getting this working at all. Sometimes docker login would fail due to SSL errors. Most of the time buildkit failed to push an image. I realized quickly that I would have to spend more time janitoring certificates than I wanted to, which is to say more than zero percent.
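For the record, the dance looks roughly like this: generate a cert whose subject alternative name matches the registry hostname, then copy the public half onto every docker host that talks to it (the hostname is mine; the certs.d path is where docker looks for per-registry CAs):

# Generate a self-signed cert with a SAN matching the registry hostname
openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
  -subj "/CN=registry.lindauerlab.net" \
  -addext "subjectAltName = DNS:registry.lindauerlab.net" \
  -keyout domain.key -out domain.crt

# On every machine that pulls or pushes, teach docker to trust it
sudo mkdir -p /etc/docker/certs.d/registry.lindauerlab.net:5000
sudo cp domain.crt /etc/docker/certs.d/registry.lindauerlab.net:5000/ca.crt

Multiply that last step by every machine on the network and you can see where the janitoring comes in.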
Securely with certificates from letsencrypt
The option I ultimately ended up going with.
This path has the least amount of friction. None of my VMs are publicly accessible from the Internet, so I went with DNS validation for generating the certs. I did loosen up the permissions on the cert directories so I could mount them into the registry container with less hassle. I could probably run an NGinx proxy on the server and route requests to the registry container to avoid the permissions issue. The trade off, I guess, is picking which knobs and dials you care to tweak.
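By loosening permissions I mean something along these lines, since certbot keeps its live and archive directories root-only by default. A judgment call I am comfortable with on a home network; tighten to taste:

# letsencrypt's live/ and archive/ dirs are 700 by default
sudo chmod 755 /etc/letsencrypt/live /etc/letsencrypt/archive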
This has far fewer dials. Far less fiddling. Certbot makes it super easy to get certs. Most of the fiddling work is done on a single machine now, instead of across many machines. Overall, just less work. And something I can more easily encapsulate in an ansible playbook.
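I have not written that playbook yet, but a minimal sketch might be as small as copying the project over and starting it (the hosts group and destination path here are made up):

- hosts: registry
  become: true
  tasks:
    - name: Copy the registry project over
      ansible.builtin.copy:
        src: docker-registry/
        dest: /opt/docker-registry/

    - name: Bring the registry up
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/docker-registry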
My working registry project
I am keeping this in a project directory called ~/docker-registry. It consists of a docker-compose.yml, a symlink to the letsencrypt certs, and a directory to store the data.
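On disk it looks roughly like this:

docker-registry/
├── certs -> /etc/letsencrypt/live/registry.lindauerlab.net
├── data/
└── docker-compose.yml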
Generate certificates:
sudo certbot certonly --manual --preferred-challenges dns-01 -d registry.lindauerlab.net
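Certbot will pause and ask you to create a TXT record before it issues anything. The record looks something like this, with the value being whatever certbot prints for your run:

_acme-challenge.registry.lindauerlab.net.  300  IN  TXT  "<value-from-certbot>"

Worth knowing: certs requested with --manual do not renew unattended without extra hooks, so expect to repeat this dance roughly every 90 days.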
Symlink certs directory:
sudo ln -s /etc/letsencrypt/live/registry.lindauerlab.net certs
docker-compose.yml
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain1.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey1.pem
    volumes:
      - ./data:/data
      - ./certs/:/certs
It’s important that you use the fullchain for the TLS cert or you will get certificate authority errors.
Now it’s as simple as docker compose up -d and I have a working registry.
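To put something in it, tag and push from any machine on the network (my-app is a stand-in name):

docker tag my-app:latest registry.lindauerlab.net:5000/my-app:latest
docker push registry.lindauerlab.net:5000/my-app:latest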
Test with curl -
curl https://registry.lindauerlab.net:5000/v2/_catalog
{"repositories":["some-test-project-i-already-pushed-up"]}