Getting rid of Docker and going 100% Kubernetes
How I got rid of Docker and went all in on Kubernetes. Image generated using craiyon.com
Introduction ⚓
I’ve been hosting my own private web services for nearly 15 years, and in that time, there have been two major changes in the way I manage software.
First was Docker in 2014-15: instead of installing services directly to my server, I could bundle them into containers and run them without having to worry about installing packages or littering files all over my server.
Second was Kubernetes—specifically, K3s 🔗—in 2019. Kubernetes let me automate a bunch of previously manual steps, like making sure containers are deployed in the right order, managing TLS certificates, and running health checks to make sure my containers are always up and running. I’ll acknowledge that Kubernetes is overkill for what I’m running (a handful of single-user services on a single server), but it has its uses and was a great way to get hands-on experience with it.
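As a quick illustration of those health checks: a liveness probe can be declared right on a container in a Deployment, and Kubernetes restarts the container if the probe starts failing. The snippet below is a minimal, hypothetical sketch; the endpoint, port, and timings are placeholders rather than my actual configuration.

livenessProbe:
  httpGet:
    path: /                  # placeholder health endpoint
    port: 80
  initialDelaySeconds: 30    # give the service time to start
  periodSeconds: 60          # check once a minute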
But despite moving to Kubernetes, I still relied on Docker for a handful of things, specifically pulling container images and applying small changes or fixes before deploying them. I still had to run Docker and Docker Compose on my laptop, periodically run docker-compose build && docker-compose push to keep my images up-to-date, and run a private image registry to make those images available to my Kubernetes “cluster.” Recently, I made some changes to my Kubernetes manifests so I no longer need to do any of that. This blog post is meant to document some of those changes, and hopefully help other folks going through this same journey.
Why I quit Docker ⚓
Don’t get me wrong, Docker’s a fantastic project and a great way to learn about containers. There’s a reason it’s such a popular project and kick-started the container revolution: it’s easy to use, there are tons of guides, and a huge number of projects provide Docker as an alternate (if not the recommended) installation method.
But over the years, Docker’s become less and less relevant. The core component of Docker—the container runtime—was spun out into the open source containerd 🔗 project, which other tools can now use directly. Docker images use the OCI 🔗 (Open Container Initiative) format, which is supported by many other tools, including Kubernetes. Even the Docker CLI (command-line interface) is no longer unique, with tools like Podman 🔗 becoming more mature.
Also, the industry in general is moving away from container tools like Docker and more towards container orchestrators like Kubernetes and OpenShift. These are much better at managing complex distributed container environments, like the kind you’d see running at companies like Google and Netflix. And while most companies aren’t on Google or Netflix’s level, the tools are versatile enough to work just as well for small teams as they do for huge multi-national companies.
For these reasons, I decided to try out Kubernetes using K3s, a lightweight distribution that can run on just a single host.
Transitioning from Docker to Kubernetes ⚓
The first step in migrating (after installing K3s, of course) was to update my manifests. I was using Docker Compose 🔗, a tool designed for managing multiple containers at once. Since I was still new to Kubernetes, I used another tool called Kompose 🔗 to convert my Docker Compose file into Kubernetes manifests. There was of course some additional cleanup and configuration, but Kompose gave me a good foundation to build from.
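To give a sense of the starting point, here is a minimal, hypothetical Docker Compose file of the kind Kompose can convert; the service name and ports are placeholders, not my real setup.

version: "3"
services:
  nextcloud:
    image: nextcloud:26-apache
    ports:
      - "8080:80"    # host:container
    restart: always

Running kompose convert -f docker-compose.yml against a file like this emits Kubernetes Deployment and Service manifests for each Compose service, which you can then clean up and apply with kubectl.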
From there, I was deep in the Kubernetes docs learning about Deployments, ReplicaSets, liveness probes, Services, Ingresses, load balancers, networking rules, and all sorts of other weirdness to get my services working. And for the most part, it worked great! There was just one small problem: Some of my services use custom container images.
One great feature of Docker Compose is that it can build and deploy custom images in one step, since it typically runs on the same host as the Docker daemon itself. With Kubernetes, however, this isn’t an option: you have to point Kubernetes at a pre-built image hosted in a container registry somewhere. Since my custom images contained some sensitive information that I didn’t want to host publicly, and I didn’t want to pay for a private registry, I decided to host my own registry as a separate Kubernetes service 🔗.
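For reference, self-hosting a registry inside the cluster can be as simple as a Deployment plus a Service wrapping the official registry:2 image. This is a hedged sketch rather than my exact manifest; the names and ports are assumptions, and a real setup would also need persistent storage and authentication.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  labels:
    app.kubernetes.io/name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: registry
  template:
    metadata:
      labels:
        app.kubernetes.io/name: registry
    spec:
      containers:
        - name: registry
          image: registry:2          # the official Docker Registry image
          ports:
            - containerPort: 5000    # default registry port
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app.kubernetes.io/name: registry
  ports:
    - port: 5000
      targetPort: 5000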
The process looked like this:
- Install Docker and Docker Compose onto my laptop.
- Build my container images on my laptop using docker-compose build.
- Push my newly created images to my private registry using docker-compose push.
- Update my Kubernetes manifests to pull images from my private registry (see the excerpt after this list).
- Deploy the Kubernetes manifests using kubectl.
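The manifest update in that list mostly boiled down to pointing each Deployment at the private registry. The excerpt below is hypothetical; the registry hostname, image name, and Secret name are placeholders.

spec:
  template:
    spec:
      imagePullSecrets:
        - name: private-registry-auth    # assumes a docker-registry Secret exists
      containers:
        - name: nextcloud
          image: registry.example.internal:5000/nextcloud-custom:latest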
Not too messy, but more complicated than I’d like. I figured there had to be a way to simplify this.
The beauty of Kubernetes’ lifecycle postStart option ⚓
Kubernetes has an amazing feature that eliminated my need for custom images: container lifecycle events 🔗. These let you run commands right after a container starts and right before it terminates. The only downside (which isn’t relevant in my case) is that the commands run on every container instance, so long-running commands can slow down container startup.
As an example, I run a Nextcloud 🔗 instance that requires custom permissions for the www-data user. Specifically, I create a new group called media that has a specific gid (group ID), then assign that group to www-data. With Docker, this meant building a custom image based on the official Nextcloud image just to run these two commands. But with Kubernetes, I can just drop them into the manifest like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: nextcloud
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nextcloud
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nextcloud
    spec:
      containers:
        - image: nextcloud:26-apache
          imagePullPolicy: Always
          name: nextcloud
          resources: {}
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "groupadd -og 1000 media && usermod -aG media www-data"]
      hostname: nextcloud
      restartPolicy: Always
This does everything I need, without building any custom images. And with no custom images to build, I no longer needed Docker. Without Docker, I no longer needed a private container registry. Just like that, several layers of complexity vanished. Now, the process looks like this:
- Update my Kubernetes manifests.
- Deploy the manifests using kubectl.
Conclusion ⚓
As revolutionary as Docker was, I think its time has passed. Kubernetes is more complex, maybe too complex for a single user, but its functionality far surpasses Docker’s. Once you get it configured and understand how it works, it’s even easier than Docker, in my opinion. Hopefully this post gave you some ideas on how to simplify your own container deployment even more!