Time with Docker
I've been spending a lot of time with Docker lately. It's a pretty neat technology that solves a couple of hard problems. Firstly, it lets you isolate your application dependencies from your operating system. Every application has a set of programs and libraries it depends on. In a traditional Linux distribution, these are installed via a package manager.
Every package you install changes the system. This is normally a destructive change, unless you have a file system that supports snapshots, like Btrfs, or a distribution that values immutability, like NixOS. Docker takes a slightly different approach.
If you have an application that runs on a particular distribution, say CentOS 6, you package just enough of CentOS 6 for your application to run. That package is essentially a Docker image, and a running instance of it is a container. It's not a new idea, but it has been wrapped in a powerful and useful command-line tool.
The second very neat problem Docker solves is how to describe and build these images. Docker has defined a DSL expressed in a format called a Dockerfile. The Dockerfile is a recipe that describes how to set up your image.
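A minimal Dockerfile is just a short list of instructions. This is a sketch, assuming a hypothetical web application; the package and paths are illustrative:

```dockerfile
# Start from a minimal CentOS 6 base image
FROM centos:6

# Install the runtime dependencies the application needs
RUN yum install -y httpd && yum clean all

# Copy the application's files into the image
COPY ./app /var/www/html/

# Document the port and describe how to start the application
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Running `docker build .` in the directory containing this file turns the recipe into an image, step by step.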
By automating the process of generating images, Docker solves one of the main problems of virtualization and operations: keeping virtual machines up to date in a timely and efficient manner.
These two features alone would make it very useful, but it has one additional concept which gives it even more power: overlays. Each command you run inside a Dockerfile creates a new layer. This design makes Docker images reusable, composable and shareable.
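Because layers are shared, two images built from the same base only store their differences. A sketch, with illustrative package names:

```dockerfile
# image-a/Dockerfile
FROM centos:6                     # base layers, stored once
RUN yum install -y python         # a new layer on top; cached across rebuilds

# image-b/Dockerfile
FROM centos:6                     # reuses the same base layers on disk
RUN yum install -y ruby           # only this layer is unique to image-b
```

You can see the individual layers that make up any image with `docker history <image>`.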
This was driven home to me today when I needed to deploy a Node.js-based application called Hubot. The base image comes from Node.js. You add your configuration and commands to a Dockerfile and then run the result as a container. Iteration is fast, unlike traditional OS virtualization, because of the integrated automation, and this is key to what makes it so powerful.
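The Hubot Dockerfile amounts to only a few lines. A sketch, assuming the official Node.js base image and a Hubot checkout in the build directory; the adapter choice is illustrative:

```dockerfile
# Start from the official Node.js base image
FROM node

# Copy the bot's source into the image and install its dependencies
COPY . /hubot
WORKDIR /hubot
RUN npm install

# Start the bot (the adapter flag here is just an example)
CMD ["bin/hubot", "--adapter", "shell"]
```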
It works equally well for continuous integration. Deploy a Jenkins image, officially blessed by upstream, add some slaves for each of the OSes you want to target, and you have an easily deployable and repeatable build system.
It's not without problems, but most of them seem solvable given time. This entire category is pretty interesting, and I'm looking forward to what CoreOS comes up with. Combining a distributed systemd with containers seems like a natural fit.