If you’ve kept up with the latest trends in software development, there are two terms you’ve undoubtedly encountered again and again: Docker and Kubernetes, which are essentially shorthand for containers and orchestration.
Docker containers have helped streamline the process of moving applications through development and testing and into production, while both Docker and Kubernetes have helped to reinvent the way applications are built and deployed—as collections of microservices instead of monolithic stacks.
Why are Docker and Kubernetes important, how are they changing software development, and what role does each play in the process? I’ll try to answer those questions below.
Docker and containers
Containers—supported in Linux, Windows, and other modern operating systems—allow software to run in self-contained mini-environments that are isolated from the rest of the system.
Containers have been likened to VMs, but they’re not VMs—they’re far leaner, faster to start and stop, and much more flexible and portable. Because containers can be spun up or down or scaled in or out in seconds, they make it easier to run apps in elastic environments like the cloud.
Linux and other operating systems have supported containerised apps for many years, but working with containers was not exactly user-friendly. Docker, in both its open source and commercial incarnations, is software that makes containers a user-friendly and developer-friendly commodity.
Docker provides a common set of tools and metaphors for containers so that you can package apps in container images that can be easily deployed and re-used in your own organization or elsewhere.
In short, Docker makes it a snap to create container images, version them, share them, move them around, and deploy them to Docker-compatible hosts as running containers.
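To make that concrete, here is a minimal Dockerfile for a hypothetical Python web service (the base image tag, file names, and port are illustrative, not prescriptive):

```dockerfile
# Start from an official base image, pinned to a tag for reproducibility
FROM python:3.12-slim

# Copy the app into the image and install its dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Document the port the service listens on and define the startup command
EXPOSE 8000
CMD ["python", "app.py"]
```

Running `docker build -t myorg/myapp:1.0 .` turns this recipe into a versioned image that can be pushed to a registry and started on any Docker-compatible host with `docker run`.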
When do I use Docker and containers?
Docker and containers are best suited for when you’re dealing with workloads that must have one or more of the following qualities:
- Elastic scalability: You don’t know how many instances of an app you’ll need to run. A containerised app or service can be scaled in or out to meet demand by deploying more or fewer instances of its containers.

- Isolation: You don’t want the app to interfere with other apps. Maybe you’ll be running multiple versions of the app side-by-side to satisfy different revisions of an API. Or maybe you want to keep the underlying system clean (always a good idea).
- Portability: You need to run this app in a variety of environments, and you require each setup to be reproducible. Containers let you package up the entire runtime environment of your application, making the app easy to deploy anywhere you find a Docker-compatible host—a developer desktop, a QA test machine, local iron, or remote cloud.
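All three qualities show up in even a small Compose file. As a sketch (the service and image names are hypothetical), the app is described once and that one description runs identically on a laptop, a test machine, or a cloud host:

```yaml
# docker-compose.yml -- service name and image are illustrative
services:
  web:
    image: myorg/myapp:1.0   # hypothetical image pulled from a registry
    ports:
      - "8000"               # let Docker pick host ports so replicas don't collide
```

Starting it with `docker compose up --scale web=3` spins up three isolated instances of the same image; dropping the number back down scales it in again.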
Kubernetes and container orchestration
Containers are designed chiefly to isolate processes or applications from each other and the underlying system. Creating and deploying individual containers is easy.
But what if you want to assemble multiple containers—say, a database, a web front-end, a computational back-end—into a large application that can be managed as a unit, without having to worry about deploying, connecting, managing, and scaling each of those containers separately? You need a way to orchestrate all of the parts into a functional whole.
That’s the job Kubernetes takes on. If containers are passengers on a cruise, Kubernetes is the cruise director.
Kubernetes, based on projects created at Google, provides a way to automate the deployment and management of multi-container applications across multiple hosts, without having to manage each container directly.
The developer describes the layout of the application across multiple containers, including details like how each container uses networking and storage. Kubernetes handles the rest at runtime. It also handles the management of fiddly details like secrets and app configurations.
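A Kubernetes Deployment manifest is where that description lives. As an illustration (all names and the image are hypothetical), the developer declares how many replicas should run and how they are configured, and Kubernetes keeps the cluster converged on that state:

```yaml
# deployment.yaml -- a minimal Deployment; names and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # desired number of container instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myorg/myapp:1.0   # hypothetical image in a registry
          ports:
            - containerPort: 8000
          envFrom:
            - secretRef:
                name: web-secrets  # secrets managed by Kubernetes, not baked into the image
```

Applying this with `kubectl apply -f deployment.yaml` hands the fiddly details to Kubernetes; a Service object would typically be added to expose the pods over the network.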
Kubernetes requires a certain amount of expertise to use well, although it’s far more of a turnkey solution than it used to be.
Some of the progress in ease of use is due to readily available recipes for common applications (Helm charts); some is due to a wealth of Kubernetes distributions produced by name-brand firms (Red Hat, Canonical, Docker) that work hand-in-hand with popular application stacks and development frameworks.
When do I use Kubernetes and container orchestration?
Simple containerised apps that serve a small number of users typically don’t require orchestration, let alone Kubernetes. But once an app grows beyond a trivial level of functionality or a trivial number of users, it becomes hard to avoid reinventing the wheel that orchestration systems already provide.
Here are some rules of thumb for determining when orchestration should enter the picture.
- Your apps are complex: Any application that involves more than two containers probably fits the bill. That said, modest apps that serve only a small number of users might be orchestrated through a more minimal solution like Docker swarm mode rather than Kubernetes.
- Your apps have high demands for scaling and resilience: Kubernetes and other orchestrators let you balance loads and spin up containers to meet demand declaratively, by describing the desired state of the system instead of coding reactions to changing conditions by hand.
- You want to make the most of modern CI/CD techniques: Orchestration systems support deployment patterns for apps using blue/green deployment or rolling upgrades.
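A rolling upgrade, for example, is expressed declaratively in the Deployment spec rather than scripted by hand. This fragment is illustrative; the numbers are a common conservative choice, not a requirement:

```yaml
# Fragment of a Deployment spec: roll out a new image version gradually
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count during rollout
      maxUnavailable: 0    # never drop below the desired count of ready pods
```

Pushing a new image tag and re-applying the manifest triggers the rollout; `kubectl rollout undo` walks it back if the new version misbehaves.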
There may come a day when Docker and Kubernetes are eclipsed by even friendlier abstractions and more elegant ways to create and manage containers. For now, though, Docker and Kubernetes are crucial to know and understand.