Linux at 25: Containers and unikernels prove less is more

Linux remade the datacenter and created the cloud; now it’s revolutionizing app development and delivery

If there’s been one constant through Linux’s 25 years in the wild, it’s change. The kernel itself has been through dozens of revisions; Linux distributions for nearly every use case have emerged; and the culture of Linux has evolved from weekend hobby project to an underpinning of worldwide IT infrastructure.

Now we’re seeing the first versions of the next wave of Linux change. Containerization, unikernels, and other experiments with form are reshaping Linux from the inside out, opening new avenues for the open source operating system that could (and did!) to do it all over again.

Linux’s container (r)evolution

Containers account for one major aspect of Linux's reinvention. They allow a high degree of isolation between applications, or even whole virtual systems, without the overhead typically associated with hypervisor-style VMs.

What’s remarkable about containers isn’t only the way they’ve dominated discussions about software development and operations. It’s that the underlying technology has been native to Linux for years, yet became a driver of Linux’s reinvention only after third parties commodified it.

The most obvious and pivotal example of container tech in Linux is Docker, the software product used to run applications in isolation, as well as to package, deliver, manage, and schedule them. Docker took functionality already available in the Linux kernel -- mainly, cgroups and namespaces -- and provided a convenient metaphor, front end, and workflow to wrap them in.
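As a rough illustration of one of those kernel primitives, the sketch below shows how cgroup membership is recorded per process in `/proc/<pid>/cgroup` -- the same bookkeeping Docker builds on when it places a container’s processes into a dedicated control group. The sample text and the `/docker/` path heuristic are illustrative stand-ins for a real `/proc` read, so the example runs anywhere:

```python
# Illustrative sketch: the kernel records each process's control-group
# membership in /proc/<pid>/cgroup. Docker places a container's processes
# into a dedicated cgroup; parsing this file is a common (heuristic) way
# to observe that placement. SAMPLE_PROC_CGROUP stands in for a real
# /proc read so the example is self-contained.

SAMPLE_PROC_CGROUP = """\
12:memory:/docker/4f6e2c1a9b0d
7:cpu,cpuacct:/docker/4f6e2c1a9b0d
1:name=systemd:/docker/4f6e2c1a9b0d
"""

def parse_cgroups(text):
    """Return {controller: cgroup_path} from /proc/<pid>/cgroup content."""
    mapping = {}
    for line in text.strip().splitlines():
        # Each line has the form: hierarchy-ID:controller-list:cgroup-path
        _, controllers, path = line.split(":", 2)
        for ctrl in controllers.split(","):
            mapping[ctrl] = path
    return mapping

def looks_containerized(mapping):
    """Heuristic: Docker names its per-container cgroups under /docker/."""
    return any(p.startswith("/docker/") for p in mapping.values())

cgroups = parse_cgroups(SAMPLE_PROC_CGROUP)
print(cgroups["memory"])             # /docker/4f6e2c1a9b0d
print(looks_containerized(cgroups))  # True
```

Namespaces work similarly from userspace: each process’s namespace identities are visible as symlinks under `/proc/<pid>/ns/`, and two processes sharing a namespace see the same link target.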

Not long after Docker took off, people started experimenting with a radical concept. What if we took Linux and stripped it down to nothing but a boot mechanism, a startup system, and a means to run and manage containers? Why not create a Linux that was to containers what the embedded Linuxes had been to networking or storage management? Thus, CoreOS was born.

There were more reasons to do this than sheer novelty. For one, a scaled-down Linux would be easier to manage and maintain, easier to protect from attack, and easier to patch in the face of a Heartbleed or a Shellshock. It also meant, as Matt Asay noted, a Linux that was more appealing to developers, rather than sys admins or ops people.

Docker’s success and CoreOS’s experiments inspired other Linux distributions to try similar ideas. Red Hat built in support across its product line for running containers at scale (see: OpenShift) and inaugurated its own breed of container-centric Linux, aka Red Hat Atomic Host.

In some ways, Atomic Host is a lot like CoreOS: a pared-down version of Linux that runs containers and does little else. But Red Hat’s idea wasn’t merely to create a minimal system and leave it at that. Instead, Red Hat used Atomic Host as a foundation on which to build a full Linux distribution, using containers to manage software installs on the platform. A botched or buggy install could be rolled back cleanly if needed. This doesn’t yet fully replace conventional Linux package management, but it provides a useful augmentation to it.

Canonical has done some of the same with its Snappy application-packaging system, also container-powered. Originally developed for deploying updates on the Ubuntu Phone OS, Snappy uses containers to handle software installations in the same manner as a database transaction.
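The transactional model that Atomic Host and Snappy share can be sketched in a few lines of Python. This is a toy in-memory analogy, not rpm-ostree’s or snapd’s actual code: each version is staged fully off to the side, a single “current” pointer flips only once staging succeeds, and rollback just points back at the previous version.

```python
# Conceptual sketch of transactional software installs (a toy analogy,
# not rpm-ostree or snapd itself): versions are immutable snapshots, the
# commit is a single pointer flip, and rollback restores the prior
# deployment -- a failed stage never leaves a half-written system.

class TransactionalStore:
    def __init__(self):
        self.versions = {}   # version name -> committed file snapshot
        self.history = []    # previously deployed versions, oldest first
        self.current = None  # the one pointer that defines "installed"

    def stage_and_commit(self, name, files):
        staged = dict(files)               # stage fully off to the side
        if any(v is None for v in staged.values()):
            raise ValueError("incomplete payload; nothing deployed")
        self.versions[name] = staged       # commit: atomic pointer flip
        if self.current is not None:
            self.history.append(self.current)
        self.current = name

    def rollback(self):
        if not self.history:
            raise RuntimeError("no previous deployment to roll back to")
        self.current = self.history.pop()

store = TransactionalStore()
store.stage_and_commit("app-1.0", {"/bin/app": "v1"})
store.stage_and_commit("app-1.1", {"/bin/app": "v1.1"})
store.rollback()
print(store.current)  # app-1.0
```

The key design choice the sketch captures is that “install” never mutates the running system in place; it only ever swaps which complete snapshot is active, which is what makes rollback trivial and safe.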

Unikernels: Only enough and no more

If stripping Linux to its kernel and some containers wasn’t minimal enough, another set of projects involves reducing Linux to a kernel, an application, and absolutely nothing else that doesn’t need to be there. This is the “unikernel” approach.

Like containers, unikernels aren’t a new concept; they’ve been around in one form or another for decades. Unikernels are widely touted as tiny and fast to boot, with a minimal attack surface -- but more complex to create. By and large such projects have not employed Linux, but instead use custom kernels written from scratch or build on top of minimal kernels like the Xen Project’s MiniOS.

One way Linux could be used as the basis for a unikernel is as a “library OS.” Here, the Linux kernel is essentially turned into a giant code library that’s linked into an application. The Graphene Library OS is one project that uses this approach and can be compiled to embed “native, unmodified Linux applications” in a bootable kernel.

Another prominent example of unikernels and Linux comes by way of Docker. That company purchased Cambridge-based Unikernel Systems, which had been working with unikernels in various scenarios, and used some of its technology to power the release of the Docker for Mac and Windows products. Originally, running Docker on the desktop involved booting a full Linux distribution powered by VirtualBox. Now, it involves using each platform’s native virtualization technology to boot a custom Linux kernel with an embedded Docker Engine.

Not everyone is on board with unikernels as the road ahead. Docker’s interest in unikernels in particular has spurred a spate of recent criticism. Bryan Cantrill of Joyent has argued that unikernels are “unfit for production” -- in his view, the drawbacks far outweigh the benefits. Everything runs in a single process; unikernels are difficult to debug; they create dependencies on the language and development stack used to create the unikernel. Alex Polvi of CoreOS was equally skeptical for many of the same reasons. But Docker’s plan so far has been targeted at a specific use case -- the desktop -- and isn’t intended to replace containers wholesale.

There’s always a next step

Across all these projects, the real innovation isn’t in making Linux “minimal.” Tiny Linux distributions have been a staple of the Linux world. What’s new is how long-standing problems of software delivery, management, and maintenance -- as well as system management and maintenance -- are being solved by new and creative applications of elements at the heart of Linux, or new and creative uses of the Linux kernel itself.

Where from here? For starters, there will be rising debate and dissension over whether to make Docker-style containers (as opposed to the underlying kernel technologies powering them) a deeper part of Linux proper at all. One example of this friction is the question of how to handle container runtimes as a Linux system service, exacerbated by earlier controversies over how Linux should handle system services in the first place. Are containers part of the OS, a user-space addition, or a hybrid of the two? The only way to find out is to experiment tirelessly and see which model provides the most universal benefit.

With unikernels and Linux, the future lies in figuring out where the two work best together and why. The unikernel mode of operation isn’t meant to replace containers. But it opens up possibilities that didn’t exist before or weren’t taken seriously because the implementations were lacking.

One of the constant low-level worries about Linux is fragmentation -- that the sheer diversity of Linux implementations makes it difficult to guarantee consistency. When discussing a consumer product like Android or Linux as a desktop environment, that’s one thing. But it’s another matter when we’re talking about Linux as a substrate for other technologies -- which has been a substantial part of Linux use from the start.

This kind of invention and experimentation isn’t “fragmentation.” It’s part and parcel of what Linux was always meant to be about -- a raw material that could be cut, sewn, and hemmed into any number of new shapes for any number of future needs.
