How thin? Imagine the Linux server as a process

Imagine a time when processes and services are completely transitory and server-agnostic, carrying their dependencies with them; it's not far away


Lately I've been causing a ruckus among readers who appear to have a very narrow view of Linux and other Unix-like operating systems. My main point has been that we need a streamlined, finely tailored Linux server distro that better supports what server instances are becoming: transient, highly specialized bundles of processes and services. At some point, beyond Linux containers and cloud-scale server instances, we hit on the concept of server as process.

Several years ago, I wrote a column about how we might proceed beyond server virtualization. Rereading it now, I think we're still not there in many respects, but we are rapidly approaching a few of the concepts I described. I said then:

Perhaps, at some point in the future, we'll look at a blade chassis or even several racks of them, and instead of seeing a few dozen physical servers running our hypervisor du jour, we'll instead view them as a big pile of resources to be consumed as needed by any number of services, running on what might be described as an OS shim. If there's a hardware failure, that piece is easily hot-swapped out for replacement, and the only loss might be a few hundred processes disappearing from a table containing tens of thousands, and those few hundred would be instantly restarted elsewhere with no significant loss of service.

What I was actually thinking of then is still a ways away -- namely, a distributed process tree across physical server clusters with bus-speed interconnects -- but we are moving into a reality where something similar is possible in the form of containers. Instead of a distributed process tree spanning physical hosts with distributed kernel interaction, we are containerizing our processes into silos that can be spread around a cluster on a whim. This isn't nearly as thin or lithe as moving the service itself around, but within the constraints of what we're able to achieve with distributed processing today, it's about as close as we can get.

At that point, what is in that container? Using Docker as an example, the container is a delta based on an OS image: it contains only the changes made within that container (or inherited from the image it was cloned from), while the entirety of the base image remains available underneath.
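
Docker's own tooling makes that delta visible. A quick sketch, assuming the stock ubuntu and memcached images from the public registry (the container name "cache" is just a placeholder):

    # Each build step is a read-only layer stacked on the base image
    docker history ubuntu

    # A running container adds one writable layer on top; "docker diff"
    # lists only the files added (A), changed (C), or deleted (D) there
    docker run -d --name cache memcached
    docker diff cache

Everything not in that short list is untouched -- shared, link-like, with the base image.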

We may have a memcached service in the container that needs only a smattering of libraries and binaries to function, but it has the rest of the distro image along for the ride, if only in a linked sense. There's no kernel, there's no init -- there's essentially nothing within that container except links to the base image and small file changes. That base image may be built from a "general" distribution, but the parts of that distribution actually in use are exceedingly few. That's the whole idea.
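
A minimal sketch of such a container, assuming a stock Ubuntu base image (any general distribution image would serve):

    FROM ubuntu
    # The entire delta: the memcached package and its few libraries
    RUN apt-get update && apt-get install -y memcached
    EXPOSE 11211
    # The one process this "server" exists to run, as the unprivileged
    # user the package creates
    CMD ["memcached", "-u", "memcache", "-m", "64"]

Build that, and the result is the Ubuntu image plus one thin layer: no kernel, no init, no sshd -- just memcached and its links back to the base.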

What runs underneath this collection of siloed processes? The folks at CoreOS could tell you a thing or two about that. Their whole concept is an ultra-thin, bare-metal Linux build specifically tuned to host clusters of Docker containers. That, then, becomes the actual server "distribution" in use across all computing resources. The containers running on top of that core can't really be considered servers, since each usually runs a single process; they're more appropriately considered self-contained processes that carry their own dependencies along with them. As I said in response to a comment last week, this turns the concept of a Linux server distribution on its head.
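
To make that concrete, here's a sketch of a fleet unit file -- fleet being the CoreOS cluster-level scheduler -- that treats our memcached container as a host-agnostic service. The names are placeholders, and the pattern mirrors the CoreOS documentation:

    # cache@.service -- run a memcached container anywhere in the cluster
    [Unit]
    Description=Containerized memcached, scheduled cluster-wide
    After=docker.service
    Requires=docker.service

    [Service]
    # Remove any stale container left behind by a previous run
    ExecStartPre=-/usr/bin/docker rm -f cache-%i
    ExecStart=/usr/bin/docker run --rm --name cache-%i -p 11211:11211 memcached
    ExecStop=/usr/bin/docker stop cache-%i

    [X-Fleet]
    # Never place two instances on the same machine
    Conflicts=cache@*.service

Submit a couple of instances with fleetctl start, and when a machine dies, fleet reschedules its units on a survivor -- the "restarted elsewhere with no significant loss of service" scenario from the quote above, in miniature.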

We have moved from big Linux boxes running hundreds or thousands of processes and many different services, to virtualization hypervisors running dozens, hundreds, or thousands of virtual Linux servers, each running hundreds of processes and several services. We are now moving toward eliminating the server itself, paring away the associated baggage along the way, and turning up services that are actually services, with no "server" to speak of underneath.

It's early yet, and there are many warts on these technologies. The transport and communication mechanisms are rough in many places, and plenty of "good enough for now" uses of existing frameworks will have to suffice until we can dispense with them. Perhaps the biggest target is network communication within containerized systems. What we have now is functional, but there has to be a better way.
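
What we have now, concretely, is NAT at the host's edge plus environment-variable plumbing between containers. A sketch, where the "mywebapp" image is hypothetical:

    # Publish the service port through the host via NAT (iptables rules
    # managed by the Docker daemon)
    docker run -d --name cache -p 11211:11211 memcached

    # Era-typical linking: Docker injects the peer's address into the
    # consumer as environment variables such as CACHE_PORT_11211_TCP_ADDR
    docker run -d --name web --link cache:cache mywebapp

It works, but wiring services together out of NAT rules and injected environment variables is precisely the sort of "good enough for now" scaffolding that's begging to be replaced.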

Then there's the matter of changing the thinking of service architects and admins. We are at a point where the spectrum of Linux administrators is broadening. There are those who need GUI tools to manage their servers; those who've never needed a GUI, but have a hard time understanding why you'd never run sshd in a Docker container (unless that container's sole purpose is to be an SSH server); and those who are eliminating nearly everything we consider an operating system in order to streamline their services and get the most out of their hardware in terms of performance and scalability. It's a wide curve, and it's moving fast. Best to keep up.
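
On the sshd point: the container doesn't need its own front door, because the host already provides one. A one-line illustration, assuming the "cache" container from earlier:

    # A shell inside a running container, no sshd in the container required
    docker exec -it cache /bin/bash

(On Docker engines that predate exec, nsenter fills the same role.) Either way, a second daemon inside the silo buys you nothing but attack surface.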
