The operating system is dead. Long live the operating system?

Is the operating system really being replaced by containers, serverless computing, and unikernels?


Why is it that we in the tech industry are so fond of “out with the old, in with the new”? SaaS will replace on-premises applications! Smartphones will replace laptops! The cloud will replace … everything!

You may have been hearing lately that the operating system is being replaced by containers, serverless computing, unikernels, … the list of things that will purportedly make the general-purpose OS obsolete goes on and on. And, yet, operating systems are alive and well. Jim Whitehurst, in a recent analyst call, said Red Hat Enterprise Linux is “growing faster than expected—17 percent in the fourth quarter versus a low-teens target.”

Indeed, very few technologies ever replace another technology completely. It's far more common, and usually preferable, for a new technology to complement the technologies we are already using and benefiting from.

Take virtualization. There was a lot of talk at one point about how virtualization would kill hardware. Clearly, it didn’t, for two core reasons: First, all software needs hardware to run on. It’s a fundamental of computing. Second, over time the growth of software outpaced the extent to which virtualization could consolidate workloads. This actually led to net growth in hardware, even with consolidation from virtualization.

The same holds true for many of the technologies gaining attention today. Like the technologies that came before them, containers, serverless, and unikernels each provide new and distinct capabilities. However, none of them replaces the flexibility of a general-purpose operating system like Linux. Containers are an extension of the operating system. Serverless, meanwhile, is a new way to build on top of an operating system. Unikernel technology does represent a different way of building and using an operating system, but it's still an operating system.

People had the same concerns about the viability of traditional operating systems when Java virtual machines and web browsers came on the scene. But there's no getting around the fact (and, really, do we want to?) that general-purpose operating systems provide flexibility, and they will be around for as long as that flexibility is desirable in IT systems. Indeed, there is always a trade-off between purpose-built and general-purpose systems, whether in chip design or operating system design.

Constant change

All of this is not to say, of course, that the operating systems we are using today are like the OSes of 2008. Or 2015. Or 2017. Or January 2018, for that matter.

The OS is changing all the time; it has to, not just to accommodate new technologies like containers but to optimize for them.

The earliest operating systems were mostly about scheduling batch jobs serially, one after another, so that the machine could be used 24 hours a day. That quickly led to time-sharing, running programs side by side so that multiple users could work on the machine simultaneously. This was the beginning of multitenancy, and it has developed further ever since. Essentially, virtualization and containerization technology are about giving users varying levels of isolation in a multitenant environment. Kernel features for containers, such as cgroups, SELinux, and namespaces, have developed over time to support this.
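To make that concrete, here is a minimal sketch, in Go, of the kind of isolation those kernel features provide: it launches a shell in its own UTS, PID, and mount namespaces. It assumes a Linux host and root privileges, and it is illustrative rather than a real container runtime, which would also set up cgroups, an isolated root filesystem, SELinux labels, and more.

```go
// Minimal namespace-isolation sketch (Linux only, run as root).
// Illustrative: not a complete container runtime.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell wired to our terminal.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel to place the child in new UTS, PID, and mount
	// namespaces -- the same primitives container engines build on.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside those namespaces, the shell sees itself as PID 1 and can change its hostname without affecting the host, which is the essence of multitenant isolation on a shared kernel.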

Survival of the fittest (and most flexible)

While general-purpose OSes are in no immediate danger of extinction, they will need to keep providing flexibility in how users consume resources.

For example, we are starting to consume resources in more granular ways, through containers, functions, blocks, files, buckets, and so on, and all of this runs a lot more smoothly when a layer of software abstracts these higher-level concepts, supports multitenancy, and manages access to resources. Be it Linux, OpenStack, a cloud provider, or OpenShift (Kubernetes), there needs to be a system in place that abstracts which types of resources are consumed and how.
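As a sketch of what that abstraction looks like in practice, here is a minimal Kubernetes Pod definition that declares, in granular terms, how much CPU and memory it may consume inside a namespace shared with other tenants. The names, namespace, image, and numbers are purely illustrative.

```yaml
# Illustrative only: a Pod declaring granular resource consumption
# in a shared (multitenant) cluster.
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical application name
  namespace: team-a        # hypothetical tenant namespace
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:            # what the scheduler reserves for this workload
        cpu: "250m"
        memory: "128Mi"
      limits:              # what the kernel (via cgroups) will enforce
        cpu: "500m"
        memory: "256Mi"
```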

Of course, all of this demand for change and innovation falls squarely on the shoulders of hard-charging developers and innovative operations people who want to support the business at a higher velocity. New ways of thinking about development, including agile, devops, and CI/CD, provide the framework for nearly constant evolution. But since the early days of computing, developers have had to keep up with new technology by actually using it and learning as much as possible about how it works with "old" technology to determine whether it's worth the investment (and the risk).

In the beginning, developers would have to learn how to use and develop for every new computer that came into the organization because no two were the same. Today, we have a much greater level of standardization and portability (thanks, in large part, to general-purpose OSes), but developers still need to keep up to date on what is available.

Whether it's using Istio to embed common web-based programming patterns in the infrastructure, defining applications with Kubernetes YAML files, or applying disaster recovery best practices to 5 billion chunks of data in buckets, it's all about keeping up and learning to combine established principles of information technology with new technologies.

Will the general-purpose OS be around forever? If not, what will lead to its demise? Please let me know what you think.
