Containers and virtual machines: Which is best for you?

Use an extension when you need a layer, and you have a whack-a-mole problem. Use a layer when an extension will do, and you have a pig.

If you want to stop someone from driving a car, you can take away the keys. Quick, easy and effective. Alternatively, removing the wheels and engine will work, too.

A container is an OS extension that takes away keys, leaving the OS intact. A virtual machine (VM) reworks the architecture, separating the car from the wheels and engine. Taking the keys is easy, but the driver might have spares, and a car can be hot-wired in about a billion ways. Removing the wheels and engine is a lot of trouble, but the car won’t move without them. And when you mount snow tires, that removable wheel architecture is handy.

Time-sharing computers

Containers and VMs go back to the beginning of time-sharing, an outstanding advance in mid-twentieth century computing. A single time-sharing computer supports multiple users running multiple tasks at the same time. Each user thinks they control the entire machine.

But time-sharing users must be protected from each other. A user’s broken code can bring down the system for everyone or monopolize shared resources such as memory or file storage.

The early solution, still in use today, was OS-controlled privileged instructions. An ordinary user’s process must request permission from the OS to execute a privileged instruction, and the OS allows or denies the request based on whether it is safe for other users. However, properly allocating permissions is a whack-a-mole problem, and it is still at the root of most security vulnerabilities.
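This ask-the-kernel pattern can be sketched in a few lines of Python. Raising a process’s scheduling priority (a negative “nice” increment) is a privileged action on Unix, so it makes a convenient example; the function name here is made up for illustration.

```python
import os

# Sketch: an ordinary process cannot perform privileged actions directly.
# It asks the kernel via a system call, and the kernel allows or denies
# the request depending on the caller's privileges.
def request_priority_boost():
    try:
        os.nice(-1)          # ask the OS to raise our scheduling priority
        return "allowed"     # the kernel granted it (e.g., running as root)
    except PermissionError:
        return "denied"      # the kernel refused the privileged request

print(request_priority_boost())
```

An unprivileged process typically sees "denied"; the same code run as root sees "allowed" — the decision belongs to the OS, not the program.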

Virtual machines

In a brilliant moment, someone thought of running an OS as an ordinary program on another OS. The code of the abstract “guest” OS is identical to that of an ordinary OS, except that hardware interactions are redirected to the supervisory OS, the hypervisor. The hypervisor handles each interaction with the hardware, either simulating the action or passing it on to the hardware for execution.

Several guest OSs, called VMs, run on a single hypervisor, which allocates processing and other hardware resources to the guests. The VM architecture separates the wheels and engine of the OS from the user of the VM. Processes running on a VM can do all the usual things, but without the hypervisor to engage the engine and wheels, they are harmless.

VMs and hypervisors are simultaneously complex and elegantly simple. Users of guest systems are isolated in their own worlds. The logically complex permission decisions are made within the VM, not the hypervisor. Therefore, the hypervisor is relatively simple in a natural layered architecture.

Guest OSs and hypervisors can be modified independently. A guest OS can run on different hardware architectures by changing the hypervisor without touching the guest. The image of a running guest system, the VM, is an ordinary data file that can be duplicated and restarted, so spinning up identically configured systems is a simple copy-and-restart operation.

Without these powerful qualities of VMs, cloud computing as it is practiced today would be impossible. VMs can be migrated from physical machine to physical machine without the processes running on them being aware of the change. A VM image of an application can be preconfigured, complete with its own pre-tuned OS, ready to be started instantly.
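Because a VM image is just a file, cloning a preconfigured guest is literally a copy. The sketch below uses made-up filenames, and a placeholder file stands in for a real, pre-tuned guest disk image; booting the clone with a hypervisor CLI such as qemu-system-x86_64 is shown only as a comment.

```shell
# Minimal sketch with made-up filenames: a guest's disk image is an
# ordinary file, so cloning a pre-tuned VM is just a copy.
truncate -s 64M golden.img      # placeholder for a preconfigured guest image
cp golden.img web-01.img        # duplicate: an identically configured VM
ls -l golden.img web-01.img
# Booting the clone would hand the file to a hypervisor, e.g.:
# qemu-system-x86_64 -drive file=web-01.img -m 2048
```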


Containers

In most pre-cloud time-sharing systems, the OS enforces isolation at the user level, not via VMs. Each process running on a machine has an owner with a set of permissions enforced by the OS. All processes run in a single OS, which controls access to critical resources. Generally, this is more efficient than VMs because the OS is not duplicated for each user.
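User-level isolation rests on ownership and permission bits that the single shared OS checks on every access. A minimal Python sketch, using a temporary file so it is self-contained:

```python
import os
import stat
import tempfile

# Sketch of user-level isolation: every file has an owner and permission
# bits, and the single shared OS checks them on each access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)            # owner may read/write; other users may not
st = os.stat(path)
print(st.st_uid, oct(stat.S_IMODE(st.st_mode)))   # owner's uid and mode 0o600
os.remove(path)
```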

Containers are an extension of this time-sharing mechanism. System administrators soon discovered that not all users are equally responsible: Some could be trusted to respect critical resources, but the bad cowboys caused trouble. One approach was “restricted shells” (rbash, rksh, and the restricted Bourne shell) to corral untrustworthy users, but mischievous users battled with the sysadmins and found ways around these restrictions. Eventually, these restricted shells grew stronger and more sophisticated and were often called “jails.”
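The restriction is easy to see firsthand. In bash’s restricted mode (bash -r), changing directories, resetting PATH, and redirecting output are all disabled:

```shell
# Sketch: a restricted shell refuses actions an ordinary shell permits.
# In restricted mode, cd fails with "restricted" and a nonzero status.
bash -rc 'cd /tmp' 2>/dev/null && echo "cd allowed" || echo "cd blocked"
# → cd blocked
```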

Hooligans aren't the only source of destructive processes. Some of the most dangerous are incomplete programs in testing. A jail is a good place to test programs that may fly out of control. Instead of bringing the entire system down, the jail walls catch the flak and developers can continue to debug without rebuilding the system.

Self-contained jails can be built with their own set of preconfigured startup sequences, files, libraries, and utilities. These preconfigured entities reproduce an exact computing environment, which is ideal for the painstaking test and debug cycles of development.


These preconfigured, controlled environments are the containers of today. The OS still carries the complex burden of enforcing containment, but containers do not require multiple copies of the OS as VMs do. Today, a new application is often installed at a customer site as the same preconfigured container that passed final testing, rather than being rebuilt from a development test bed.
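As a concrete illustration, such a preconfigured environment is typically declared in a container build file. This is a hypothetical sketch for an assumed Python application; the file names (requirements.txt, app.py) are made up:

```dockerfile
# Hypothetical sketch: the same preconfigured environment used in the
# final test cycle ships to the customer unchanged.
FROM python:3.12-slim                 # pinned base: libraries and utilities
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # preconfigured dependencies
COPY . .
CMD ["python", "app.py"]              # preconfigured startup sequence
```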

A decade or two ago, installing a major enterprise application required person-months of high-level expertise. Today, long, complex installs aren't acceptable. Both VMs and containers install applications quickly into a predictable, preconfigured environment. However, containers and VMs are not interchangeable, and they are often best used in combination. The implications of their strengths and weaknesses are a topic for a future discussion.

This article is published as part of the IDG Contributor Network.