Containers have been one of the major themes in the IT industry over the past couple of years, following the rise to prominence of Docker and its namesake platform. Beneath all the names and buzzwords, the technology has found popularity because it meshes well with a need for greater efficiency and agility in development and IT operations.
Like many trends in IT, containers are not actually a new concept and have been around for many years in one form or another. Even within the realm of x86 servers, Virtuozzo (formerly part of Parallels) was offering a container platform 15 years ago targeted at hosting companies and service providers.
But it was Docker that popularised containers by pulling together a platform that provided the tools to develop and operate application code inside containers. The speed with which containers could be provisioned and deployed using Docker meant the platform quickly found favour among developers.
What are containers?
Containers are a form of virtualisation, but take a different approach to virtual machines. Instead of working at the level of the bare metal to carve up a server into multiple server instances, container platforms operate at the level of the operating system to create sandboxed spaces within which different applications can run. A container can thus be thought of as an isolated slice of the operating system, with its own view of the filesystem, processes and network, reserved for a specific application or piece of code and kept separate from other containers on the same system.
With virtualisation, each virtual machine needs its own complete operating system in addition to the application it is running. With container environments, every container sits atop the same operating system kernel, each holding just the specific code libraries needed to support its own application code.
As a consequence, containers are speedier to deploy, because they do not require an entire virtual machine and its operating system to spin up. They also consume fewer resources, and thus enable a server to handle a greater number of separate workloads in containers than if the same workloads were inside virtual machines.
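To make this concrete, the short sketch below starts a throwaway container and prints the kernel version it sees; because the container shares the host's kernel, it starts in seconds rather than booting a guest operating system of its own. It assumes a local Docker Engine and the Python "docker" SDK (pip install docker), and the image name is illustrative only.

    # Minimal sketch: start an isolated container with the Python Docker SDK.
    # Assumes Docker Engine is running locally; the image name is illustrative.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a small Alpine Linux container. It shares the host kernel, so it
    # starts in seconds instead of booting a full guest operating system.
    output = client.containers.run("alpine", ["uname", "-r"], remove=True)
    print(output.decode().strip())  # prints the host's kernel version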
How do containers relate to devops and microservices?
These attributes of containers have led to them being adopted by organisations as part of a devops strategy for a more agile IT services approach. In addition, containers are often associated with microservices, a development approach in which applications and services are broken down into smaller modules that can be developed and maintained separately.
However, the two are not synonymous; containers can be used for monolithic applications while microservices can also be developed without the use of containers.
Microservices, or indeed any approach that uses containers at scale, call for orchestration and management tools so that containers can be operated efficiently, especially across multiple servers.
Several options are available to organisations, with the most widely used being Kubernetes, a project originally developed by Google. Alternatives include the Apache Mesos project, and Docker’s own Swarm, which was a separate tool but is now integrated into the Docker platform itself.
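As a rough illustration of what orchestration looks like in practice, the sketch below uses the official Kubernetes Python client to scale a hypothetical deployment named "web" to three replicas and then list the deployments in the default namespace. It assumes a reachable cluster configured in ~/.kube/config; all names are placeholders.

    # Minimal sketch: drive a Kubernetes cluster from the official Python client.
    # Assumes kubectl-style credentials in ~/.kube/config and an existing
    # deployment called "web" in the "default" namespace (hypothetical names).
    from kubernetes import client, config

    config.load_kube_config()   # read cluster credentials
    apps = client.AppsV1Api()

    # Ask for three replicas; the orchestrator schedules the containers
    # across whatever nodes are available in the cluster.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )

    for d in apps.list_namespaced_deployment(namespace="default").items:
        print(d.metadata.name, d.status.ready_replicas)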
Is Docker the only option?
There are many competing container platforms now available, but by far the most commonly adopted is Docker. This began as a Linux-only platform, but has now expanded to Windows and Azure thanks to a partnership with Microsoft, which is using Docker as the basis for its containers strategy.
However, it should be noted that Docker containers created for Linux cannot run on Windows, and vice versa, because containers share the kernel of their host operating system, as mentioned earlier. What the partnership does mean is that developers familiar with the Docker tools and APIs can reuse those skills when working with containers on Windows or Azure.
Docker’s success is due to the fact that it developed a complete platform for building and operating code in containers, including a registry to store container images ready for deployment.
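For example, assuming a Dockerfile in the current directory and the Python "docker" SDK, a build-and-push workflow can be sketched as follows; the registry and repository names are purely illustrative, and the client is assumed to be logged in to the registry already.

    # Minimal sketch: build an image and push it to a registry with the Python
    # Docker SDK. Assumes a Dockerfile in the current directory and that the
    # client is already logged in to the registry; names are illustrative.
    import docker

    client = docker.from_env()

    # Build an image from the local Dockerfile and tag it for the registry.
    image, build_logs = client.images.build(path=".", tag="registry.example.com/myteam/myapp:1.0")

    # Push the tagged image so it can be pulled and deployed elsewhere.
    for line in client.images.push("registry.example.com/myteam/myapp", tag="1.0",
                                   stream=True, decode=True):
        print(line)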
Docker support has been added to a surprising number of platforms, including those of the leading cloud providers. The Amazon EC2 Container Service (ECS) on AWS operates workloads in Docker containers, as do Google Container Engine and Microsoft’s Azure Container Service.
Docker compatibility is also a feature of application platforms such as Red Hat’s OpenShift and Pivotal’s Cloud Foundry, both of which are used to develop and run workloads in the public cloud and in on-premises environments.
“Yes, Docker is the current large gorilla in the containers market,” says Clive Longbottom, Service Director at analyst firm Quocirca. However, he cautions that we are at a relatively early stage in this containers renaissance, and that firms with an early technical lead do not always remain top of the pile.
“Remember when WordPerfect was the only word processor in town, or when Borland was the big code management player? It can all change—and change quite rapidly,” he warns.
What other platforms are available?
Other container platforms include rkt (rocket), developed by CoreOS to address some of the security flaws it saw in Docker’s platform. Rkt uses an open source container format called appc, in contrast to Docker’s proprietary format for container image files.
“There have been problems with Docker where the use of privileged admin levels within a container has allowed access to the underlying shared platform, opening up the possibilities for either accidental or malicious damage,” explains Longbottom.
“To my mind, rkt was showing a great deal of promise, but it just doesn't seem to be gaining any traction in the market,” he adds.
Meanwhile, other container platforms have been developed with a more traditional usage model in mind, including LXC, LXD, Virtuozzo, and Oracle’s Solaris Zones.
Solaris Zones is regarded as an alternative to virtualisation, with support for snapshots and cloning, for example, but is only available on Oracle’s Solaris platform.
Virtuozzo continues to offer its platform, which is based on a modified Linux kernel and refers to its containers as virtual private server (VPS) instances. An open-source version, OpenVZ, now forms the basis of Virtuozzo releases.
LXC is open-source technology that builds on isolation features in the Linux kernel, such as namespaces and control groups (cgroups). It is thus tied to Linux, with commercial support available from Canonical via its Ubuntu Linux LTS releases.
LXD is a project that is sponsored by Canonical and supported in recent releases of its Ubuntu Linux. LXD builds upon LXC, but adds a Linux daemon to enforce security and isolation, plus an API set for manipulating containers.
To differentiate LXD from platforms such as Docker, Canonical refers to the two as machine containers and process containers, respectively. According to Canonical, LXD functions more like a virtual machine, with each container holding an entire environment that could run multiple applications. However, it sees machine containers and process containers as complementary rather than competing, suggesting that users might deploy Docker inside an LXD container, for example.
Meanwhile, a quirk of the way containers operate has led to the development of specialised operating system versions optimised to run as container hosts. Because containers package the code libraries their applications need, much of the host operating system’s own userland can be stripped away, leaving little more than the kernel and the tools needed to run containers.
Examples of this include CoreOS’ Container Linux and Red Hat’s Atomic Host in the Linux arena, while the Nano Server installation option for Windows Server 2016 provides a similar function for Windows.
Will containers replace virtual machines?
While it seems that containers are rapidly becoming the preferred method for operating some workloads, especially “cloud-native” or distributed workloads, the two approaches have different merits that mean virtualisation is not going away.
Virtual machines offer greater security and isolation for critical workloads, for example, while Docker-style containers can be regarded as a convenient way to package up and distribute applications.
Docker-style containers were also intended to be stateless, existing just long enough to process a specific task, so they initially had no persistent storage mechanism, unlike virtual machines. However, persistent storage has proven useful for many kinds of applications, and much work has since gone into mechanisms that provide containers with storage resources.
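As a small example of such a mechanism, the sketch below (again using the Python "docker" SDK, with illustrative names) creates a named volume and mounts it into a short-lived container; the log file on the volume survives even though each container is removed as soon as it exits.

    # Minimal sketch: persistent storage for a container via a named volume,
    # using the Python Docker SDK. Volume and path names are illustrative.
    import docker

    client = docker.from_env()

    client.volumes.create(name="appdata")  # outlives any individual container

    # Each run appends a line to a file on the volume; rerunning the script
    # shows the earlier lines are still there after the container is removed.
    output = client.containers.run(
        "alpine",
        ["sh", "-c", "date >> /data/runs.log && cat /data/runs.log"],
        volumes={"appdata": {"bind": "/data", "mode": "rw"}},
        remove=True,
    )
    print(output.decode())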
This means that containers and virtual machines both have their pros and cons, and organisations should choose whichever approach best suits a specific application. In many cases, containers will prove ideal for cloud-native workloads while virtual machines will suit traditional enterprise applications, but this is a generalisation and the two can co-exist.
In fact, some container platforms actually use virtual machines to host containers. Amazon’s ECS operates this way, for example, as do Hyper-V Containers on Windows Server 2016, VMware’s Photon platform and vSphere Integrated Containers.
There are various reasons for this, such as ensuring isolation for enterprise workloads that need to be secure, which is Microsoft’s reason for Hyper-V Containers, or integrating containers with existing infrastructure and management tools, which was VMware’s motivation for vSphere Integrated Containers.
“The strange thing is that we are finding quite a lot of people running containers in VMs. This seems to be because of security issues, but seems to me to be taking a very wrong approach,” says Longbottom.
However, running a platform such as Docker inside a virtual machine enables organisations to host containers alongside other virtual machines on the same server, a solution that may suit those wanting to deploy on-premises rather than in a public cloud.
It also has the side effect that a Linux-based container can, after all, run on Windows or Azure, provided it sits inside a virtual machine running Linux.
As with many other areas of IT, organisations need to have a clear understanding of what their requirements are before rushing in to adopting one solution over another. Containers and virtual machines are just tools to help get the job done.
This story, "Containers: Everything you need to know," was originally published by IDG Connect.