How do you know containers are going mainstream? In a word, “standards.” As more and more organizations adopt Linux container technology, standards are being developed to help communities and vendors innovate while retaining compatibility among container implementations. It’s all good, but it’s also important to understand how different standards do (and don’t) work together—and what they mean for your container deployments moving forward.
But before I talk about Linux container standards, I need to talk about Linux containers. There’s a lot of hype about containers these days—and rightly so—but the term “container” is starting to be co-opted because it’s considered cool.
A true Linux container is a set of processes that is isolated from the rest of the system, running from a distinct image that provides all the files necessary to support the processes. An image that contains all of an application’s dependencies is portable and consistent as it moves from development to testing and, finally, to production.
I sometimes refer to it as “fancy files and fancy processes.” In the traditional computing world, a file is just a file—until it’s executed. When you think about it, a process is just a file that has been loaded into memory and run. Likewise, a process is just a process—until you add extra isolation and security controls around it. Then it’s a container. So, all things being equal—and despite all the hype—a container is just a fancy process (or group of processes) running from a fancy set of files.
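If you want to see how thin that “fancy” layer really is, here’s a minimal sketch in Go (the language most container engines happen to be written in). It launches a shell in its own hostname, PID, and mount namespaces, which is the isolation half of the story; everything a real engine layers on top (the image’s root filesystem, cgroups, seccomp, SELinux) is noted in the comments. It’s Linux-only, needs root, and is a toy rather than an engine.

```go
// A toy "fancy process": run a shell in fresh UTS (hostname), PID,
// and mount namespaces. Linux-only; run as root.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// The clone flags are the isolation. A real container engine would
	// also pivot into the image's root filesystem and apply cgroups,
	// seccomp filters, SELinux labels, and capability drops.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, `echo $$` prints 1 and hostname changes stay local to the namespace, yet from the host’s point of view it’s still just a process. That’s the whole trick.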
With all of that said, containers have really taken off. A survey conducted earlier this year by 451 Research’s Voice of the Enterprise service found that, of more than 300 enterprise respondents, 19 percent were in initial production deployment of containerized applications, and 8 percent were in broad production implementation. That’s a year-over-year doubling, according to the report, and we’re seeing no sign of that growth slowing.
The march toward mainstream
One sure sign that a once-bleeding-edge technology has gone mainstream is standardization of that technology. That’s definitely what we are seeing with containers, and it’s important for companies and dev teams to be able to put those standards in context to make the right decisions about the technology moving forward.
You’ve probably seen articles and whitepapers about Linux containers illustrated with images of actual shipping containers. The analogy is actually very good, especially when it comes to Linux container standards.
Think about it: If goods were shipped in random bags, boxes, and barrels, there would be no one tool that could move them. Instead, goods are shipped in containers that meet strict ISO standards. It’s like putting those bags, boxes, and barrels in a standard container that can then be easily moved, no matter what ship, truck, train, or plane it’s on or which port or warehouse it’s in.
It’s the same in the software world: You add a bunch of RPMs and build a standard software container image, and now you can easily move that software container around.
Once you standardize containers, you protect your investment. If you create 1,000 container images and they meet a certain format standard, they’re good for a long time. You don’t have to worry that a container you built in September will be broken in November. You can invest in tools and learning that will be leveraged over time to provide a significant return on investment.
Standardization also supports the growth of a thriving ecosystem of products—and revenue from those products funds further investment in the open source communities behind them—so this is good for everyone. In a healthy ecosystem, vendors make money by investing in technology that helps customers solve business problems. That creates more revenue, which results in more investment. We want that to happen, and standards help enable it.
Standards also encourage community investment. People want to contribute to projects that are meaningful, that have name recognition, and that will provide value for a long time. If I build a tool and it gets a big following, I want that tool to be leveraged for years to come. Standards help enable that, too.
Standard procedure
When you think about how quickly containers have taken hold, it’s surprising how many standards have sprung up around the technology. (Or maybe it’s not surprising, since everyone wants in on the game.) Some of these standards work together, while others compete. Some are industry standards, some are de facto standards, and still others are single-vendor standards.
Some of the most common and, in some cases, most compelling standards are:
- OCI (Open Container Initiative) Image Specification
- OCI Runtime Specification
- Kubernetes CRI (Container Runtime Interface)
- CNI (Container Network Interface)
- Docker CNM (Container Network Model)
The OCI specifications (version 1.0.1 was just released) are managed by the Open Container Initiative, a Linux Foundation project. They are open standards with broad vendor support, and they define the container image format and the container runtime, respectively.
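To make the image spec less abstract, here’s a sketch of its centerpiece, the image manifest. The Go structs below are trimmed to a few illustrative fields and the digests are placeholders, but the field names and media types follow the published spec. The key idea: everything in an image is referenced by a content-addressed descriptor, which is why an image you build today still resolves to exactly the same bits later.

```go
// A trimmed sketch of an OCI image manifest: the config and each
// layer are referenced by descriptors whose digests content-address
// the underlying bytes. Digests here are placeholders.
package main

import (
	"encoding/json"
	"fmt"
)

type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

type Manifest struct {
	SchemaVersion int          `json:"schemaVersion"`
	MediaType     string       `json:"mediaType"`
	Config        Descriptor   `json:"config"`
	Layers        []Descriptor `json:"layers"`
}

func main() {
	m := Manifest{
		SchemaVersion: 2,
		MediaType:     "application/vnd.oci.image.manifest.v1+json",
		Config: Descriptor{
			MediaType: "application/vnd.oci.image.config.v1+json",
			Digest:    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", // placeholder
			Size:      1024,
		},
		Layers: []Descriptor{{
			MediaType: "application/vnd.oci.image.layer.v1.tar+gzip",
			Digest:    "sha256:d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35", // placeholder
			Size:      2048,
		}},
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
```

That content addressing is what backs up the September-to-November promise from earlier: a digest either matches the bytes or it doesn’t.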
The Kubernetes Container Runtime Interface (CRI) allows developers to swap out the container engine underneath Kubernetes. Container engines can implement CRI natively or be connected through shims that serve as drivers. Kubernetes has developed a huge following, so CRI is quickly becoming a de facto standard.
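To give a feel for what “swap out a container engine” means in practice, here’s a heavily simplified paraphrase of CRI in Go. The real interface is a gRPC service with much richer request and response types; the method names below mirror the real RuntimeService, but the signatures and config types are trimmed for illustration.

```go
// A heavily simplified paraphrase of the Kubernetes CRI. Any engine
// that answers these calls (natively or via a shim) can run pods.
package cri

import "context"

type RuntimeService interface {
	// RunPodSandbox sets up the shared environment (network namespace,
	// cgroup parent) that a pod's containers will join.
	RunPodSandbox(ctx context.Context, config PodSandboxConfig) (sandboxID string, err error)
	// CreateContainer creates a container inside an existing sandbox.
	CreateContainer(ctx context.Context, sandboxID string, config ContainerConfig) (containerID string, err error)
	// StartContainer starts a previously created container.
	StartContainer(ctx context.Context, containerID string) error
	// StopContainer stops a running container within a grace period.
	StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
}

// Illustrative stand-ins; the real CRI types carry image references,
// mounts, resource limits, security contexts, and much more.
type PodSandboxConfig struct{ Name string }
type ContainerConfig struct{ Image string }
```

The kubelet only ever talks to something that answers these calls, which is exactly why the engine underneath it can be swapped.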
On the networking side, CNI and Docker’s CNM are both active. They are competing standards (CNI is the one Kubernetes uses), but because there are only two at this point, most vendors are investing in both. That means you are fairly safe investing in a network vendor either way.
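Part of why vendors can afford to back both camps is that CNI’s contract, in particular, is tiny. Here’s a stripped-down sketch of the plugin side of the protocol: the runtime execs the plugin binary with CNI_COMMAND set in the environment and the network config as JSON on stdin, and the plugin replies with JSON on stdout. The hard-coded address and version strings are purely illustrative; a real plugin would create and configure actual interfaces in the namespace named by CNI_NETNS.

```go
// A stripped-down sketch of a CNI plugin. Real plugins use the libcni
// helper packages and do real network plumbing; this one just shows
// the env-var-plus-stdin/stdout contract.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		var conf NetConf
		if err := json.NewDecoder(os.Stdin).Decode(&conf); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A real plugin would create a veth pair, move one end into the
		// container's network namespace (CNI_NETNS), and assign an IP.
		fmt.Printf(`{"cniVersion": %q, "ips": [{"address": "10.1.0.5/24"}]}`, conf.CNIVersion)
	case "DEL":
		// Tear down whatever ADD set up; nothing to do in this sketch.
	case "VERSION":
		fmt.Print(`{"supportedVersions": ["0.3.0", "0.3.1"]}`)
	}
}
```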
Container standards are used in different places to achieve different goals.
In the last couple of years, orchestration has taken off, enabling the scheduling of workloads on container engines across clusters of hosts. This simplifies managing thousands of containers or thousands of hosts, or both. These standards work together to govern the interaction of container engines, hosts, orchestrators, and the supporting networking and storage.
For example, the OCI image spec governs how a container engine pulls down an image, as well as the data and metadata inside that image. The OCI runtime spec essentially governs the relationship between the container engine and the container host. And CRI is the de facto standard that governs the interaction between the orchestrator and the container engine, whether that engine is Docker, rkt, or CRI-O.
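As a concrete illustration of that first piece, here’s roughly what “resolve a tag to a manifest, then fetch each layer by digest” looks like in Go against a registry’s standard /v2 HTTP API. The registry host and repository are hypothetical, and authentication, digest verification, and unpacking are all omitted.

```go
// A hedged sketch of an image pull: tag -> manifest -> layer digests.
// Hostname and repository are placeholders; auth and verification are
// left out.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type descriptor struct {
	Digest string `json:"digest"`
	Size   int64  `json:"size"`
}

type manifest struct {
	Layers []descriptor `json:"layers"`
}

func main() {
	base := "https://registry.example.com/v2/myapp" // hypothetical registry
	req, _ := http.NewRequest("GET", base+"/manifests/latest", nil)
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var m manifest
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		panic(err)
	}
	// Every layer is retrieved by its content-addressed digest, so any
	// engine that speaks the spec gets exactly the same bytes.
	for _, layer := range m.Layers {
		fmt.Printf("would fetch %s/blobs/%s (%d bytes)\n", base, layer.Digest, layer.Size)
	}
}
```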
Indeed, when you think about it, all these standards surround the container engine. Essentially, that lets you swap out not only the engine but also the hosts and the registries. You can choose components based on workload requirements—for example, Windows or Linux container hosts—while still leveraging centrally managed infrastructure such as registry servers, CI/CD tools, container image scanning tools, or orchestration software such as Kubernetes.
And, in the end, that’s really what we want and need from standards—for containers or otherwise: the ability to do what’s best without being hamstrung.