Containers are eating the world

To fully take advantage of the agility that containers bring, teams must retool their software delivery workflow

Containers are fast becoming the unit of packaging and deployment for enterprise applications. Many in IT still see containers as merely the next step in the logical progression that began with the move from physical servers to virtual machines, bringing with it another order-of-magnitude increase in compute density, since far more containers can run on a single host than VMs can run on a physical server.

While this approach recognizes that containers represent another explosion in the number of things IT needs to manage, it misses the most important change brought about by the container ecosystem—namely the fundamental shift in the software delivery workflow that containers enable.

In the traditional software delivery workflow, two separate teams are responsible for different layers of the stack: Operations teams own the operating system image, and development teams own the application artifacts. In this workflow, application artifacts and their dependencies are delivered from development to operations using OS packaging constructs (RPMs, MSIs, and so on). The ops team then deploys those artifacts on “blessed” OS images that meet the organization’s policies and include additional monitoring and logging software, and the composite image is run in production. Dev evolves the application by handing new packages to ops, and ops deploys those updates, along with any other updates (such as patches that address operating system vulnerabilities), using scripts or configuration management software.

Container-based software delivery is different

The container delivery workflow is fundamentally different. Dev and ops collaborate to create a single container image composed of different layers: the OS at the base, then the dependencies (each in its own layer), and finally the application artifacts. More important, the software delivery process treats container images as immutable: any change to the underlying software requires a rebuild of the entire container image. Container technology, and Docker images in particular, have made this far more practical than earlier approaches such as VM image construction by using union file systems to compose a base OS image with the application and its dependencies; a change to any layer requires rebuilding only that layer, which makes each container image rebuild far cheaper than recreating a full VM image. In addition, well-architected containers run only one foreground process, which dovetails with the practice of decomposing an application into well-factored pieces, often referred to as microservices. As a result, container images are far smaller and easier to rebuild than typical OS images, and they take much less time to deploy and boot.
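As a rough sketch of this layering, a container image definition might look something like the following; the base image, package, and artifact path are placeholders, not a recommendation:

```dockerfile
# Base OS layer: a minimal, organization-approved image
FROM debian:bookworm-slim

# Dependency layer: runtime libraries the application needs
# (each RUN/COPY instruction becomes its own cached layer)
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Application layer: the build artifact produced by CI (hypothetical path)
COPY build/myservice /usr/local/bin/myservice

# One foreground process per container
USER nobody
ENTRYPOINT ["/usr/local/bin/myservice"]
```

Because layers are cached, a change to the application artifact rebuilds only the final COPY layer onward; the OS and dependency layers are reused as-is.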

An important consequence of immutability and the microservices architecture is that the agents ops typically uses to handle configuration management, monitoring, and logging aren’t found on container images. Instead, containerized applications rely on rebuilding the entire image whenever the software has to change; logging and monitoring are likewise externalized to the container orchestration system. In other words, software changes aren’t handled at runtime by agents; they are made at build time. Automation moves from being a runtime activity to a build-time activity through an automated build/test/deploy cycle, commonly known as continuous integration/continuous delivery (CI/CD).

Delivering IT in the context of the container paradigm

Of course, the core concerns that we have in IT don’t go away: We need mechanisms to ensure our applications are free of vulnerabilities, run the latest software versions that have been certified by IT, can scale with load, and provide the data exhaust that enables logging and monitoring systems to help us identify problems and even predict them before they happen.

To fully take advantage of the agility that containers bring, while giving us the security, governance, compliance, and audit trail that we require to run our business, we must retool our software delivery workflow. The two most important pieces of technology that we now need to maintain and operate are the container orchestration system and our container delivery pipeline.

With regard to the former, over the last two years Kubernetes has become the multivendor open source standard. Kubernetes provides features that were once left as an exercise for each IT department: workload scheduling, log aggregation, scaling, health monitoring, and seamless application upgrades. Rather than fight these built-in capabilities by preserving the old workflows and tools, IT organizations need to accept them as part of the new “operating system” and build their workflows around what Kubernetes provides.
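As a minimal sketch of how those capabilities are expressed declaratively, a Kubernetes Deployment manifest might look something like this (the service name, image, and health-check path are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice                # hypothetical service name
spec:
  replicas: 3                    # scaling: Kubernetes keeps three copies running
  strategy:
    type: RollingUpdate          # seamless upgrades: pods are replaced gradually
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: registry.example.com/myservice:1.4.2   # immutable, versioned image
        ports:
        - containerPort: 8080
        livenessProbe:           # health monitoring: restart the container if this check fails
          httpGet:
            path: /healthz
            port: 8080
```

Rolling out a new version then amounts to changing the image tag and letting Kubernetes replace the running pods, rather than patching servers in place.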

The second critical component is the container delivery pipeline: This is the system that automates the build/test cycle for every code check-in and deploys successful check-ins into the container orchestration system. The most critical shift in the ops workflow is to move core aspects of the software delivery life cycle, such as vulnerability remediation, out of the runtime monitoring of production systems and into the build pipeline. For example, instead of patching a vulnerable package on a running container, the ops team flags the vulnerable package version using container inspection tools, triggers a rebuild of the container image, scans the image for vulnerable packages as part of the CI/CD pipeline, and deploys only images that pass those scans.
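A pipeline along these lines might be sketched roughly as follows, using GitHub Actions syntax purely as an example; the registry, image name, scanner (Trivy here), and deployment step are illustrative assumptions rather than a prescribed toolchain:

```yaml
name: container-delivery
on:
  push:
    branches: [main]

jobs:
  build-scan-deploy:
    runs-on: ubuntu-latest
    steps:
      # (Registry and cluster authentication steps omitted for brevity)
      - uses: actions/checkout@v4

      # Build an immutable image for this check-in
      - run: docker build -t registry.example.com/myservice:${{ github.sha }} .

      # Run the test suite (hypothetical test entry point inside the image)
      - run: docker run --rm --entrypoint /usr/local/bin/run-tests registry.example.com/myservice:${{ github.sha }}

      # Fail the pipeline if the image contains known high-severity vulnerabilities
      - run: |
          docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
            aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL \
            registry.example.com/myservice:${{ github.sha }}

      # Only images that pass tests and scans are pushed and rolled out
      - run: docker push registry.example.com/myservice:${{ github.sha }}
      - run: |
          kubectl set image deployment/myservice \
            myservice=registry.example.com/myservice:${{ github.sha }}
```

The important part is the gate: an image that fails the scan is never pushed or deployed, so remediation happens by rebuilding and redeploying, not by patching containers in place.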

Unifying dev and ops via the new container workflow

This may feel like a scary shift for IT, but in fact it’s exactly aligned with the shift to devops: By having dev and ops collaborate during the build phase of an application, issues are found much earlier in the software delivery life cycle, and much of the waste that the devops movement was created to address is eliminated by a tighter workflow between dev and ops.

IT now has two additional mission-critical systems to standardize and operate on behalf of the business: the container orchestration system and the container delivery pipeline. But what happens to common IT fixtures like configuration management systems, log aggregation systems, and monitoring systems? They don’t go away in the container world. Rather, they take on different roles. Configuration management systems are used to deploy and manage the life cycle of core distributed systems such as the container orchestration system, the container delivery pipeline, and other dependencies such as data management systems that aren’t running in containers. Log aggregation systems continue to provide a critical function for audits, forensics, and predictive analytics by draining the logs that come from the container orchestration system and the container delivery pipeline. And monitoring systems aggregate data that comes out of the container orchestration system with other external data sources.

Building a structural competitive advantage through devops and containers

Organizations that couple a devops transformation with the introduction of an enterprise-standard container orchestration system and container delivery pipeline will unlock the agility benefits inherent in this new workflow, be able to experiment and learn from their customers faster, and ultimately deliver the right capabilities to their customers far more rapidly than their competitors. Those visionary organizations will build a significant and structural competitive advantage and will be the prime beneficiaries of the new world of devops and containers.

Copyright © 2018 IDG Communications, Inc.
