Why containers should make you stop and think

Container proliferation is reshaping how we think about infrastructure, but it still requires big-picture (and little-picture) architectural know-how


There’s no doubt about it: containers are hot, and for good reason. Their ability to speed development and deployment is making them a popular choice among DevOps teams the world over. Without wading into the container vs. VM debate (answer: they both have their uses), perhaps the most important thing about containers is that they make architecture “snackable.” Because containers share the kernel of the operating system they run on, container-based architectures can be assembled from smaller, lighter-weight components while achieving greater modularity and robustness. If this sounds like the fulfillment of the elusive SOA vision, that’s because it is.
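To make “snackable” concrete, here is a minimal sketch, in Go, of the kind of small, single-purpose service a container typically wraps. The endpoints and port are illustrative, not prescribed by any particular platform.

```go
// A minimal, single-purpose service of the kind containers encourage:
// one small binary, one responsibility, easy to build and ship as its
// own image. Endpoints and port are illustrative.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A health endpoint for whatever is orchestrating the container.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// The single piece of business logic this component owns.
	http.HandleFunc("/greet", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a small container")
	})
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Packaged on a slim base image, a service like this builds in seconds and starts in milliseconds, which is what makes assembling an architecture out of many such pieces practical.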

Unfortunately, “better” doesn’t always mean “simpler.” Moving to a container-based approach creates new challenges and requires a shift in thinking. For starters, you will need to invest in a new workflow, with a new set of skills and tools. That is made harder by the fact that the tools and processes for building container-based architectures are still immature, and it can be difficult to know which technologies to bet on. How will you orchestrate and manage your containers? How will you secure and monitor them? How will you ensure that your container-based architecture is performing as designed?

In fact, how will you go about designing your new container-based architecture given all the variables in play? Your approach to architecture will need to evolve to accommodate the distributed nature of container-based systems. Individual container-based components may be well defined and cleanly partitioned, but the teams building, deploying, and operating the different pieces still have to understand how they fit together. Like a container-based system itself, your architectural know-how will need to become more distributed and better specified.

Container proliferation is also reshaping how we think about infrastructure; you could say the traditional view is being turned on its head. In the “old days,” application architecture generally reflected infrastructure architecture and vice versa: large, monolithic applications ran on virtualized “big iron” machines. Virtualization improved resource utilization, but the basic unit of work was still “the machine.” Containers turned that notion of large, fixed units of work around. A containerized unit of work can be as large or as small as the task requires, and unlike a virtual machine it can be created, shut down, or moved almost instantaneously. Containers also allow a clean partitioning of compute and data, and in fact require it in architectures where container workloads are ephemeral. With the appropriate orchestration in place, containers let us treat infrastructure as a flexible, mutable compute fabric rather than a set of slow-moving, heavyweight machines, virtual or otherwise.
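As a sketch of what an ephemeral-friendly workload looks like in practice, assume the orchestrator injects configuration through environment variables and stops instances with SIGTERM, as Docker- and Kubernetes-style platforms conventionally do; the variable names below are illustrative. The process keeps no local state, so any replica is interchangeable and the scheduler can create, move, or kill instances freely.

```go
// A sketch of an ephemeral container workload: configuration comes from
// the environment, data lives in an external store, and the process
// shuts down cleanly when the orchestrator asks. Names are illustrative.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Configuration is injected, not baked in.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	dataStore := os.Getenv("DATA_STORE_URL") // e.g. a managed database or cache
	if dataStore == "" {
		dataStore = "(unset)"
	}

	srv := &http.Server{Addr: ":" + port}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Handlers keep no local state, so any replica can serve any request.
		w.Write([]byte("served by an interchangeable replica; data lives at " + dataStore + "\n"))
	})

	// Serve in the background so the main goroutine can wait for a stop signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Exit gracefully when the orchestrator sends SIGTERM, finishing in-flight requests.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx)
}
```

The point is less the specific code than the shape: compute is cheap to start and safe to stop, while configuration and data live outside the container.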

The old adage “form follows function” rings true with containers; they are a means to build highly efficient and scalable services. It’s no accident that Google’s search engine is entirely container-based. But if we’re reaching for containers instead of virtual machines when designing next-generation services, will next-generation infrastructure architecture become invisible over time? Perhaps the answer is that infrastructure will continue to be abstracted away, with more focus on the work than on the underlying mechanisms needed to accomplish it. How many developers know, or need to know, about the machine instructions their code eventually produces? Perhaps the more exciting opportunity ahead is machine intelligence: imagine applying machine learning techniques to create or optimize compute infrastructure based on intent, to detect patterns no human could, and even to fix problems dynamically as they arise.

The “NoOps” vision of fully automated, self-running infrastructure is certainly appealing: no people to mess things up! In reality, software remains a tricky business where, if something can go wrong, it will. Today’s DevOps model will evolve into more of a blend of disciplines, and containers are already pushing things in that direction. With containers, developers take a greater role in deploying their code, and operations must have a clearer understanding of what’s in the containers and how they interoperate.

Now, where is the enterprise in all of this? With CIOs already trying to tame legacy systems and build new things at the same time, it’s no wonder they have serious heartburn. There is no giant switch to flip from old to new, and buying into a hot new trend without clear outcomes could lead to disappointment. Perhaps the best approach is to take a big step back, map business goals to available technologies, and draw a blueprint of what your software factory looks like now and what you think it should look like one, three, and five years from now. Then, using agile as an accelerator, start incubating, piloting, and iterating your way forward. Pausing now to construct your big-picture architecture, most importantly in the context of your digital business goals, will pay off at every step on the road ahead.

This article is published as part of the IDG Contributor Network.