How to orchestrate containers with Docker

Welcome to the post-hardware era, where we move containers or VMs around as needed without thinking about it. Here are some new Docker tools for the job

Building next-generation applications is one thing. Managing and running them is another.

Perhaps the best way of thinking about this lies in the old analogy of pets vs. cattle. People take extraordinary measures to keep their pets alive and healthy, in the same way admins carefully tend a high-end server with redundant everything. But on the farm, a dead cow is part of the cost of doing business -- and in today’s cloud world, where applications are designed to tolerate failure, a server that falls over is no big deal.

The role of the modern application orchestration tool is to monitor your herd of virtual servers and/or containers and make sure they’re roaming the right ranch. When one server dies, it quickly instantiates a new VM -- or even a new container. There’s no system admin intervention at all because the whole process is automated. You never know exactly which server or container (or combination thereof) is running your application.

Automated IT has been a dream for a long time, but today's tooling is finally starting to deliver on the promise. If you’re working with cloud-scale applications, especially with scale-out microservices, then such tooling is essential.

An OS for the data center

That’s where the idea of a data center operating system comes into play. Individual servers no longer matter, except as an element of compute, storage, or networking. Applications are tied to virtual machines or to containers and become the main management element.

Instead of managing individual servers, we’re managing an entire data center, partitioning it as necessary to support different applications -- and building development, test, and deployment environments without needing to know anything about the underlying hardware. That’s a big change from the way we used to manage servers and applications. It marks the start of a new era, and provisioning hardware for specific workloads is relegated to the past.

A key concept is orchestration -- that is, dynamically placing applications and services to take advantage of available compute resources. Orchestration is an important tool for distributed, automated computing. It uses application definitions and manifests to determine the placement of workloads on hosts, managing scaling as well as ensuring failed servers and services are handled correctly.

While Google’s Kubernetes and Apache’s Mesos projects are perhaps the best-known orchestration solutions, they’re far from the only ones available. Both are complex tools, requiring a significant investment in skills and in resources, and they're best applied to large-scale deployments.

Alternatively, a small slice of businesses has moved to private clouds that include orchestration, such as those offered by Microsoft, OpenStack, or VMware. The vast majority of organizations, however, are still experimenting with the processes and the tools needed to deliver next-generation applications.

Herding cattle with Docker

What’s needed is a set of tools that can scale from one or two servers to one or two racks, then to whole data centers. That’s the approach Docker is taking with its container automation tools: Machine, Swarm, and Compose.

Machine is the heart of Docker’s automation tooling because it automates the process of setting up and provisioning host servers. Using Docker’s APIs, it gives you a single command that establishes a host server, provisions the underlying Docker Engine, and sets up the client tools. It can also append a host to an existing Swarm cluster or create a new cluster from scratch. In addition, Machine supports a range of cloud providers, so the same command line can set up hosts in your chosen cloud environment.
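As a sketch of that workflow -- assuming the docker-machine binary is installed and the VirtualBox driver is available locally; the host name here is illustrative:

```shell
# Create and provision a new host running Docker Engine.
# Swap the driver for a cloud provider (e.g. amazonec2, azure) as needed.
docker-machine create --driver virtualbox dev-host

# Point the local Docker client at the new host
eval "$(docker-machine env dev-host)"

# Containers launched from this shell now run on dev-host
docker run hello-world
```

The same `create` command, with a different `--driver`, provisions hosts on public clouds instead of local VMs.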

Once you’ve automated the creation of container hosts and fired up Docker Engine, you can bring those hosts into a compute fabric using Swarm, Docker’s clustering tool. Swarm is designed to provide a scalable environment for containers, using the same API as a standard Docker Engine instance. If you’re already running Docker in your devops environment, you can scale quickly by installing Swarm and carrying on with your existing devops tooling and processes. A built-in scheduler handles assigning containers to individual Docker Engine nodes, with support for several different strategies to help optimize a deployment.

Creating a Swarm is easy, as is adding new engines to an existing cluster. You can use Machine to automatically create new engines or work with the Docker API to provide an index of available nodes. One option is to use the Docker Hub registry to simplify discovery, as Swarm identifies and manages registered hosts.
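A sketch of that flow, using the Docker Hub token discovery service mentioned above -- this assumes docker-machine with the VirtualBox driver, and the host names are illustrative:

```shell
# Generate a cluster ID via the Docker Hub discovery service
TOKEN=$(docker run --rm swarm create)

# Use Machine to create a Swarm master, then a node that joins the cluster
docker-machine create -d virtualbox --swarm --swarm-master \
    --swarm-discovery "token://$TOKEN" swarm-master
docker-machine create -d virtualbox --swarm \
    --swarm-discovery "token://$TOKEN" swarm-node-01

# Point the Docker client at the cluster as a whole and inspect it
eval "$(docker-machine env --swarm swarm-master)"
docker info
```

Because Swarm speaks the standard Docker API, the final `docker info` -- and any other Docker command -- addresses the whole cluster exactly as it would a single engine.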

Compose is a more complex tool. It uses YAML to build descriptions of applications, specifying how the various containers in an application link to each other. YAML makes a lot of sense here because it’s the same human-readable format you’d find in tools like Swagger for describing your APIs. Once you’ve created a description of an application and how it’s built, you need only a single command to launch it.
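As an illustrative sketch -- the service names and images are hypothetical, and the file uses the original Compose syntax of the era:

```shell
# Write a Compose description: a web container built from the local
# Dockerfile, linked to a stock redis container
cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
EOF

# One command builds, creates, and starts every container described above
docker-compose up -d
```

The link between `web` and `redis` in the YAML is what lets Compose wire the containers together at launch, so the application description, not a script, carries the topology.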

Keeping it simple

Perhaps the most interesting aspect of Docker’s orchestration tooling is its simplicity. All three tools employ very simple commands, so it's no sweat to script them from tools like Jenkins or to manage them with environments like Puppet or Chef. By building on the existing Docker APIs, they also make it easy to manage and control a distributed environment using the same tools you have on a single development PC, simplifying the move from development to production.

Docker’s tools fit right in with whole-data-center management tools like Kubernetes, as well as working alongside the tooling offered by public clouds. Using a combination of Machine, Swarm, and Compose, you’ll be able to work with your applications as they scale from development and test on a single server, to a full-scale cloud service running on Azure or AWS.

Developers won’t need to know what platform they’re delivering containers to. It will simply look like a Swarm, even if it’s running on a cloud-scale Mesos implementation. That kind of abstraction is what the cloud is all about.

Copyright © 2015 IDG Communications, Inc.