If you're on the lookout for an easier way to migrate apps and services from development to production, or from one server environment to another, then you may already be aware of Docker. The Linux container solution has made waves for a while now, even as it has been widely viewed as not quite ready for production. The Docker team has been working steadily toward a release it considers production-ready, and it appears to have reached that goal with Docker 1.0.
Major enhancements in Docker 1.0 push it toward this production-ready state. Docker can now connect directly to host network interfaces rather than relying on the internal bridging required in earlier versions. Linked Docker containers can find each other by hostname, with each container's hosts file updated to point to the correct address. Docker also plays nice with SELinux, adds expanded monitoring, offers time-stamped logs for each container, and supports registry mirrors with multiple endpoints, which improves redundancy and reliability.
These are all notable advancements, and they make Docker substantially more relevant across multiple use cases and production scenarios. Plus, it will cost you nothing to try. Docker is available free under the Apache 2.0 open source license.
Docker in a nutshell
Like a virtual machine, but much more lightweight, a Docker container allows you to move applications and services seamlessly between host servers. In addition, it incorporates versioning and image management tools that permit simple scaling and elasticity of applications and services across physical servers, virtual servers, or cloud instances. About all that's required from the underlying host is that it run a recent version (3.8 or above) of the Linux kernel that supports the LXC (Linux Container) features Docker relies on.
As an example, you could create a Docker container that does nothing but run a memcached service or an Apache Web server. This container would be built from a standard Linux base, such as Ubuntu or CentOS, and the desired service would be installed and configured much as it would on any Linux system. However, once built into a container, you could check that container in to Git version control, check it out on any other system, and have it immediately start and become a functional, production service.
Thus, that memcached instance could be replicated and run on a virtual server, a physical server, an Amazon cloud instance, or anywhere else you can run Docker. You don't have to worry about service dependencies between hosts, nor must you concern yourself with application installations, emulating hardware, or any of the trappings of traditional virtualization. You just need to start your properly built container where you want it to run.
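For instance, assuming a hypothetical "mymemcached" image built as described above, the identical command starts the service on any Docker host, whether physical, virtual, or cloud:

```shell
# Run the container detached, mapping memcached's standard port 11211 to the host.
# "mymemcached" is an assumed image name for illustration.
sudo docker run -d -p 11211:11211 mymemcached
```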
How Docker works
Docker works by creating containers based on Linux system images. Like other operating-system-level virtualization tools such as Virtuozzo, all instances fundamentally run on the host system's kernel, but are locked within their own runtime environments, separated from the host's environment.
When you start a Docker container, it remains active only while a foreground process is running within it. If the process you start daemonizes itself into the background, the container exits immediately, because Docker considers the container's work finished once its foreground process ends. If you start a process in the foreground, the container runs normally until that process exits. This is unlike other container tools that set up essentially "normal" virtual server instances in isolated environments on the same host; those instances persist even without active foreground processes.
Docker can be installed on most major Linux distributions, as well as on Mac OS X and Windows, though on the latter two only by running Docker inside a Linux virtual machine that acts as the host.
In most cases, installing the Docker runtime on a host is a very simple process, requiring only the use of normal package management commands on many Linux distributions. You'll find a very complete set of installation instructions for a wide variety of Linux distributions and cloud services, as well as Mac and Windows, on the Docker website.
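For instance, on Ubuntu 14.04 the runtime can be installed through APT (the package name varies by distribution, so check Docker's instructions for yours):

```shell
# On Ubuntu 14.04 the package is named docker.io to avoid a name clash
# with an unrelated package; other distributions may simply call it docker.
sudo apt-get update
sudo apt-get install -y docker.io

# Confirm that both the client and the daemon are running:
sudo docker version
```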
Once Docker is installed, we can create a container with a simple command:
$ sudo docker run -i -t ubuntu /bin/bash
This command tells Docker to download the latest Ubuntu image (if it's not already present on the host) and run the /bin/bash command within the container. The command executes as root inside the new container, and we're presented with a root command prompt running in our new container:
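The session looks something like this (the ID in the prompt belongs to the new container and will differ on your system):

```shell
$ sudo docker run -i -t ubuntu /bin/bash
root@0a1b2c3d4e5f:/#
```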
From here we can do just about everything we'd expect from a new Linux installation: run apt-get update, install and configure new software, write scripts, and use the container more or less like any other Linux server instance. The catch is that when we exit from the command line, the container stops running. If we had started an Apache process and begun serving Web pages from the container, our Web server would stop too. Thus, it's generally a good idea to build your containers around a single service, rather than a whole application stack. You can run multiple services in a single container, but it's more challenging than it perhaps should be.
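A quick illustration of the foreground rule, assuming a hypothetical "my-apache" image with Apache installed:

```shell
# A command that exits (or forks into the background) takes its container down with it:
sudo docker run ubuntu /bin/true        # container stops as soon as /bin/true returns

# Keeping the service in the foreground keeps the container running:
sudo docker run -p 8080:80 my-apache apache2ctl -D FOREGROUND
```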
Working with Docker
Docker is a command-line tool; all of the required functionality is provided by the central "docker" executable, which makes it very simple to use overall. Some examples would be checking the status of running containers:
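The "docker ps" command does this; the output looks something like the following (IDs and names are illustrative):

```shell
$ sudo docker ps
CONTAINER ID   IMAGE           COMMAND     STATUS         PORTS   NAMES
f7c2a1b3d4e5   ubuntu:latest   /bin/bash   Up 2 minutes           thirsty_morse
```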
Or checking the list of available images and their versions:
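That's handled by "docker images", with output along these lines (values illustrative):

```shell
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       latest   d95bd1a75aab   2 weeks ago   192.7 MB
```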
Another example would be to show the history of an image:
The above command shows a handy shortcut in the command-line interface: you only need to specify the first few characters of an image ID to reference it. Here, only "d95" was required to show the history of the image.
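That shortcut in use, with illustrative output:

```shell
$ sudo docker history d95
IMAGE          CREATED       CREATED BY                            SIZE
d95bd1a75aab   2 weeks ago   /bin/sh -c #(nop) CMD [/bin/bash]     0 B
```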
You may note that the size of that image is quite small. This is because Docker stores images as deltas from the parent image, recording only the changes made in each container. Thus, if you have a 300MB parent image and install 50MB of additional applications or services in a container, the resulting image might be only 50MB in size.
You can automate the creation of Docker containers with Dockerfiles, which are files that contain specifications for single containers. For instance, you could create a Dockerfile to set up an Ubuntu container with proper networking, run a bevy of commands within the new container, install software, and perform other tasks, then start the container.
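A minimal sketch of that workflow, assuming a hypothetical "my-apache" image name (FROM, RUN, EXPOSE, and CMD are standard Dockerfile instructions):

```shell
# Write a simple Dockerfile, then build an image from it and run a container.
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
EOF

sudo docker build -t my-apache .        # "my-apache" is an arbitrary tag
sudo docker run -d -p 8080:80 my-apache
```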
Networking in earlier versions of Docker was based on host bridging, but Docker 1.0 includes a new form of networking that allows a container to connect directly to the host Ethernet interfaces. By default, a container will have a loopback and an interface connected to the default internal bridge, but can also be configured for direct access if desired. Naturally, direct access is faster than bridging.
Nevertheless, the bridging method is very useful in many cases and is accomplished by the host automatically creating an internal network adapter and assigning a subnet to it that is unused on the host itself. Then, when new containers attach to this bridge, their addresses are assigned automatically. You can configure a container to attach to a host interface and port when it starts, so a container running Apache may start and connect to TCP port 8080 on the host (or a randomized port), which is then directed to port 80 on the container itself. Through the use of scripting and administrative control, you could start Docker containers anywhere, collect the port they're using, and communicate that to other parts of the application or service stack that need to use the service.
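In practice, the port mapping and discovery described above might look like this, again assuming a hypothetical "my-apache" image:

```shell
# Map host port 8080 to container port 80, or let Docker choose a random host port:
sudo docker run -d -p 8080:80 --name web  my-apache
sudo docker run -d -p 80      --name web2 my-apache

# Ask Docker which host port was assigned, for use by discovery scripts:
sudo docker port web2 80
```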
Docker in the real world
In the right hands, Docker has been ready for production for at least a few releases, and the release of v1.0 should result in more eyeballs on the project. The learning curve for Docker should be relatively short for seasoned Linux administrators, but you can easily try it out for yourself at Docker's online demo.
Docker is a very good example of a workable, foundational back-end infrastructure component that offers plenty of utility and functionality for Linux admins and architects, but it will be lost on those accustomed to point-and-click interfaces. That's not necessarily a bad thing. Docker still has many places to go from here (e.g., image versioning and private registries) and many areas that could use streamlining (e.g., networking), but this 1.0 release is quite enough to get you started.
This article, "Review: Docker 1.0 is ready for prime time," was originally published at InfoWorld.com.