Review: Docker 1.0 is ready for prime time
The first production-ready version of the open source Linux container engine irons out networking and other wrinkles
How Docker works
Docker works by creating containers based on Linux system images. Much like other OS-level virtualization tools such as Virtuozzo, all instances fundamentally run on the host system's kernel, but each is locked within its own runtime environment, separated from the host's environment.
When you start or create a Docker container, it remains active only while a process is running in the foreground within it. If you start a daemonized process, the container exits immediately, because the process ceases to be active in the foreground. If you start a process in the foreground, the container runs normally until that process exits. This is unlike other OS-level virtualization tools, which set up essentially "normal" virtual server instances in isolated environments on the same host; those instances persist even without active foreground processes.
Docker can be installed on most major Linux distributions, as well as on Mac OS X and Windows, though on the latter two only by way of a Linux virtual machine serving as the host.
In most cases, installing the Docker runtime on a host is very simple, requiring only normal package management commands. You'll find complete installation instructions for a wide variety of Linux distributions and cloud services, as well as for Mac and Windows, on the Docker website.
Once Docker is installed, we can create a container with a simple command:
$ sudo docker run -i -t ubuntu /bin/bash
This command tells Docker to download the latest Ubuntu image (if it is not already present on the host) and run the /bin/bash command within the container. The command executes within the new container as root, and we're presented with a root command prompt running in our new container.
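The resulting prompt looks something like this (the container ID shown in the hostname is illustrative):

root@0d1f2e3a4b5c:/#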
From here we can do just about everything you might expect from a new Linux installation. We can run apt-get update, install new software, configure that software, write scripts, and use the container more or less like any other Linux server instance. However, when we exit from the command line, the container stops running. If we had started an Apache process and begun serving Web pages from the container, our Web server would stop. Thus, it's generally a good idea to build your containers for a single service only, rather than for an application stack. You can run multiple services in a single container, but it's more challenging than it perhaps should be.
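As a sketch of the single-service pattern, a container built from an image with Apache installed could be run detached, with Apache held in the foreground so the container stays alive (the image name here is hypothetical):

$ sudo docker run -d -p 8080:80 my-apache-image apache2ctl -D FOREGROUND

The -d flag detaches the container, -p maps host port 8080 to container port 80, and the -D FOREGROUND option keeps Apache from daemonizing, so the container's foreground process never exits and the Web server keeps serving.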
Working with Docker
Docker is driven from the command line, with all of the required functionality collected in the central "docker" executable, which makes it very simple to use overall. One example is checking the status of running containers:
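A session might look like this (the container ID, name, and timing details here are illustrative):

$ sudo docker ps
CONTAINER ID   IMAGE           COMMAND     CREATED         STATUS         PORTS   NAMES
0d1f2e3a4b5c   ubuntu:latest   /bin/bash   2 minutes ago   Up 2 minutes           trusting_pike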
Or checking the list of available images and their versions:
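For instance (the image ID, date, and size shown are illustrative):

$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       latest   d95238078ab0   2 weeks ago   275.5 MB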
Another example would be to show the history of an image:
The above command illustrates a handy shortcut in the command-line interface: you need to specify only the first few characters of an image ID to identify it. Here, just "d95" was required to show the history of the image.
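In other words, assuming an image whose ID begins with "d95", the abbreviated form of the command would be:

$ sudo docker history d95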