Ship early and often: Docker has adopted the software developer's mantra, but it isn't merely shipping a new version of the Docker client two months after the last one. It's also delivering a major architectural change in Docker image delivery -- a clear sign the company's success is forcing it to keep pace with customers' real-world needs.
The original incarnation of the Docker image-delivery framework, Docker Registry, had begun experiencing performance issues under load. Scott Johnston, senior vice president of product at Docker, said in a phone conversation that this was part of a learning experience about how the company would need to support security, enhanced access control, and performance for Docker users.
Performance for Docker Registry, he said, "has become pretty evident as a critical feature, as we've watched the Docker Hub [itself based on Registry] grow. When you go from zero to 300 million images downloaded, it really stresses the performance of the system."
In response, Docker radically refreshed the architecture of Registry, switched the language used to write the software from Python to Google's Go (the language Docker itself is written in), and changed the way the protocol delivers images. Originally, the layers of a Docker image were delivered to clients sequentially; the new system downloads them in parallel.
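The gain from that last change is easy to see in miniature. The sketch below -- a simplified illustration, not Docker's actual code -- fetches a list of image layers concurrently with goroutines instead of one after another; `fetchLayer` is a hypothetical stand-in for the HTTP pull of a single layer:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchLayer is a hypothetical stand-in for downloading one image layer.
func fetchLayer(digest string) string {
	return "pulled " + digest
}

// pullParallel fetches every layer concurrently, the approach the new
// Registry protocol takes in place of the old sequential delivery.
func pullParallel(digests []string) []string {
	results := make([]string, len(digests))
	var wg sync.WaitGroup
	for i, d := range digests {
		wg.Add(1)
		go func(i int, d string) {
			defer wg.Done()
			// Each layer lands in its own slot, so no two
			// goroutines write to the same memory.
			results[i] = fetchLayer(d)
		}(i, d)
	}
	wg.Wait() // block until every layer has arrived
	return results
}

func main() {
	fmt.Println(pullParallel([]string{"sha256:aaa", "sha256:bbb", "sha256:ccc"}))
}
```

With real network fetches, total pull time approaches that of the largest single layer rather than the sum of all layers.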
The changes to the Docker client also reflect the ways demand and use cases have evolved rapidly for Docker, even if the changes still answer only part of the criticisms laid at Docker's doorstep.
Aside from a Windows edition of the Docker client, soon to be joined by an actual Windows edition of the Docker engine, most of the other new features are in Compose, the tool used to assemble applications from the contents of multiple containers. Compose now allows configurations to be shared between multiple applications and app environments, as a way of establishing heritable dependencies for container-based apps.
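In Compose terms, that sharing is done by having one service extend another defined in a common file. A minimal sketch, with hypothetical file and service names:

```yaml
# common.yml -- shared base configuration inherited by multiple apps
webapp:
  image: example/webapp
  ports:
    - "8000:8000"
```

```yaml
# docker-compose.yml -- one app's config, extending the shared base
web:
  extends:
    file: common.yml
    service: webapp
  environment:
    - DEBUG=true
```

Each app or environment layers its own settings, such as the `DEBUG` flag here, on top of the inherited base.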
Those configurations can also be used to separate the development version of a given application from the deployed version -- another sign of Docker's interest in providing tools across the entire lifecycle of an application.
Docker Engine 1.6 also boasts two features that stem from Docker drawing at least as many feature requests from ops as it does from dev: more detailed image and container handling, and a new set of logging drivers.
The sheer number of container-based apps being built by the dev community, Johnston said, means QA, staging, and ops now need tools to manage and inspect those same apps. The first version of the logging framework in Docker, he said, "was OK for developers, largely, but we know these admins that are managing hundreds if not thousands of nodes take logging very seriously, and have a whole different level of systems they put in place."
Syslog, one of the logging frameworks now supported directly by Docker, is a staple among admins for aggregating data collected from multiple servers -- the kind of collection mechanism needed for introspection across many Docker containers. Johnston hopes other major logging framework producers -- Logstash or Splunk, for instance -- will step up and create drivers for Docker, which seems inevitable given Docker's growth.
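Selecting a driver happens per container at run time. A minimal illustration (the image and command are placeholders; running it requires a Docker 1.6+ daemon):

```shell
# Send this container's stdout/stderr to the host's syslog
# instead of Docker's default JSON-file driver.
docker run --log-driver=syslog alpine echo "hello from a container"

# Other drivers shipping in 1.6:
#   --log-driver=json-file   (the default)
#   --log-driver=none        (discard logs entirely)
```

Because the flag is per container, admins can route chatty services into their existing syslog aggregation while leaving developer-facing containers on the default driver.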