A year ago, if you'd asked most system administrators what they would do with a container, chances are you'd have gotten a blank look. That was then. This is now.
Container technology, and Docker in particular, is hotter than hot. Even while it was still in beta, Fortune 500 companies were starting to "containerize" their server, data center, and cloud applications with Docker. Indeed, James Turnbull, Docker's VP of services and support, said that before Docker 1.0 was even released, three major banks were already moving it into production.
Why Docker and containers are so popular
Once you understand what Docker does, it's easy to understand its popularity. James Bottomley, Parallels' CTO of server virtualization, explained that the virtual machine (VM) hypervisors we've all been using for years to get the most from our hardware servers and to power the cloud, such as Hyper-V, KVM, and Xen, are "based on emulating virtual hardware. That means they're fat in terms of system requirements." Containers, on the other hand, are based on a shared operating system, which makes them far thinner and more efficient than hypervisors. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can "leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application," said Bottomley.
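If you want to see how thin that really is, here's a minimal sketch in Go (the language Docker itself is written in) of what a container runtime does at its core: it asks the Linux kernel for fresh namespaces and drops a shell into them, with no hypervisor or emulated hardware anywhere in the stack. This is an illustration of the mechanism, not Docker's actual code; it's Linux-only and needs root.

    // namespaces.go: run a shell inside new UTS, PID, and mount
    // namespaces. The "container" here is just an ordinary process
    // that the kernel shows a private slice of system state.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // Each flag requests a private copy of one piece of
            // system state: hostname, process IDs, and mounts.
            Cloneflags: syscall.CLONE_NEWUTS |
                syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Run it as root and ask the shell for its process ID: it will answer 1, because it is the first process in its own private world. Everything else a container adds, such as images, resource limits, and security profiles, is layered on top of this same trick.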
In addition, with a perfectly tuned container system, you can expect to run four to six times as many server application instances as you would with Xen or KVM VMs, according to Bottomley. Even with no extra tuning effort, you can run at least twice as many server application instances on the same hardware as you could with VMs.
As for Docker, said Bottomley, "Docker isn't container technology itself, it's an application packaging and orchestration system that requires container technology to function. If the underlying OS platform doesn't supply it, Docker won't work." It's that packaging and orchestration that gives Docker its special sauce.
That's the technical case. Let me follow the bread crumbs from there to the business case. With Docker, you can run the largest number of server application instances on the smallest number of hardware servers. That means you could save your data center tens of millions of dollars annually in power, hardware acquisition, and maintenance costs.
Need I say more?
Container history lessons
Now you might think Docker is the great new thing in IT, and you'd be half right. It is great, but there's nothing new about its underlying container technology.
You can track containers all the way back to 1979 and the chroot command in Version 7 Unix. Since then, the idea has reappeared in a variety of forms. Some of the best known are FreeBSD Jails, AIX Workload Partitions, and Solaris Containers.
Easily the most widely used container technology, though few people know it by name, is Google's lmctfy (Let Me Contain That For You). It runs, I kid you not, two billion containers a week. Within those containers run all the Google applications you'll ever use, from Docs to Gmail to Search.
The real action lately has been happening in Linux. Parallels has had some commercial success with its Virtuozzo Containers and its open-source foundation, OpenVZ. LXC, meanwhile, has become the bedrock on which most container activity, including Docker, is built.
Docker's rise to stardom
Docker has been getting the headlines, customers, and partnerships. Amazon, Cisco, Google, and VMware are all supporting Docker. Even Microsoft is getting into the Docker act! How did Docker garner such overwhelming industry support? True, there were always technical and economic reasons why containers were attractive, but there were also reasons why companies were reluctant to really get into containers.
First and foremost of those reasons was security. As Daniel Berrange, developer of the libvirt virtualization application programming interface (API), wrote in 2011: "LXC is not yet secure. If I want real security I will use KVM." While he was talking about just one container technology, everyone knew this was a problem with all of them.
In 2013, it became possible to create LXC containers as an unprivileged user rather than as root. That made it possible to create containers that were inherently more secure. As Docker will be the first to admit, there's still a lot of security work to be done, but Docker, and any other LXC-based container technology, is much safer than it used to be.
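User namespaces are the kernel feature that made that possible. Here's a hedged Go sketch of the mechanism, again an illustration rather than LXC's or Docker's own code: the child process is told it's root inside its namespace, while the kernel quietly maps that "root" back to the ordinary user who launched it. Linux-only; no root required.

    // userns.go: start a process in a new user namespace as an
    // unprivileged user. Inside, it reports uid 0; outside, it is
    // still you, with none of root's power over the host.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("id")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUSER,
            // Map uid/gid 0 inside the namespace to our real,
            // unprivileged uid/gid outside it.
            UidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getuid(), Size: 1},
            },
            GidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getgid(), Size: 1},
            },
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

A root escape inside such a container lands the attacker in an account with no privileges on the host, which is a big part of why containers are safer now than when Berrange wrote his warning.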
Another important milestone came in March 2014, when Docker joined forces with Canonical, Google, Red Hat, and Parallels to create libcontainer, a critical standardized open-source component. It enables containers to work with Linux namespaces, control groups, capabilities, AppArmor security profiles, network interfaces, and firewall rules in a consistent, predictable manner. By freeing programs from depending on Linux userspace components such as LXC, libvirt, or systemd-nspawn, it "drastically reduces the number of moving parts, and insulates Docker from the side-effects introduced across versions and distributions of LXC," said Docker CEO Solomon Hykes.
Libcontainer brings simplicity and standardization to many other Linux container-related technologies.
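Since libcontainer's own Go API has shifted between releases, rather than guess at its calls, here's a sketch of one of the raw kernel interfaces it manages on Docker's behalf: control groups. The sketch caps the memory of the current process tree by writing to the cgroup filesystem. It assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup/memory (the common layout at the time), needs root, and the "demo" group name is invented for the example.

    // cgroup.go: create a memory cgroup, cap it at 64 MB, and move
    // ourselves into it. Children we spawn inherit the limit, which
    // is the bookkeeping libcontainer automates for every container.
    package main

    import (
        "os"
        "path/filepath"
        "strconv"
    )

    func main() {
        cg := "/sys/fs/cgroup/memory/demo"
        if err := os.MkdirAll(cg, 0755); err != nil {
            panic(err)
        }
        // Limit every process in this group to 64 MB of RAM.
        limit := []byte(strconv.Itoa(64 * 1024 * 1024))
        if err := os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), limit, 0644); err != nil {
            panic(err)
        }
        // Join the group; anything exec'd from here on is capped.
        pid := []byte(strconv.Itoa(os.Getpid()))
        if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0644); err != nil {
            panic(err)
        }
    }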
What was even more important is that by turning a key component of Docker into a standard, the program gained far more credibility. In particular, it became much more attractive to developers, who can now confidently ship containerized programs, complete with all their required libraries and other files, to customers and be certain they'll run on any Docker installation.
In addition, as Bottomley explained at the time, libcontainer "will expose granular container features to applications ... and allow us to make our tools go much more seamlessly across our disparate products." For example, this "would allow things like Docker and LXC to deploy on to OpenVZ or even our cloud server product."
Besides that, as RedMonk co-founder and analyst Stephen O'Grady explained, in many ways Docker also lucked out by having the right technology at the right time. O'Grady wrote, "Rather than one explanation, it is likely a combination of factors. Most obviously, there is the popularity of the underlying platform."
He continued, "Perhaps more importantly, however, there are two larger industry shifts at work which ease the adoption of container technologies. First, there is the near ubiquity of virtualization within the enterprise." And finally, "more specific to containers ... is the steady erosion in the importance of the operating system. Containers generally and Docker specifically [treat] the operating system and everything beneath as a shared substrate, a universal foundation that's not much more interesting than the raised floor of a data center. For containers, the base unit of construction is the application. That's the only real unique element."
What's going wrong?
So why aren't we all singing kumbaya and moving to a Docker-based data center future? Well, for starters, as you may have noticed, there are a lot of other container players out there, and they'd like some of that container pie on their plates as well. That doesn't matter much to a giant like Google (whether you run lmctfy or Docker, so long as you're running containers on Google Compute Engine, Google is happy), but it matters a great deal to many of the smaller companies in the container space.
These companies, many of which are pure open-source plays, are having trouble finding a viable business model. On top of that, I know many of them are under venture capital pressure to start turning their technology into profits sooner rather than later. One way to do that is to follow the tried-and-true business model of dominating an ecosystem, pulling in as many pieces as possible to keep customers inside your software stack.
Some people in the industry have told me that's what they see Docker doing by adding more services to their initial container offering.
One company, CoreOS, which built its thin Linux server operating system around Docker, has stated publicly that Docker the company has gone badly off course by moving from offering a simple, straightforward container to offering a confusing platform's worth of programs to create, store, upload, run, and orchestrate Docker images. And, said Alex Polvi, CoreOS's CEO, "Most of the people using CoreOS have existing environments of some sort that they are trying to integrate containers with. They are not ready to lob on a whole new platform."
Polvi added that CoreOS tried to work with Docker on these issues, but the Docker team wasn't interested in fixing them. So CoreOS is working on its own take on containers: Rocket.
Hykes did not take this well. As Matt Asay, columnist and Adobe's VP of mobile, wrote, by publicly ripping into "critics, competitors and interested onlookers, [and] challenging the integrity of CoreOS," Hykes did Docker no favors. He has cooled down since then, but there's no love lost beneath the surface among some of the container companies. While some, such as cloud company Joyent, fully support Docker's position, others quietly agree with CoreOS.
CoreOS was the first, but I'm told by industry sources that others also think Docker is biting off more than it can chew. Expect to see more rivals either come out with their own takes on containers or line up behind CoreOS's Rocket.
At the same time, other companies, such as Canonical, are building on Docker's base technology. Its LXD (Linux Container Daemon) adds more security to containers by relying on existing Linux security technologies, including kernel support for user namespaces and cgroups, along with mechanisms such as seccomp and AppArmor. While Canonical founder Mark Shuttleworth says that LXD is an enhancement to Docker rather than a replacement for it, I know of skunkworks projects that are being set up to rival Docker.
Parallels, which was working on commercializing containers long before Docker was a gleam in Hykes' eye, plans to work with both Docker and Rocket no matter how the dispute turns out. Indeed, Parallels recently announced Docker will be running inside containers on Parallels Cloud Server.
Bottomley told me that Parallels plans on delivering the "most secure, densest, best-integrated container solution" of any container company. He concluded, "In the long run, we see the future of containers depending on developing lots of use cases for them, like solutions to the cloud tenancy problem, the cloud security problem and even novel OS virtualizations to support the Virtual Network Functions, so we intend to work with all sections of the computing industry to expand the utility of containers."
He's right. Containers are going to stay hot, and in even more areas than they're in today. The real question in 2015 will be who ends up as the front-runner. True, Docker has enormous support at this point, but will it continue to deliver? Will a rival, and there will be several in 2015, knock it off its stride? Or will security, still containers' most serious technical problem, put it off its pace? Personally, I think Docker, and the companies that have supported and partnered with it, will do well in the coming year. It will, however, also be a year in which Docker is challenged. If you're in the data center, server, or cloud world, you're going to have to live with the ancient curse of living in interesting times.
This story, "2015 will be the year of container wars," was originally published by ITworld.