Life after virtualization: More data center tech for less

VMware’s vSphere 6 release marks the end of one chapter of the virtualization story, but another has already begun


With the official release of vSphere 6 today, VMware is ushering in the next stage of the virtualization era. I posted a first look at vSphere 6 late last week, and while the new version is not without its challenges, vSphere is definitely still the cream of the crop. The features and functions now available to server admins are simply stunning to behold -- even more so if viewed through the lens of IT from a few years ago.

It would be hard to overstate how much virtualization has changed the way IT works. We blithely make bold changes to production systems now because we know we can easily revert to a snapshot or, at worst, recover the VM from a recent backup in minutes. We no longer worry about rebuilding a server or wrestling with ancient server-building technologies like ghosting an image. Compared to pre-virtualization days -- when an upgrade meant touching a physical server that could not be easily rolled back if something went wrong, and recovery times were generally measured in hours -- builds and upgrades are a piece of cake.

We’re now enjoying what virtualization can offer across other areas of IT. We’re virtualizing desktops, applications, databases, storage, and networks. We’re able to move resources around like never before and even automate all of it so that we’re running only the physical hardware we need, when we need it. It’s damn cool stuff, though it may be making newer admins a little soft around the middle.

In the summer of 1999, I booted a Windows 98 system in a window of a Windows NT Workstation system. A group of admins clustered around and tried to figure out exactly what they were seeing. What good was this? Why would we want to do this? What about performance? How much RAM was in this box anyway? Of course, there were a few AS/400 guys around as well, and they chuckled softly to themselves and went back to their LPARs.

Back in 1999, nobody could have predicted the impact that computer virtualization would have. By around 2003 it had become readily apparent that this was absolutely going to be the future of IT, but it took the better part of the decade for the pieces to fall into place. Battles were fought between software vendors and customers over the use of virtualization, with vendors steadfastly refusing to support their software on virtual servers despite mountains of evidence showing hypervisors to be perfectly functional platforms. Some vendors actively worked against virtualization through their licensing schemes and generally made life much harder than necessary for the many years it took them to pull their heads out of the sand.

I can recall writing column after column discussing virtualization and prodding my readers to dip their toes in the virtualization waters, even a little. I wrote features and, with fellow InfoWorld reviewers, conducted massive tests of the big platforms along the way. We watched as everyone struggled to catch up to VMware, shipping products that were so far behind VMware's offerings that they could only compare with VMware releases from five years prior. These were the times when VMware’s nearest competitors couldn’t perform a live VM migration, and VMware ran rings around everyone. But that wouldn’t last.

We also witnessed the competitors moving absurdly fast to close the gap, developing and releasing advanced features and catching up to the behemoth, or at least delivering a stable platform and feature set that served the needs of the majority of virtualization users. It didn’t take long before they all could support live migrations, snapshots, templates, load balancing, and high availability.

Of course, server virtualization spun off in many directions, with nearly every OS offering a form of OS-level virtualization, such as FreeBSD jails, Solaris containers (not to be confused with modern containers), OpenVZ/Virtuozzo containers, LXC, and AIX Workload Partitions. These technologies revolutionized the server hosting industry, allowing hosting vendors to offer small slices of big iron for very reasonable money while completely avoiding the massive headache of shared Web hosting and other forms of evil.

Out of those technologies, we wound up with OpenStack and the cloud, including IaaS, PaaS, and all kinds of other integrations that give smaller projects incredible speed and agility, if not a long lifecycle within the constraints of those services. This lineage has given us Docker containers, for better or for worse, and laid the groundwork for technologies we haven’t developed yet. The virtualization frontier still has uncharted territory, such as the ever-tighter integration of the hypervisor and storage -- even the ability to run VMs directly on a storage array, a bizarre full-circle ride back to the early days of VMware GSX Server, when shared storage wasn’t an option.

VMware vSphere 6 isn’t a watershed release. It raises the limits for host resources and cluster sizes, bundles previously optional features into the main release, and makes significant strides on a number of fronts. It’s a worthy successor to the vSphere line and continues VMware’s leadership of the virtualization market. However, it doesn’t deliver the kind of fundamental, massively impactful feature additions we saw in vSphere 4 or 5. That fact highlights two trends:

  1. Hardware virtualization is approaching feature maturity.
  2. The competition draws ever closer.

This is good news for all of us. We can benefit from the substantial tools offered by virtualization both in our data centers and in the cloud, and we can pay less for the privilege. More tech for less bread -- that hasn’t changed since the heady days of 1999.