What’s all the fuss about virtual machines? From AMD to Intel, Microsoft to Novell to Red Hat, every major OS and hardware platform vendor today has a stake in the virtualization game. But the truth is that running multiple virtual systems on a single physical workstation or server is simply passé.
Sure, booting Windows 95 under VMware on Windows NT wowed the crowd back in 1998, but even then similar technology had enjoyed a venerable history -- virtual partitioning on mainframes dates back to the 1970s. Over the years, commercial Unix vendors steadily added virtualization features to their enterprise products. Why is it, then, that the industry now seems so hot to sell virtualization into the mainstream market?
If you examine how the market has changed in recent years, the answer is surprisingly clear. In the early days, the cost of entry to virtualized infrastructure was extreme and the applications were relatively limited. But the advent of affordable, robust virtualization on the x86 platform -- coinciding with inexpensive, high-performance, high-reliability server hardware -- has made virtual machine technology accessible to a broad audience for the first time.
More important, as these customers begin to deploy virtual machines in production environments, demand for new management tools to take better advantage of virtualized environments is growing, and competition in this space is heating up. This year like never before, with the underlying technology mature and stable, vendors are rushing to market with new tools that use virtualization to address a broad range of challenges facing IT managers today.
The almighty buck
To a large extent, as is so often the case, the bottom line is driving customer interest in virtualization. The desire to keep costs low makes virtualization technology attractive even to midsize enterprises.
“Many small businesses are starting to get back into their server replacement cycle following Windows 2000/2003 upgrades of several years ago,” says Matt Prigge, senior network architect at SymQuest. “As a result, businesses that might usually buy servers one or two at a time are faced with the prospect of buying six or seven at a time. This provides a great opportunity to implement virtualization in architectures that might otherwise be too small to consider it. The prospect of gaining all of the many benefits of virtualization on two highly redundant servers for as much as or less than it would cost to reimplement a conventional installation is appealing.”
For larger enterprises, however, virtualization can be even more appealing. Peering into a large datacenter is usually an impressive sight -- dozens or hundreds of servers in racks, blinking lights, the whoosh of the air conditioning, the hum of the cooling fans -- but the hidden truth is that the CPUs of most of those servers are sitting idle. Sun Microsystems estimates that most production servers are only 15 percent utilized. The remainder of that potential is simply wasted, along with the power and HVAC resources necessary to maintain the physical hardware.
The rapid pace of CPU development and the comparatively slow progress in OS and application development have led us to a point where buying a new server to run old applications simply doesn’t make sense. Given today’s supercharged chips, even the most frugal IT director is forced to buy more horsepower than is really necessary. Applications that have run without problem for years on older servers don’t necessarily need buckets of RAM and the latest and greatest CPUs, but if you want reliable, supported new hardware you haven’t much choice.
Rather than buying the baseline of new hardware, then, many organizations choose to scale up. A single midrange server combined with a virtualization platform can often take the place of six or seven low-end servers. The savings can be counted in more than just initial purchase price. Viewed in terms of total cost of ownership, there’s much more to be gained when you add up power, maintenance, and cooling costs during the life of the server.
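To put rough numbers on that back-of-the-envelope math, consider a minimal sketch. Every figure in it -- prices, wattages, electricity rate, service life -- is an assumed, illustrative value, not vendor data:

```python
# Hypothetical comparison: six low-end servers vs. one midrange virtualization host.
LOW_END_PRICE = 2_500        # per low-end server, USD (assumed)
MIDRANGE_PRICE = 12_000      # one consolidation host, USD (assumed)
WATTS_LOW, WATTS_MID = 350, 700   # typical draw per box (assumed)
KWH_COST = 0.10              # USD per kWh (assumed)
YEARS = 4                    # service life (assumed)
HOURS = 24 * 365 * YEARS

def power_cost(watts, count=1):
    # Electricity over the service life; the factor of 2 roughly
    # accounts for the cooling needed to remove that same heat.
    return watts / 1000 * HOURS * KWH_COST * count * 2

physical = 6 * LOW_END_PRICE + power_cost(WATTS_LOW, count=6)
virtual = MIDRANGE_PRICE + power_cost(WATTS_MID)

print(f"six physical servers: ${physical:,.0f}")
print(f"one virtualized host: ${virtual:,.0f}")
```

Even with these toy numbers the consolidated host comes out well ahead over its lifetime, and the sketch leaves out maintenance contracts, rack space, and administration time, which all scale with the physical server count.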
A virtual datacenter can significantly reduce administrative and management costs, as well. The capability to take snapshots of running servers is an impressive insurance policy against failed system patches, virus infections, and upgrades. What’s more, resource management is much simpler on virtualization platforms that allow dynamic allocation of CPU time, RAM, and network bandwidth.
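The snapshot insurance policy amounts to a point-in-time save of a server's state that can be restored on demand. This toy model sketches the idea only; the class, VM name, and patch labels are hypothetical and do not represent any vendor's API:

```python
import copy

class VirtualServer:
    """Toy model of a VM whose state can be snapshotted and rolled back."""

    def __init__(self, name, installed_patches=None):
        self.name = name
        self.installed_patches = list(installed_patches or [])
        self._snapshots = {}

    def snapshot(self, label):
        # Preserve a point-in-time copy of the VM's state.
        self._snapshots[label] = copy.deepcopy(self.installed_patches)

    def rollback(self, label):
        # Restore the state captured at snapshot time, discarding later changes.
        self.installed_patches = copy.deepcopy(self._snapshots[label])

vm = VirtualServer("mail01", ["KB100"])
vm.snapshot("pre-patch")
vm.installed_patches.append("KB200-bad")   # a patch goes wrong...
vm.rollback("pre-patch")                   # ...so revert in seconds
print(vm.installed_patches)                # → ['KB100']
```

Real platforms capture disk and memory state rather than a Python list, of course, but the workflow -- snapshot before a risky change, roll back if it fails -- is exactly this.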
The virtual tour
A broad range of vendors have stepped up to the plate to address these needs in a variety of ways. Even the word virtualization itself doesn’t imply a single approach. Broadly speaking, the field has split into two distinct camps in terms of core technology.
On one hand are complete hardware emulation systems, a la VMware and Microsoft’s Virtual Server. These model the native hardware platform of the physical server for each virtual server, including a fully configurable BIOS. This method leaves each virtual server running as a single process on the host platform. On disk, each virtual server is totally independent of the others, with its own complete instance of the OS and all necessary applications.
The other approach can be classified as host-based, or OS-level, virtualization, as exemplified by SWsoft’s Virtuozzo and Sun’s Solaris Containers. In this design, a single instance of the host OS supports multiple virtual OS instances, with the same host OS kernel handling the I/O and scheduling needs of the virtual servers at the process level. In either camp, a virtualization layer -- in hardware-emulation products, the hypervisor -- sits between the VMs and the underlying resources, herding each VM’s resource requests through to the base platform and handling all I/O interaction. The form this layer takes differs from platform to platform, but the effects are generally the same.
Beyond the software, the latest generation of chips from both AMD and Intel is designed with hardware virtualization in mind. Intel’s VT (Virtualization Technology) and AMD’s SVM (Secure Virtual Machine) CPU extensions move some of the heavy lifting of virtual hardware emulation from software to hardware, shifting certain memory management functions that today are handled in software into CPU microcode. These endeavors are resulting in x86-platform CPUs better suited to the unique workloads created by virtual servers.
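On a Linux host you can see whether the CPU advertises these extensions, because the kernel exposes the vmx (Intel VT) and svm (AMD SVM) feature flags in /proc/cpuinfo. A small sketch, assuming a Linux /proc filesystem:

```python
def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return which hardware virtualization flags the CPU advertises:
    'vmx' for Intel VT, 'svm' for AMD SVM, or an empty list if neither
    is found (or the file is unavailable, e.g. on a non-Linux host)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line is a space-separated list of features.
                    return sorted({"vmx", "svm"} & set(line.split()))
    except OSError:
        pass  # /proc not available on this platform
    return []

print(hw_virt_flags() or "no hardware virtualization extensions detected")
```

Note that a missing flag can also mean the feature was disabled in the BIOS, so an empty result isn’t always the CPU’s fault.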
Still other vendors are busy adding pieces to the top of the pile, including virtual server management, consolidation, and migration tools. For example, HP and IBM Tivoli are offering tools that integrate into their overall management products, while even Dell is getting into the game with VMware tools for OpenManage.
Smaller ISVs are seeing opportunities as well; PlateSpin and Leostream both market server consolidation and migration tools that integrate with VMware and Microsoft virtualization solutions (see Test Center Review).
Making the move
Viewed as a whole, these new technologies are progressing at a breakneck pace. The server virtualization landscape has changed almost completely from this time a year ago. By nearly every measure -- including performance, stability, SAN integration, and 64-bit support -- the new crop of virtualization platforms has charged ahead.
The other side of that coin, however, is that virtualized infrastructure isn’t without its challenges. One concern that worries many admins is the issue of putting too many eggs in one basket. A major hardware failure on a single server only affects the services on that server; if that server is running 10 virtual servers, however, the stakes are much higher.
What’s more, many virtualization customers come to realize that the hardest part of making the move to a virtual datacenter is the migration itself. It’s easy to install a big server and build a half-dozen virtual servers on it, but migrating workloads from the physical to the virtual realm is, at first blush, no different from a physical-to-physical server migration. In short, it can be a costly, time-consuming process, fraught with problems.
These problems aren’t insurmountable, however. In fact, you can expect more solutions aimed at addressing them to appear this very year. Any way you cut it, a peek into the datacenter in your future will show far fewer blinking lights and fewer physical servers in the racks. But this won’t mean fewer servers to manage; in fact, that number is likely to grow, because application silos will be the rule, not the exception. When it’s so simple to provide a service on a discrete server without worrying about resource utilization, dependencies, or hardware requisitioning and installation, virtualization is a virtual no-brainer.
The only real question that remains is what flavor works best for your use case. In fact, the answer to that may actually be multiple solutions. (For now, that means multiple management tools, but even that may soon change.) Regardless of the flavor, server virtualization is entering adolescence with a solid foundation, a seemingly endless array of opportunities, and a very bright future.