Thanks to the backward, "all software owns the entire system" design of the x86 CPU architecture, PC client and server virtualization is one of the most challenging tasks facing system software developers. Even at their best, the benefits of x86 virtualization solutions from VMware and Microsoft are limited to reliability, convenience, and manageability. But virtualization's promise as a pathway to consolidation, and as the way to turn aggregated compute cycles into a provisionable distributed resource, remains just that: a promise. Don't blame VMware and Microsoft. There's only so much virtualization one can do in software.
The sense of wonder one feels on first seeing a system split itself in two can be tough to sustain. We can be so impressed at running two copies of Windows simultaneously, or Windows and Linux, or Linux and that phony pirated copy of the Intel edition of OS X Tiger, that the end goal of virtualization is ignored or given up for lost. The x86 virtualization era will truly dawn with the advent of two upheavals: CPU-assisted virtualization and paravirtualization.
CPU-assisted virtualization will carry x86 systems closer to the essential goal of linear virtualization, that is, the ability to split a CPU core into two virtual cores that each operate at close to 50 percent of the performance of their real parent. Truly linear virtualization would require a zero-overhead VMM (virtual machine monitor, or hypervisor) -- a theoretical goal on par with a 100 percent efficient solar panel.
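The arithmetic behind "linear" is easy to make concrete. The sketch below uses invented throughput numbers (the function name and the figures are mine, not a published benchmark) to show how the VMM's cut separates a real system from the zero-overhead ideal:

```python
def virtualization_efficiency(native_throughput, vm_throughputs):
    """Fraction of the physical core's throughput that survives after
    the VMM takes its cut. 1.0 means zero overhead -- truly linear."""
    return sum(vm_throughputs) / native_throughput

# Hypothetical numbers: a core that natively retires 1,000 units of
# work per second, split into two VMs that each see 460 units/sec.
eff = virtualization_efficiency(1000, [460, 460])
# eff is 0.92: each virtual core runs at 46 percent of its parent,
# not the ideal 50 percent; the missing 8 percent is VMM overhead.
```

Hardware assists aim to shrink that gap, not close it entirely; a perfectly linear split, like the 100 percent solar panel, stays theoretical.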
But the Pacifica technology from AMD and Intel's forthcoming Vanderpool technology will obviate the need for the performance-sapping work-arounds that make software-based x86 virtualization behave a lot like emulation. Connectix, the company Microsoft acquired to bring Virtual PC and Virtual Server to its product line, illustrated this beautifully by releasing an x86 emulation solution for the PowerPC-based Macintosh that is functionally identical to its virtualization solution for x86 systems. It's as if Connectix figured that since x86 virtualization called for emulating some of the operation of the CPU itself, it might as well go the extra mile and craft an x86 entirely in software.
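Why does software-only x86 virtualization shade into emulation? On pre-Pacifica x86, some instructions are sensitive to privilege yet do not trap when run deprivileged: POPF, for example, silently drops a guest OS's attempt to change the interrupt flag instead of faulting to the VMM. The toy model below (the instruction names and machine state are illustrative, not real x86 semantics) shows why a software VMM must scan and rewrite guest code rather than simply run it:

```python
# Toy model of why classic trap-and-emulate breaks on pre-Pacifica x86.
class Guest:
    def __init__(self):
        self.interrupts_enabled = True  # the guest's virtual CPU state

def run_instruction(op, guest, vmm_scans_code):
    if op == "CLI_PRIVILEGED":
        # A truly privileged op traps when run deprivileged; the VMM
        # catches the trap and emulates it against the guest's state.
        guest.interrupts_enabled = False
        return "trapped-and-emulated"
    if op == "POPF_SENSITIVE":
        # Sensitive but NOT privileged: in ring 3 the real CPU silently
        # ignores the interrupt-flag change instead of trapping.
        if vmm_scans_code:
            # Binary translation: the VMM rewrote this instruction in
            # advance, so the guest's virtual state is kept correct.
            guest.interrupts_enabled = False
            return "translated"
        # The guest's intended flag change is lost without any fault --
        # its OS now believes something about the CPU that isn't true.
        return "silently-wrong"
    return "direct"  # ordinary instructions run natively, full speed

g = Guest()
print(run_instruction("POPF_SENSITIVE", g, vmm_scans_code=False))
print(run_instruction("POPF_SENSITIVE", g, vmm_scans_code=True))
```

That scan-and-translate pass over every block of guest code is the "extra mile" toward full emulation that hardware-assisted virtualization makes unnecessary.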
The Pacifica and Vanderpool on-CPU virtualization technologies eliminate the need to emulate an x86 in order to virtualize it. I have not seen either technology in hardware, but virtualization done with the Pacifica or Vanderpool hardware assist will blow the doors off software virtualization from its debut, assuming a virtual machine monitor exists that exploits x86 hardware virtualization. In my estimation, AMD and Intel have placed virtualization within the reach of open source developers. AMD's delivery of a software-based CPU emulator incorporating the Pacifica specification guarantees that Pacifica will have software ready to roll on day one.
In the run-up to Pacifica and Vanderpool, the x86 virtualization landscape is evolving so rapidly that there seems to be more confusion than excitement. But there is ample reason for excitement; just keep your eyes on the prize and not on vendors' positioning. What's the prize? Operating systems that deliver secure, transparent, high-performance, hardware-based virtualization as a standard feature, and management tools that take advantage of the ability to create, destroy, suspend, relocate, and monitor virtual machines throughout an enterprise.
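What such management tools might expose can be sketched as a hypothetical interface covering the lifecycle operations named above; every class, method, and host name here is invented, not any vendor's actual API:

```python
from enum import Enum

class VMState(Enum):
    RUNNING = "running"
    SUSPENDED = "suspended"
    DESTROYED = "destroyed"

class VirtualMachine:
    """Hypothetical handle for one VM: create, destroy, suspend,
    relocate, and monitor -- the operations enterprise tools need."""
    def __init__(self, name, host):          # "create"
        self.name, self.host = name, host
        self.state = VMState.RUNNING
    def suspend(self):
        self.state = VMState.SUSPENDED
    def relocate(self, new_host):
        # Real tools would live-migrate memory and device state;
        # here it is only a bookkeeping move between hosts.
        self.host = new_host
    def destroy(self):
        self.state = VMState.DESTROYED
    def monitor(self):
        return {"name": self.name, "host": self.host,
                "state": self.state.value}

vm = VirtualMachine("payroll", host="rack1-node3")
vm.relocate("rack2-node7")
vm.suspend()
print(vm.monitor())
```

The point is not the code but the shape: once virtualization is a standard, hardware-backed operating system feature, a VM becomes just another managed object.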