Still another way to achieve virtualization is to build in the capability for virtual servers at the OS level. Solaris Containers are an example of this, and Virtuozzo/OpenVZ does something similar for Linux.
With OS-level virtualization, there is no separate hypervisor layer. Instead, the host OS itself is responsible for dividing hardware resources among multiple virtual servers and keeping the servers independent of one another. The obvious distinction is that with OS-level virtualization all the virtual servers must run the same OS (though each instance has its own applications and user accounts).
What OS-level virtualization loses in terms of flexibility, it gains in native-speed performance. In addition, an architecture that uses a single, standard OS across all the virtual servers can be easier to manage than a more heterogeneous environment.
Easier but harder
Unlike mainframe hardware, PC hardware wasn't designed with virtualization in mind, so until recently software alone had to shoulder the burden. With the latest generation of x86 processors, AMD and Intel have added support for virtualization at the CPU level for the first time.
Unfortunately, the two companies' technologies were developed independently, which means they are not code-compatible, although they offer similar benefits. By taking responsibility for managing virtual server access to I/O channels and hardware resources, hardware virtualization support relieves the hypervisor of its most demanding babysitting chores. In addition to improving performance, this hardware support allows operating systems, including Windows, to run unmodified in para-virtualized environments.
CPU-level virtualization doesn't kick in automatically; virtualization software must be written specifically to support it. Because the benefits of these technologies are so compelling, however, virtualization software of all types is expected to support them as a matter of course.
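Whether a given x86 processor offers this support can be checked from software. On Linux, the kernel exposes the relevant CPUID bits as the `vmx` (Intel VT) and `svm` (AMD-V) entries in the flags line of `/proc/cpuinfo`. The sketch below parses that text; the helper name is ours, not a standard API:

```python
def hw_virt_support(cpuinfo_text):
    """Return which hardware virtualization extension the CPU flags
    advertise, based on the text of /proc/cpuinfo, or None if neither
    the Intel (vmx) nor the AMD (svm) flag is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like "flags : fpu vme ... vmx ..."
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
            return None
    return None
```

On a real Linux system you would pass in the contents of `open("/proc/cpuinfo").read()`; note that a hypervisor still has to opt in to using these extensions even when the flag is present.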
A virtual toolbox
Each method of virtualization has its advantages, depending on the situation. A group of servers all based on the same operating platform would be a good candidate for consolidation via OS-level virtualization, but the other technologies have benefits as well.
Para-virtualization represents the best of both worlds, especially when deployed in conjunction with virtualization-aware processors. It offers good performance coupled with the capability of running a heterogeneous mix of guest operating systems.
Full virtualization takes the greatest performance hit of the three methods, but it offers the advantage of completely isolating the guest OSes from each other and from the host OS. It is a good candidate for software quality assurance and testing, in addition to supporting the widest possible variety of guest OSes.
Full virtualization solutions offer other unique capabilities. For example, they can take “snapshots” of virtual servers to preserve their state and aid disaster recovery. These virtual server images can be used to provision new server instances quickly, and a growing number of software companies have even begun to offer evaluation versions of their products as downloadable, prepackaged virtual server images.
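As a rough sketch of the snapshot idea (not any vendor's actual mechanism), imagine a hypervisor that stores each guest's state as plain files in a directory; preserving a point-in-time image then amounts to copying that directory under a timestamped name. All names here are hypothetical:

```python
import pathlib
import shutil
import time

def snapshot_vm(vm_dir, snap_root):
    """Copy a virtual server's on-disk state (disk image, config, saved
    memory state) into a timestamped directory under snap_root, and
    return the path of the new snapshot."""
    vm = pathlib.Path(vm_dir)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(snap_root) / f"{vm.name}-{stamp}"
    shutil.copytree(vm, dest)  # snapshot = full copy of the state files
    return dest
```

The same copied image can serve both roles the article describes: restored in place for disaster recovery, or cloned under a new name to provision another server instance. Production hypervisors use copy-on-write rather than full copies, but the principle is the same.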
It’s important to remember that virtual servers require ongoing support and maintenance, just like physical ones. The increasing popularity of server virtualization has fostered a burgeoning market of third-party tools, ranging from physical-to-virtual migration utilities to virtualization-oriented versions of major systems management consoles. All are aimed at easing the transition from a traditional IT environment to an efficient, cost-effective virtualized one.
Read more about virtualization in InfoWorld's Virtualization Channel.