Red Hat's server virtualization solution derives from the company's 2008 acquisition of Qumranet, a small company that had been building a desktop virtualization solution based on KVM (Kernel-based Virtual Machine) technology. Unlike the hypervisors of VMware, Microsoft, and Citrix, Red Hat's virtualization does not rely solely on emulated hardware but uses paravirtualization wherever possible to map virtual machines directly to hardware via the /dev/kvm kernel interface.
While the original goal of Qumranet was desktop virtualization, Red Hat has moved the solution into the server virtualization space, supporting RHEL and Windows Server VMs, as well as Windows and Linux virtual desktops. Red Hat Enterprise Virtualization (RHEV) boasts an easy install, good performance, and strong management capabilities, with features -- including automated load balancing and high availability -- to support larger environments. It also has some quirks.
Red Hat's KVM-based hypervisor installs quickly and easily. Though you can run KVM from a full RHEL installation, we installed the bare-metal RHEV Hypervisor, which is essentially an RHEL 5.6 build with only the packages necessary to support KVM. The installation went exceedingly fast on each host, requiring a scant few configuration parameters fed through the CLI host console. These included basic management network setup, password modifications, and later, the IP address of the management server.
Here's where the Qumranet acquisition sticks out: The management server, RHEV Manager, is a Windows application written in .NET that runs on Windows Server 2008 R2 and requires Internet Explorer -- not what you'd expect from Red Hat. However, a rewrite that runs on Windows and RHEL is apparently in the works.
The installation of RHEV Manager is straightforward, requiring only a few mouse clicks and the usual progress-bar observation. RHEV Manager uses a Microsoft SQL Express database, much like VMware, and installs a few other packages as well. Naturally, the Web server role and ASP.NET are prerequisites on the Windows Server 2008 R2 system.
Once installed, each RHEV Hypervisor host is given the RHEV Manager IP address and checks in. Within the management interface, you then allow the host to participate and get started with the configuration. This process is certainly quick and easy, but other than storage, there's no facility for host configuration profiles or a way to apply a standard configuration to each host in a cluster.
Like the other platforms, RHEV runs with a data center and cluster mind-set, allowing you to collect hosts into various groups for ease of management and resource allocation. However, while storage is configured at the data center level and replicated across hosts, networks must be configured manually on every host in the cluster. The other solutions provide ways to replicate both storage and network configuration across hosts.
The desktop roots of RHEV surface in a number of places. Occasionally, messages and alerts will specifically reference "desktops," while some features seem to have been built only with desktops in mind. For example, creating pools of VMs offers a quick and easy way to build many servers from a single template, but limits the ability to configure them independently. You can't select a server built from a pool and turn high availability on or off, for instance, or set a priority level for that one server.
All the "big" features of server virtualization are present in RHEV, but some aspects don't function as you might expect. High availability and load balancing generally work quite well, but have a few quirks not seen in the other solutions. For example, if a single host has a very high VM load, and another server in the cluster is brought out of maintenance mode with no virtual machines whatsoever, virtual machines from the busy host will not begin migrating to the empty host unless at least one VM is manually migrated over. This manual step shouldn't be necessary.
Conversely, when a host is shut down via an external means -- such as by holding the power button or using remote management to power off the server -- RHEV will determine that the host has disappeared using IPMI hooks and begin booting the virtual machines that were flagged for high availability on other hosts. It will also automatically try to revive the downed host. However, if the host disappears completely and IPMI is no longer available -- as in the case of an abrupt power loss -- RHEV won't do anything unless and until an admin flags the server as down. This means that even virtual machines flagged for high availability will not automatically restart on other cluster hosts -- a big problem, should someone happen to unplug the wrong server or pull the wrong blade.
Scheduling backups of RHEV virtual machines is handled via the command line; there's no GUI equivalent. This essentially amounts to writing some cron jobs with CLI calls. It's not terribly difficult, but it's not as clean and fluid as the Microsoft and VMware solutions, which have built-in backup schedulers.
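In practice, that scheduling boils down to a crontab entry invoking the CLI. The sketch below is purely illustrative: "rhev-snapshot-vm" is a hypothetical stand-in for whatever wrapper script you build around RHEV's command-line tools (RHEV ships no command by that name), and the paths and VM name are made up.

```shell
# Hypothetical crontab entry: snapshot one VM nightly at 2:30 a.m.
# "rhev-snapshot-vm" is a placeholder for a local wrapper around the
# RHEV CLI -- not a command RHEV provides.
# m  h  dom mon dow  command
30   2  *   *   *    /usr/local/bin/rhev-snapshot-vm --vm web01 >> /var/log/rhev-backup.log 2>&1
```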
Updating the hosts is fairly straightforward, with automated processes to determine when updates are available, place the host into maintenance mode, evacuate the virtual machines, perform the updates, and bring the host back into the cluster. If the virtualization host is running the full RHEL installation rather than the small-footprint RHEV Hypervisor build, updates can be delivered through Red Hat's RHN update mechanisms as well.
Snapshots and templates work as you might expect. However, as noted above, deploying many servers from a single template requires the use of pools, which are useful for desktops but don't really work well for servers. You can always resort to the CLI and scripting, but otherwise, deploying servers is a manual process.
RHEV held its own against the other vendors in the performance tests, though it threw us an interesting curveball. A virtual CPU in VMware and other solutions has traditionally corresponded to a single socket with a single core. RHEV and VMware vSphere (starting with v4.1) give you more flexibility: four vCPUs can be configured as one socket with four cores or as four sockets with one core each. Naturally, this affects performance.
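RHEV's hypervisor is built on KVM, whose guests are managed through libvirt under the covers, and libvirt's domain XML is where this socket/core choice is ultimately expressed. A sketch of the two layouts for a four-vCPU guest (RHEV sets this through its GUI; you wouldn't normally edit the XML by hand):

```xml
<!-- Four vCPUs presented as one socket with four cores -->
<vcpu>4</vcpu>
<cpu>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>

<!-- The same four vCPUs presented as four single-core sockets -->
<vcpu>4</vcpu>
<cpu>
  <topology sockets='4' cores='1' threads='1'/>
</cpu>
```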
When the benchmarks were first run, we had configured our RHEV virtual machine with one socket and four cores. The results were dismal. Although RHEV kept pace with the other solutions when two concurrent processes were run, as soon as four or more concurrent processes were run RHEV lagged the pack significantly.
Like Hyper-V, RHEV doesn't appear to take advantage of the AES-NI instructions in Intel Westmere processors, which accounts for the far lower crypto test results. And like Citrix XenServer, RHEV stumbled when the host was placed under significant load, the overall benchmarks of the VM under test dropping significantly. Hyper-V and VMware are better at handling wider loads than the others.
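You can verify the crypto gap from inside a guest: if the hypervisor passes AES-NI through, the "aes" flag appears in the virtual CPU's feature list; if not, crypto workloads fall back to much slower software AES. A minimal check:

```shell
# Run inside the VM: does the virtual CPU expose Intel's AES-NI
# instructions? The "aes" flag in /proc/cpuinfo is the tell.
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
    echo "AES-NI exposed to this guest"
else
    echo "AES-NI not exposed to this guest"
fi
```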
One significant benefit of RHEV is the ability to leverage multiple memory management technologies, including page sharing, memory compression, and ballooning. These features all have pros and cons -- page sharing may be better for server workloads, while ballooning is better for desktop VMs, for example -- but having all of these options available is a good thing.
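On a RHEL/KVM host, the page-sharing piece (KSM, kernel same-page merging) is observable directly through sysfs, which makes it easy to see whether sharing is actually saving memory. A minimal sketch, assuming the standard KSM sysfs paths (it prints "n/a" on kernels without KSM):

```shell
# Report KSM (page sharing) activity from sysfs: whether scanning is
# enabled, and how many pages are shared vs. mapped to shared pages.
for f in run pages_shared pages_sharing; do
    if [ -r "/sys/kernel/mm/ksm/$f" ]; then
        echo "$f=$(cat /sys/kernel/mm/ksm/$f)"
    else
        echo "$f=n/a"
    fi
done
```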
The marriage of Qumranet and Red Hat is still in the early stages, and there's sure to be tighter integration with RHEL in the near future. As it stands now, RHEV exists on the edges of the overall Red Hat foundation. For instance, management agents are not yet available for RHEL virtual machines, though they are available for Windows. This means you can't issue a graceful shutdown command of an RHEL server from within RHEV, but must log into the server to perform the shutdown.
RHEV is sold on a subscription basis, which makes it initially cheaper than the alternatives. For 9-to-5 support, you can purchase RHEV for $499 per socket per year. A full farm of six dual-socket RHEV hosts will cost about $6,000 per year, and that price includes support and all upgrades as they are released.
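The arithmetic behind that figure is simple enough to spell out -- six hosts, two sockets each, $499 per socket per year:

```shell
# Annual subscription cost for the example farm.
hosts=6
sockets_per_host=2
price_per_socket=499   # dollars per socket per year, 9-to-5 support
echo $(( hosts * sockets_per_host * price_per_socket ))   # prints 5988
```

So the exact figure is $5,988 per year, which the review rounds to about $6,000.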
Although RHEV certainly appears to be in a transitional phase, the underlying capabilities are strong, and KVM itself has progressed well beyond what RHEV's management framework currently exposes. This is evident in the advances made in KVM in RHEL 6, versus the version reviewed here, which is based on RHEL 5.6.1.