Virtualization shoot-out: Red Hat Enterprise Virtualization

Red Hat's server virtualization solution mixes ease and scalability with a few odd limitations


Scheduling backups of RHEV virtual machines is handled via the command line; there's no GUI equivalent. This essentially amounts to writing some cron jobs with CLI calls. It's not terribly difficult, but it's not as clean and fluid as the Microsoft and VMware solutions, which have built-in backup schedulers.
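For illustration, here is the sort of thing the cron approach entails: a small script that snapshots and exports each VM, fired nightly. The rhev-snapshot and rhev-export commands below are hypothetical placeholders rather than actual RHEV CLI syntax, which varies by version, but the shape of the job is the same.

    #!/usr/bin/env python
    # rhev-backup.py -- hypothetical nightly backup driver for RHEV VMs.
    # Run from cron, e.g.:  30 1 * * * /usr/local/bin/rhev-backup.py
    # "rhev-snapshot" and "rhev-export" are placeholders, not real RHEV
    # CLI commands; substitute whatever your RHEV version provides.

    import subprocess
    import sys
    from datetime import date

    VMS = ["web01", "db01", "app01"]   # example VM names to back up

    def backup(vm):
        tag = "backup-%s" % date.today().isoformat()
        # Snapshot the VM, then export the snapshot to backup storage.
        subprocess.check_call(["rhev-snapshot", "--vm", vm, "--name", tag])
        subprocess.check_call(["rhev-export", "--vm", vm, "--snapshot", tag,
                               "--dest", "/backups/%s" % vm])

    if __name__ == "__main__":
        status = 0
        for vm in VMS:
            try:
                backup(vm)
            except subprocess.CalledProcessError as err:
                print("backup of %s failed: %s" % (vm, err))
                status = 1
        sys.exit(status)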

Updating the hosts is fairly straightforward, with automated processes to determine when updates are available, place the host into maintenance mode, evacuate its virtual machines, perform the updates, and bring the host back into the cluster. If the virtualization host runs the full RHEL installation rather than the small-footprint RHEL Hypervisor build, updates can also be delivered through Red Hat's RHN update mechanisms.
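To make the sequence concrete, here is a minimal sketch of that rolling-update loop. The functions are illustrative stubs, not RHEV's actual API; they simply name the steps in the order the platform performs them.

    # Illustrative stubs, not RHEV's API: they trace the steps of an
    # automated host update in the order the platform performs them.

    CLUSTER_HOSTS = ["rhevh1", "rhevh2", "rhevh3"]   # example host names

    def enter_maintenance_mode(host):
        # Maintenance mode live-migrates the host's VMs to other nodes.
        print("%s: entering maintenance mode, evacuating VMs" % host)

    def apply_updates(host):
        # Full RHEL hosts can pull packages via RHN; RHEL Hypervisor
        # hosts take a small-footprint image update instead.
        print("%s: applying updates" % host)

    def reactivate(host):
        print("%s: back in the cluster" % host)

    # Updating one host at a time keeps cluster capacity available.
    for host in CLUSTER_HOSTS:
        enter_maintenance_mode(host)
        apply_updates(host)
        reactivate(host)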

Snapshots and templates work as you might expect. However, as noted above, deploying many servers from a single template requires the use of pools, which are useful for desktops but don't really work well for servers. You can always resort to the CLI and scripting, but otherwise, deploying servers is a manual process.
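A scripted deployment might look something like the following sketch, which loops a hypothetical CLI create command over a template. The rhev-vm-create command and its flags are placeholders; RHEV's real scripting interface differs, but the looping pattern is the point.

    #!/usr/bin/env python
    # Hypothetical sketch of scripting server deployment from a template.
    # "rhev-vm-create" and its flags are placeholders, not RHEV's actual
    # scripting interface.

    import subprocess

    TEMPLATE = "rhel6-base"        # example template name
    CLUSTER = "production"         # example cluster name

    for i in range(1, 11):         # deploy web01 through web10
        name = "web%02d" % i
        subprocess.check_call([
            "rhev-vm-create",      # placeholder CLI command
            "--template", TEMPLATE,
            "--name", name,
            "--cluster", CLUSTER,
        ])
        print("requested %s from template %s" % (name, TEMPLATE))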

RHEV performance
RHEV held its own with the other vendors in the performance tests, though it threw us an interesting curveball. A virtual CPU in VMware and other solutions has traditionally corresponded to a single socket with a single core. RHEV and VMware vSphere (starting with v4.1) offer more flexibility: four vCPUs can be configured as one socket with four cores or as four sockets with one core each. Naturally, this affects performance.
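You can verify which topology a Linux guest actually sees by parsing /proc/cpuinfo, where distinct "physical id" values correspond to sockets and "cpu cores" reports cores per socket. A quick sketch:

    #!/usr/bin/env python
    # Report the vCPU topology as seen from inside a Linux guest.
    # Distinct "physical id" values in /proc/cpuinfo correspond to
    # sockets; "cpu cores" reports cores per socket.

    sockets = set()
    cores_per_socket = 1

    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                sockets.add(line.split(":")[1].strip())
            elif line.startswith("cpu cores"):
                cores_per_socket = int(line.split(":")[1].strip())

    print("sockets: %d, cores per socket: %d"
          % (max(len(sockets), 1), cores_per_socket))

Run in a VM configured with one socket and four cores, it reports 1 and 4; with four single-core sockets, 4 and 1.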

When we first ran the benchmarks, our RHEV virtual machine was configured with one socket and four cores. The results were dismal: RHEV kept pace with the other solutions with two concurrent processes running, but as soon as four or more were running, it lagged the pack significantly.

After some experimentation, we changed the vCPU configuration to four sockets with a single core each. The results improved dramatically: RHEV kept right in line with the other vendors in the Linux tests and most of the Windows tests, on both the loaded and unloaded test passes. (For a full description of the benchmarks and comparative results, see the main article.)

These performance differences are likely due in part to how the schedulers within the guest OSes treat the CPU resources, and possibly to how the RHEV Hypervisor handles CPU scheduling based on the vCPU topology. Whatever the cause, the results showed that the two ways of splitting up four virtual CPUs are far from interchangeable.

[Screenshot: RHEV Manager shows an overview of the farm and running VMs.]