Late to the virtualization game, Microsoft has been running several lengths behind the competition in this space for years. However, the new features and strong performance present in Windows Server 2008 R2 SP1 show that the company hasn't been twiddling its thumbs. It's clearly been working hard at bringing a compelling and competitive virtualization solution to the market.
There's plenty to like in Hyper-V these days, not least its price compared to the other major players. But whereas that lower price used to mean significantly diminished features and performance, the gap has closed. Hyper-V now offers the big features -- live VM migration, load balancing, and high availability -- as well as a more fluid management interface in Microsoft System Center Virtual Machine Manager 2008 R2 (VMM).
One very notable addition to Hyper-V in Windows Server 2008 R2 SP1 is dynamic memory. By specifying a minimum and maximum RAM allotment per virtual machine, as well as a buffer to maintain over actual memory requirements, you can configure Hyper-V to grow and shrink RAM allocations as virtual machines require. This means you could give a virtual machine 2GB of RAM, but allow it to grow up to 4GB as needed. If the VM needs less, Hyper-V can then reduce physical RAM usage on the host. In situations where a host exhausts physical RAM, Hyper-V will begin reducing the allotted RAM to running virtual machines based on their priority.
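The allocation logic described above boils down to a simple rule set. Here's a minimal Python sketch of that rule set -- the names, numbers, and reclaim policy are illustrative approximations, not Microsoft's actual implementation:

```python
# Illustrative sketch of Hyper-V-style dynamic memory (not actual Microsoft code).
# Each VM is granted its current demand plus a buffer, clamped to its configured
# minimum/maximum; when the host runs short on physical RAM, allocations shrink
# starting with the lowest-priority VMs, never dropping below each VM's minimum.

def target_allocation(demand_mb, minimum_mb, maximum_mb, buffer_pct):
    """Demand plus a buffer percentage, clamped to the configured min/max."""
    wanted = demand_mb * (1 + buffer_pct / 100)
    return max(minimum_mb, min(maximum_mb, wanted))

def balance_host(vms, host_ram_mb):
    """vms: list of dicts with demand, min, max, buffer, priority (higher = keep)."""
    alloc = {vm["name"]: target_allocation(vm["demand"], vm["min"],
                                           vm["max"], vm["buffer"])
             for vm in vms}
    shortfall = sum(alloc.values()) - host_ram_mb
    # Reclaim from lowest-priority VMs first.
    for vm in sorted(vms, key=lambda v: v["priority"]):
        if shortfall <= 0:
            break
        take = min(alloc[vm["name"]] - vm["min"], shortfall)
        alloc[vm["name"]] -= take
        shortfall -= take
    return alloc
```

For example, a VM configured with a 2,048MB minimum, 4,096MB maximum, and 20 percent buffer that currently demands 3,000MB would be granted 3,600MB -- until host pressure forces a lower-priority VM back toward its minimum.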
Like memory management in VMware's hypervisor, Hyper-V's dynamic memory allows you to run a higher density of VMs on each host. Microsoft's method of memory allocation, which utilizes a memory balloon that can expand and contract as needed, has clear benefits, but doesn't go as far as VMware's or Red Hat's, which leverage advanced features such as page sharing and RAM compression. Plus, Hyper-V's dynamic memory works only with Windows guests; VMware and Red Hat have no such limitation.
Unless you leverage additional Microsoft technologies such as System Center Operations Manager to build and manage your Hyper-V hosts (a significant task in itself), be prepared to repeat the same configuration steps on each host as you build the cluster and again every time you add a host. For our test, we configured all four Hyper-V hosts manually.
We ran into a few relatively minor problems during the initial build, all revolving around VLAN tagging. Using the Intel X-520 driver's VLAN capabilities, we set up virtual interfaces with VLAN tagging and presented them to the Hyper-V hosts as regular networks. Even though these interfaces were already tagged, we had to specify each network's VLAN tag not only in the network definition on the host, but also on every virtual machine connected to those networks -- a step not necessary with other solutions. Oddly, migrating VMs from one host to another outside of VMM caused those tags to disappear, leaving the virtual machine disconnected from its networks. When we used VMM to migrate the same VMs, the VLAN IDs were maintained.
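Because a dropped tag fails silently until the VM loses connectivity, a post-migration sanity check is worth having. The sketch below models the problem with a hypothetical data structure of our own devising (it is not Hyper-V's API), flagging any VM NIC whose tag no longer matches its network's VLAN ID:

```python
# Hypothetical post-migration check: every VM NIC attached to a tagged network
# should still carry that network's VLAN ID. The data model here is illustrative,
# not a real Hyper-V or VMM interface.

def find_dropped_tags(networks, vms):
    """networks: {network_name: vlan_id}
    vms: {vm_name: {network_name: vlan_id_or_None}}
    Returns (vm, network) pairs whose NIC tag is missing or wrong."""
    problems = []
    for vm, nics in vms.items():
        for net, tag in nics.items():
            expected = networks.get(net)
            if expected is not None and tag != expected:
                problems.append((vm, net))
    return problems
```

Fed an inventory pulled after each migration, a check like this would have caught our disconnected VMs immediately instead of after the fact.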
There are other ways to connect Hyper-V virtual machines to trunked VLANs, but they aren't as simple as defining a trunked VLAN as a network and applying that network to the VM. It's functional, but not a very fluid process. This is a notable issue, as very few significant virtualized infrastructures operate without VLANs.
Hyper-V R2 management
The management aspects of Hyper-V are not contained in a single console, but scattered throughout the various supporting players that Microsoft has leveraged to bring higher-end features to the solution. Although most basic VM tasks can be controlled through Virtual Machine Manager, other tasks such as load balancing, backups, host updates, and patching are handled by Operations Manager and Configuration Manager. The plethora of management tools can get tedious when you're hunting for one specific function that might live in any of several consoles. Also, there's a noticeable lag in host and VM status updates in the VMM console, so a virtual machine that is heavily loaded might show a low CPU load in the display, which is annoying.
To build our test virtual machines, we modified the PowerShell code generated from a simple clone action, changed the VM name, and ran the script again to build the next VM. This process is simplified by a PowerShell button right on the console that launches a PowerShell prompt.
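The modify-and-rerun pattern is simple enough to automate with a few lines of scripting. Here's a rough sketch of the idea in Python; the cmdlet text in the template string is a stand-in for whatever clone script VMM generates for you, not verbatim VMM 2008 R2 output:

```python
# Sketch: capture the clone script VMM generates once, then stamp out one
# copy per new VM name. TEMPLATE is a placeholder, not real VMM output.

TEMPLATE = 'New-VM -Name "{name}"  # plus the rest of the captured clone commands'

def scripts_for(names):
    """Return a per-VM-name copy of the captured clone script."""
    return {name: TEMPLATE.format(name=name) for name in names}
```

Each generated script can then be pasted straight into the PowerShell prompt the console button launches.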
There are also provisions in VMM that allow for automated configuration of cloned instances, not unlike VMware's guest customization specifications. However, Microsoft's auto-configuration is limited to Windows guests and isn't as malleable as VMware's tools.
Live migrations of Linux and Windows VMs proved snappy and resulted in no significant processing or networking performance problems during the operation. Flood pings from servers during migrations showed no packet loss at 1,000 packets per second; there were delays in packet delivery during the switch, but nothing out of the ordinary.
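Quantifying loss during a migration comes down to reading the flood-ping summary. A small helper for that, assuming the familiar iputils-style summary line (the parsing is ours; the format is the assumption):

```python
import re

# Parse an iputils-style ping summary line, e.g.
# "1000 packets transmitted, 1000 received, 0% packet loss, time 999ms",
# and compute the loss percentage from the raw counts.

def packet_loss(summary_line):
    m = re.search(r"(\d+) packets transmitted, (\d+) received", summary_line)
    if not m:
        raise ValueError("unrecognized ping summary: " + summary_line)
    sent, received = int(m.group(1)), int(m.group(2))
    return 100.0 * (sent - received) / sent
```

In our runs, the computed loss stayed at zero across the live migration window; only delivery timing shifted during the host switch.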
The high availability and load balancing capabilities, delivered through VMM, Operations Manager, and Cluster Services, also worked as advertised. "PRO Tips" alerts pop up when certain thresholds are met or exceeded. The solution can then act on these notifications automatically, live migrating VMs to other hosts, or simply recommend that action be taken.
It takes a while to dig into Operations Manager to set thresholds, and the whole process is significantly less straightforward than using VMware's Distributed Resource Scheduler, for example, but it is functional. On the DR front, when a host blade was yanked out of the rack, the VMs that had been running on that blade began booting on another host very quickly.
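Stripped of the console plumbing, the PRO Tips behavior amounts to threshold-driven recommendations that can optionally be auto-executed. In sketch form -- the threshold values and the action text here are ours, not Operations Manager defaults:

```python
# Illustrative threshold-driven recommendation logic, loosely modeled on the
# PRO Tips behavior described above. Thresholds and actions are invented.

def pro_tip(host_cpu_pct, host_mem_pct, auto_migrate=False,
            cpu_threshold=80, mem_threshold=90):
    """Return "ok", a recommendation, or (if auto_migrate) the action itself."""
    if host_cpu_pct < cpu_threshold and host_mem_pct < mem_threshold:
        return "ok"
    action = "migrate lowest-priority VM to least-loaded host"
    return action if auto_migrate else "recommend: " + action
```

The `auto_migrate` flag mirrors the choice between letting the solution act on an alert and merely surfacing the recommendation.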
Hyper-V R2 performance
We tested Hyper-V performance under Windows and Linux VMs, both with and without other VM loads on the physical server. A significant caveat: Hyper-V does not yet support Red Hat Enterprise Linux 6, so all of these tests were conducted on RHEL 5.5 with Microsoft's Linux Integration Services for Hyper-V tools installed. Thus, while the numbers are broadly in line with the other vendors', the version discrepancy prevents direct performance comparisons.
That said, the Hyper-V performance tests showed impressive improvement in Linux guest performance versus the last time we took a close look. In thread-for-thread comparisons between the physical server and a VM, both running Red Hat Enterprise Linux 5.5, the VM ran with 3 to 4 percent overhead depending on the test, which is quite acceptable. We did experience two kernel panics related to the Microsoft driver code (the Linux Integration Services components) on the RHEL 5.5 VMs, but they were sporadic and not repeatable. A heavy-duty 16-hour run of three four-vCPU RHEL VMs produced no further problems.
Microsoft Hyper-V performed quite well in the benchmarks overall, posting very competitive numbers in both the Linux and Windows tests. One exception was the crypto benchmark, which measures the cryptography throughput of the virtual machine. While the VMware and Citrix solutions posted numbers around 1.6GBps in these tests, Hyper-V consistently hovered around 500MBps. The significant lag in AES performance stems from the fact that, unlike VMware and Citrix, Hyper-V doesn't expose the AES-NI instructions in the Intel Westmere CPU to its VMs.
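You can verify this from inside a Linux guest by checking whether the `aes` CPU flag shows up in /proc/cpuinfo. A small helper that does the check -- written against the text of the file so the parsing can be shown on a sample string:

```python
# Check whether a guest sees the AES-NI instruction set: on Linux, an exposed
# AES-NI capability appears as the "aes" token on the "flags" lines of
# /proc/cpuinfo. Takes the file's text so the logic is testable in isolation.

def has_aes_ni(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "aes" in flags.split():
                return True
    return False

# On a live guest:
#   has_aes_ni(open("/proc/cpuinfo").read())
```

Run inside a Hyper-V R2 SP1 guest on Westmere hardware, a check like this comes back empty-handed, which is consistent with the roughly threefold crypto gap we measured.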
The intercore bandwidth and latency tests came in very close to VMware's, and the memory tests actually bested the rest of the pack. All of this points to the fact that Microsoft has made definite strides in VM performance, on the Linux side in particular.
Hyper-V has come a long way in providing enterprise-class features and delivering enterprise-class performance. However, one aspect of Hyper-V that cannot be overlooked is its reliance on a cast of supporting players. Once you add up the Operations Manager VMs, the SQL Server VMs backing Virtual Machine Manager, and the Configuration Manager VMs, you find yourself running eight VMs just to support your other VMs. On top of that, some of these management VMs consume vast amounts of RAM relative to what they're doing.