It’s been a rough week for virtualization beta testing, at least for me. Last week, I took a look at a late beta of VMware’s VI3 update (with ESX Server 3.5 and VirtualCenter 2.5), and was impressed overall, but found more than a few bugs and saw more than my share of host crashes. This week, I looked at Microsoft’s next-generation hypervisor, dubbed Hyper-V, which will be released as a component of Windows Server 2008. Hyper-V is a whole new virtualization platform for Microsoft, going beyond Virtual Server and adding new management features, expanded snapshot support, and hopefully better performance. After my brief look at the beta release, I can confirm that this is truly beta, and it has a long way to go to be production-ready.
[ See also: "Preview: VMware Infrastructure 3 update builds on the base" ]
The basis of Hyper-V is the new Windows Server 2008 platform, which carries with it a host of new features and a vaguely Vista-esque look and feel. Love it or hate it, Vista-ness is apparently here to stay. Installing Hyper-V on Windows Server 2008 is as straightforward as you'd expect: an admin simply adds the Hyper-V role to the server and designates which network interfaces to use for virtual machines. After installation, the server reboots and Hyper-V is ready for action.
I installed the beta on a solid, middle-of-the-road server, a Dell PowerEdge 2950 with two dual-core 3GHz Intel CPUs, 4GB of RAM, and a single 72GB U320 SCSI drive. I had newer and more powerful hardware in the lab, but I wanted to run the beta on hardware that was virtually guaranteed to have built-in driver support. I wasn’t disappointed -- everything worked right out of the box. From there, I had the system ready to handle virtual machines in a matter of minutes. A few minutes later, I ran into problems.
Disk Manager lockup drill
I first created a Windows Server 2003 VM by running through the simple wizard. Aside from the fact that Hyper-V wanted to create a 127GB virtual disk on a fictitious 2TB file store instead of the actual 70GB local disk, the process was quick and easy. Then I tried to boot the VM from an ISO located on a network share, much like I’ve done for countless VMware installations. Unfortunately, although my user account had rights to the share, Hyper-V requires the system account to have read/write access to the share to read the ISO image. Rather than reconfigure network sharing security preferences, I copied the ISO to the local system and proceeded to install the new VM that way. While the Windows VM setup was running, I built another VM to run Linux, and configured it for PXE boot, again like I’ve done on many, many VMware installations. The VM successfully PXE booted and the installation proceeded normally.
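For those who would rather not copy ISOs to the local disk, the share-permission workaround can be scripted. This is only a sketch under the behavior described above: the folder path, domain, and host name are placeholders, and it assumes the VM worker process reads the image under the host's machine account rather than the logged-in user.

```shell
rem Grant the Hyper-V host's computer account read access to the ISO
rem folder (NTFS ACL); (OI)(CI) makes the grant inherit to files and
rem subfolders. Path and account names are placeholders.
icacls D:\ISOs /grant "MYDOMAIN\HYPERV-HOST$:(OI)(CI)R"

rem The share-level permission must also admit the computer account.
net share ISOs=D:\ISOs /grant:MYDOMAIN\HYPERV-HOST$,READ
```

Whether this is preferable to copying the ISO locally depends on how locked down the file server is; on a test box, the local copy is the faster fix.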
While the two VMs were building, I explored the system a bit, specifically Server Manager and the iSCSI Initiator, in order to mount an iSCSI LUN for the VMs. This led me nowhere. Although I could successfully discover and log on to the iSCSI LUN, opening Disk Manager to partition and format the volume locked the application up tight. For five minutes at a stretch, I could still communicate with the server, but all VM activity was frozen and Server Manager was completely unresponsive. Manually ending the process did bring the server back, but I couldn't use the iSCSI LUN. In subsequent testing with the system quiescent and no iSCSI LUN mappings, the same scenario occurred consistently: Launching Disk Manager resulted in a lockup.
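The same discovery and login steps can be driven from the command line with Windows' built-in iscsicli tool instead of the graphical initiator; the portal address and target IQN below are placeholders, and the "Q" quick-connect variants skip iscsicli's notoriously verbose full syntax.

```shell
rem Add the target portal, list the targets it exposes, then log in.
rem 192.168.1.50 and the IQN are placeholder values for illustration.
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2007-09.com.example:storage.lun0
```

In my testing, of course, it wasn't the login that failed but Disk Manager afterward, so the CLI route would only have moved the problem, not solved it.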
So I scrapped the iSCSI idea, and went right for the management tools. The Hyper-V management console is laid out fairly well, although it’s certainly a departure from Microsoft’s normal management interfaces. It provides easy access to pertinent management tools for the server itself and the VMs running on the system. The dashboard shows all configured VMs and their state, as well as some limited performance data, and a recent screengrab of the server’s console window.
Speaking of console windows, don't expect to manage VMs in Hyper-V from an RDP session; VM console mouse support is basically nonexistent via Remote Desktop. Managing the server with the Hyper-V management tools installed on another system should eventually be possible, but I wasn't able to test this on 32-bit Windows XP because the only management-tool code currently available is x64, and it won't run on a 32-bit platform. In production, installing the management tools on a workstation will probably be the only feasible way to administer VMs.
My personal hotkey hangups
While working with the management tools, I was constantly annoyed at the method of releasing input focus on the VM. VMware's Ctrl-Alt hotkey for this action is embedded in my brain, and other hypervisors use the same hotkey specifically to fit that reflex. Microsoft uses the Ctrl-Alt combination, but also throws in the left arrow, requiring two hands to release the mouse from the VM console. This may seem like a minor issue, but it raised my ire many times while I jumped in and out of VMs during installation. There is a facility in the Hyper-V Manager to change this hotkey, but it offers only four choices, and all are as bad as or worse than Ctrl-Alt-Left Arrow.
Another ubiquitous hotkey, Ctrl-Alt-Delete, also works differently for VMs. Hit that combination to log into a VM and the host server captures the keys rather than the VM; passing it through to the VM requires Ctrl-Alt-End, versus VMware's Ctrl-Alt-Insert. Again, it's a little thing, but highly annoying when doing lots of work on multiple platforms. It would seem worthwhile for Microsoft to adopt the existing popular hotkey combinations rather than force its own.
These annoyances (and system stability issues) aside, it's obvious that Microsoft has made strides with Hyper-V. It's not yet a threat to any of the established virtualization players such as VMware or Virtual Iron, but it has promise. The basic functionality is there, as is nascent cross-platform support, and despite the stumbling blocks, I had Windows and Linux VMs running under Hyper-V. Some basic I/O testing on an otherwise quiescent Hyper-V host showed streaming writes on a Linux VM running at approximately 38MBps to local disk, which is decent performance, but reads inexplicably ran in the neighborhood of 8MBps. Further testing on production code and with faster hardware will tell the performance tale. Given Virtual Server's track record, VM performance is probably the biggest issue Microsoft has to overcome.
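For reference, streaming numbers of this sort can be gathered inside the Linux guest with a plain dd pair. This is a sketch of the kind of test run for the review, with a placeholder file name and size, not the exact commands used.

```shell
# Streaming write: conv=fdatasync forces the data to disk before dd
# reports a rate, so the figure reflects actual disk throughput rather
# than the guest's page cache.
dd if=/dev/zero of=/tmp/hv_iotest.bin bs=1M count=64 conv=fdatasync

# Streaming read of the same file. Drop the page cache first (as root),
# or the "read" will be served from RAM rather than the virtual disk:
#   echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/hv_iotest.bin of=/dev/null bs=1M
```

dd prints the elapsed time and throughput for each pass, which is enough to spot the kind of write/read asymmetry seen here.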
From what I’ve seen, Microsoft’s Hyper-V is roughly analogous to VMware Server 1.0, although not as polished. It doesn’t appear to be a significant challenge to VMware’s Virtual Infrastructure and ESX Server products, and given the fact that VMware Server is free, runs on Linux and Windows, and is considerably more mature, it’s questionable how many infrastructures will benefit from using Hyper-V over VMware Server. Hyper-V is certainly behind the curve, but shows that Microsoft sees the need to be competitive in this space. Only time will tell whether Microsoft can catch up to the virtualization leaders or will have to settle for a secondary role.