First look: VMware vSphere 4.1 keeps the virtualization crown

With scalability improvements, network and storage I/O control, and countless other enhancements, VMware continues to redefine the possibilities for server virtualization


The new host affinity rules in DRS might not be useful to everyone, but the ability to define which hosts a given virtual machine can (or cannot) be migrated to helps in clusters where not every host is identical or connected to the same networks. For instance, if only a few hosts have connections to a DMZ network, you can create rules that restrict DMZ-connected virtual machines to vMotion only to those hosts.
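For those who script against vCenter, here's a minimal sketch of such a rule built through the vSphere API's cluster reconfiguration call, using the open source pyVmomi Python bindings. The vCenter address, cluster, host, and VM names are all hypothetical, and error handling is omitted.

```python
# Hypothetical sketch: pin a group of DMZ virtual machines to the hosts that
# actually have DMZ uplinks, via a DRS VM-to-host affinity rule.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

cluster = find_by_name(vim.ClusterComputeResource, "Production")
dmz_hosts = [find_by_name(vim.HostSystem, h) for h in ("esx01", "esx02")]
dmz_vms = [find_by_name(vim.VirtualMachine, v) for v in ("dmz-web01", "dmz-web02")]

# A host affinity rule has three pieces: a host group, a VM group, and a rule
# binding the two. All three ride in a single cluster reconfigure call.
spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
                              info=vim.cluster.HostGroup(name="dmz-hosts", host=dmz_hosts)),
        vim.cluster.GroupSpec(operation="add",
                              info=vim.cluster.VmGroup(name="dmz-vms", vm=dmz_vms)),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
            name="dmz-vms-on-dmz-hosts",
            enabled=True,
            mandatory=True,            # "must run on" rather than "should run on"
            vmGroupName="dmz-vms",
            affineHostGroupName="dmz-hosts",
        )),
    ],
)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```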

There have also been improvements in USB device mapping. It's now possible to map a USB device to a virtual machine and maintain that mapping even through a vMotion of the virtual machine. This is especially important for applications that require USB hardware license keys to operate.

Additionally, for those working with Intel's Nehalem-EX 8-core server processors, vSphere 4.1 officially supports that platform.

Network and storage I/O control
One of the main thrusts of vSphere 4.1 is a pair of new I/O control frameworks. Storage I/O control is essentially QoS for storage, based on share values assigned to virtual machines. If congestion is present on a storage link, higher-priority virtual machines are given a larger share of the pipe than lower-priority virtual machines. While it's never a good idea to operate with a consistently congested storage pathway, this feature can ensure that critical virtual machines aren't choked during high-traffic periods or unexpected traffic surges.
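To illustrate how those priorities are expressed, the sketch below bumps the storage I/O shares on one virtual machine's first disk, again through the pyVmomi bindings for the vSphere API. The vCenter address and VM name are hypothetical.

```python
# Hypothetical sketch: give a critical VM's first virtual disk more storage
# I/O shares so it wins a larger slice of a congested datastore path.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-prod01")

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
# Shares are relative weights; 2000 matches the "high" preset for a single disk.
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=2000))

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(device=disk, operation="edit")]))
Disconnect(si)
```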

In a similar fashion, network I/O control can be used to dictate bandwidth allotments to classes of traffic (virtual machine, vMotion, iSCSI, NFS, and so on) when a network link is at or near capacity. There are some server hardware offerings, such as HP's Virtual Connect, that offer similar functionality on the switching side, but this feature is now available within vSphere proper. It's really designed for high-density hosts with 10G links, but it can be leveraged at just about any level.
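For a rough idea of how this looks programmatically, the sketch below enables network I/O control on a vSphere Distributed Switch and reads back its traffic resource pools. The switch name is hypothetical, and the connection setup matches the earlier sketches.

```python
# Hypothetical sketch: turn on network I/O control for a vSphere Distributed
# Switch and list the share values of its network resource pools.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch01")

dvs.EnableNetworkResourceManagement(enable=True)    # flip the NIOC switch on the VDS
for pool in dvs.networkResourcePool:                # VM, vMotion, iSCSI, NFS traffic, etc.
    print(pool.key, pool.allocationInfo.shares.shares)
Disconnect(si)
```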

There are other enhancements at the host level too, such as support for iSCSI offload on Broadcom NICs, NFS performance improvements, and boot-from-SAN support for ESXi over iSCSI, FCoE, and Fibre Channel.

A few new features are found in the HA and DRS functions, mostly providing tighter integration with FT (Fault Tolerance) features. Virtual machines configured for FT can now play nice with DRS, for instance, allowing for load balancing of virtual machines that also require fault tolerance. In addition, Windows clustering services can now be integrated with VMware's HA functions, ostensibly providing a deeper level of failover functionality in Windows environments.

In the lab with vSphere 4.1
I tested a vSphere 4.1 release candidate on a variety of boxes ranging from a new Dell R810 2U server running two Intel Nehalem-EX CPUs to an old Sun X4150 1U server running two Intel E5440 CPUs, all linked to a Dell EqualLogic 3800XV iSCSI SAN array and a Snap Server NAS.

As with previous VMware clients, you'll run into some trouble trying to access older versions of vCenter with the newer client. This can be a problem in environments that are migrating between different versions or that have multiple versions running in production. A new twist is that client downloads are no longer available from the ESX hosts, but from a VMware-hosted client distribution site. Otherwise, the client installation on 64-bit Windows 7 was normal.
