The new host affinity rules in DRS might not be useful to everyone, but the ability to create rules governing which hosts a given virtual machine can (or cannot) be migrated to helps in clusters where not every host is identical or connected to the same networks. For instance, if only a few hosts have connections to a DMZ network, you can create rules that restrict DMZ-connected virtual machines to vMotion only to those hosts.
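The effect of such a rule is easy to illustrate with a short sketch. This is a hypothetical data model, not VMware's actual API: a "must run on" rule intersects the cluster's host list with the rule's host group to yield the valid vMotion targets for a VM.

```python
# Illustrative sketch of a DRS-style "must run on hosts in group" rule
# (hypothetical names and structures; not VMware's actual API).

def eligible_hosts(cluster_hosts, vm, affinity_rules):
    """Return the hosts the VM may be migrated to, honoring affinity rules."""
    allowed = set(cluster_hosts)
    for rule in affinity_rules:
        if vm in rule["vm_group"] and rule["type"] == "must_run_on":
            allowed &= set(rule["host_group"])  # restrict to the host group
    return sorted(allowed)

cluster = ["esx01", "esx02", "esx03", "esx04"]
rules = [{"type": "must_run_on",
          "vm_group": {"dmz-web01"},
          "host_group": {"esx01", "esx02"}}]  # only these hosts see the DMZ

print(eligible_hosts(cluster, "dmz-web01", rules))  # ['esx01', 'esx02']
print(eligible_hosts(cluster, "app01", rules))      # all four hosts
```

A VM outside any rule's VM group remains free to migrate anywhere in the cluster, which is the behavior the feature preserves for hosts and workloads that don't need the restriction.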
There have also been improvements in USB device mapping. It's now possible to map a USB device to a virtual machine and maintain that mapping even through a vMotion of the virtual machine. This is especially important for applications that require USB hardware license keys to operate.
Additionally, for those working with Intel's Nehalem-EX 8-core server processors, vSphere 4.1 officially supports that platform.
Network and storage I/O control
One of the main thrusts of vSphere 4.1 is a pair of new I/O control frameworks. Storage I/O control is essentially QoS for storage, based on rules assigned to virtual machines. If there is congestion present on a storage link, higher-priority virtual machines will be given a larger share of the pipe than lower-priority virtual machines. While it's never a good idea to operate with a consistently congested storage pathway, this feature can ensure that critical virtual machines aren't choked during high-traffic periods or unexpected traffic surges.
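The arithmetic behind shares-based prioritization is straightforward, and the same proportional-share principle underlies the network I/O control discussed next. The sketch below is illustrative only, not VMware's actual congestion algorithm; the low/normal/high values of 500/1000/2000 follow VMware's usual disk-share conventions.

```python
# Illustrative shares-based allocation under congestion: when the link is
# saturated, each VM receives bandwidth proportional to its share value.
# (Not VMware's actual Storage I/O Control algorithm.)

def allocate(total_iops, shares):
    """Split total_iops among VMs in proportion to their share values."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

# Conventional share values: low = 500, normal = 1000, high = 2000
vms = {"db01": 2000, "web01": 1000, "test01": 500}
print(allocate(7000, vms))  # db01: 4000.0, web01: 2000.0, test01: 1000.0
```

When the link is uncongested, shares don't constrain anyone; they only determine who wins when demand exceeds capacity, which is why a persistently congested path is still a design problem rather than something this feature solves.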
In a similar fashion, network I/O control can be used to dictate bandwidth allotments to particular virtual machines when a network link is at or near capacity. There are some server hardware offerings, such as HP's Virtual Connect, that offer similar functionality on the switching side, but this feature is now available within vSphere proper. It's really designed for high-density hosts with 10G links, but can be leveraged at just about any level.
There are other enhancements at the host level too, such as support for iSCSI offloading NICs from Broadcom, NFS performance enhancements, and a functional boot-from-SAN manager for ESXi that can run over iSCSI, FCoE, and Fibre Channel.
A few new features are found in the HA and DRS functions, mostly providing tighter integration with FT (Fault Tolerance) features. Virtual machines configured for FT can now play nice with DRS, for instance, allowing for load balancing of virtual machines that also require fault tolerance. In addition, Windows clustering services can now be integrated with VMware's HA functions, ostensibly providing a deeper level of failover functionality in Windows environments.
In the lab with vSphere 4.1
I tested a vSphere 4.1 release candidate on a variety of boxes ranging from a new Dell R810 2U server running two Intel Nehalem-EX CPUs to an old Sun X4150 1U server running two Intel E5440 CPUs, all linked to a Dell EqualLogic 3800XV iSCSI SAN array and a Snap Server NAS.
As with previous VMware clients, you'll run into some trouble trying to access older versions of vCenter with the newer client. This can be a problem in environments that are migrating between different versions or that have multiple versions running in production. A new twist is that client downloads are no longer available from the ESX hosts, but from a VMware-hosted client distribution site. Otherwise, the client installation on 64-bit Windows 7 was normal.