2. vMotion undergoes scalability and performance enhancements
VMware vMotion is a core vSphere technology and has long been a feather in VMware's cap compared with its competitors. But as other hypervisors have advanced, VMware has had to keep improving its own features to maintain that competitive edge.
Many of vSphere 5.0's features rely on the existence of vMotion, so it should come as little surprise that VMware has updated this core component. One of the most substantial changes to vMotion is its multi-NIC capability. vMotion can now use multiple NICs concurrently to decrease the time a migration takes, which means even a single vMotion can leverage all of the configured vMotion NICs. Prior to vSphere 5.0, only a single NIC was used for a vMotion-enabled VMkernel interface. Enabling multiple NICs removes some of the bandwidth and throughput constraints associated with large, memory-active virtual machines: vMotion can use up to sixteen 1GbE NICs or four 10GbE NICs and saturate all of the connections, greatly increasing the speed of migrations.
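To see why multiple NICs matter, a back-of-the-envelope sketch of the memory pre-copy phase helps. This is illustrative arithmetic only, not actual product behavior -- real vMotion times also depend on page dirtying rates, protocol overhead, and how evenly traffic spreads across links.

```python
# Rough estimate of vMotion memory pre-copy time across the
# aggregate bandwidth of all configured vMotion NICs.
# Simplified model: assumes ideal, even utilization of every link.

def precopy_time_seconds(memory_gb, nic_count, nic_gbps):
    """Time to copy a VM's memory once over the combined link rate."""
    total_bits = memory_gb * 8 * 1024**3          # memory size in bits
    aggregate_bps = nic_count * nic_gbps * 10**9  # combined link rate in bits/s
    return total_bits / aggregate_bps

# A 64GB VM over a single 1GbE NIC vs. four 10GbE NICs:
single = precopy_time_seconds(64, 1, 1)   # roughly 550 seconds
multi = precopy_time_seconds(64, 4, 10)   # under 14 seconds
```

Even this crude model shows why saturating four 10GbE links instead of one 1GbE link turns a multi-minute migration into one measured in seconds.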
These enhancements also help vMotion scale for long-distance migrations by raising the accepted network latency. Prior to vSphere 5.0, the maximum supported latency for vMotion was 5 milliseconds, which prevented many organizations from enabling cross-site clustering. vSphere 5.0 increases the maximum supported latency to 10 milliseconds for environments with Enterprise Plus licensing -- a capability called "Metro vMotion." This still requires a fairly fast, low-latency network connection between hosts, but it opens the door for more customers to enable DRS between sites across longer distances.
3. vSphere 5 updates an often overlooked component of the platform -- VMFS
VMFS is a purpose-built clustered file system for virtual machines, and it has been around since the early days of VMware ESX. Many people take VMware VMFS (Virtual Machine File System) for granted, while more hardcore virtualization administrators have their share of complaints about it.
With vSphere 5.0, VMFS has once again undergone a series of changes. vSphere 5.0 introduces VMFS-5, an upgrade from the VMFS-3 used in vSphere 4.x and VI3. The first change to consider is the unified block size, which is now 1MB. VMFS-3 volumes could be formatted with 1MB, 2MB, 4MB, or 8MB blocks, and the block size chosen at format time capped the maximum size of a virtual machine disk (VMDK). With VMFS-5's unified 1MB block size, that cap no longer applies: administrators can create files well beyond the old 256GB limit of a 1MB-block VMFS-3 volume, up to 2TB. For years, VMware administrators had to juggle block sizes and the virtual disk limits that came with them; VMFS-5 does away with much of that.
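The relationship between block size and maximum file size can be tabulated. The figures below follow the commonly documented VMFS-3 limits (often quoted as slightly under these round numbers due to metadata overhead), so treat them as the well-known rule of thumb rather than byte-exact values.

```python
# Commonly cited VMFS-3 maximum file sizes per block size,
# contrasted with the single VMFS-5 limit.

KB, MB, GB, TB = 1024, 1024**2, 1024**3, 1024**4

vmfs3_max_file_size = {
    1 * MB: 256 * GB,
    2 * MB: 512 * GB,
    4 * MB: 1 * TB,
    8 * MB: 2 * TB,
}

# VMFS-5: one unified 1MB block size, files up to 2TB.
vmfs5_max_file_size = 2 * TB

for block, limit in sorted(vmfs3_max_file_size.items()):
    print(f"VMFS-3 {block // MB}MB block -> max file {limit // GB}GB")
```

The point of the table: on VMFS-3 you had to pick the 8MB block size up front if you ever wanted a 2TB disk, whereas VMFS-5 gives every datastore the same ceiling.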
Another important change relates to the sub-block allocation algorithm. VMFS-5 supports a smaller sub-block, now 8KB rather than the 64KB used in previous versions. Small files -- larger than 1KB but smaller than 8KB -- now consume only 8KB instead of 64KB, which reduces the amount of disk space stranded by very small files.
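The savings are easy to quantify with a simplified model in which a small file occupies exactly one sub-block on disk (real allocation involves additional metadata, so this is an approximation):

```python
# Space stranded by a small file under each sub-block size,
# assuming the file fits entirely in a single sub-block.

KB = 1024

def stranded_bytes(file_size, sub_block):
    """Space allocated for the file minus space actually used."""
    return sub_block - file_size

# A hypothetical 2KB file (think of a small VM metadata file):
vmfs3_waste = stranded_bytes(2 * KB, 64 * KB)   # 62KB stranded per file
vmfs5_waste = stranded_bytes(2 * KB, 8 * KB)    # only 6KB stranded
```

Multiply that difference across the thousands of small descriptor and log files on a busy datastore and the smaller sub-block adds up to real reclaimed capacity.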
One other important update in VMFS-5 is the increase in the maximum size of a single-extent volume. In previous versions of VMFS, the largest single extent was 2TB; with VMFS-5, that limit rises to 64TB.
VMware customers who remember upgrading from VMFS-2 to VMFS-3 may recall how complicated that process was; this time, VMware has made the upgrade path from VMFS-3 to VMFS-5 much simpler and more straightforward. Datastores can be upgraded in place. However, an upgraded datastore retains its original VMFS-3 block size, so if you have the luxury of doing so, it is still recommended to create a new VMFS-5 file system with the native 1MB block size and use Storage vMotion to move the virtual machines over to the new datastore.