VMware upgrade reaches for the clouds
There are more than 150 cmdlets available, and while their syntax is just as terse as that of Microsoft's own cmdlets, some of them are also very powerful. We could provision massive numbers of VMs with a simple PowerShell script, and tear them down just as easily. Access to the PowerCLI cmdlets is controlled strictly via Microsoft's Active Directory services, which need to be locked down prior to deployment: a compromised administrative logon could wreak havoc.
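As an illustration, bulk provisioning might look something like the sketch below. The vCenter host, template, and VM naming scheme are hypothetical examples of ours, but Connect-VIServer, New-VM, Get-VM, and Remove-VM are standard PowerCLI cmdlets.

    # Connect to the vCenter server (host name is an assumption)
    Connect-VIServer -Server vcenter.lab.local

    # Provision 50 VMs from an existing template (template and host names are assumptions)
    1..50 | ForEach-Object {
        New-VM -Name ("testvm{0:D2}" -f $_) -Template "w2k8-template" -VMHost "esx01.lab.local"
    }

    # Tear them all down just as easily
    Get-VM -Name "testvm*" | Remove-VM -DeletePermanently -Confirm:$false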
Obtaining this functionality for existing VMware ESX servers via upgrade was mindlessly simple in our lab environment. We used an NFS share to load the upgrade components -- a CD or DVD of the upgrade components can't be used.
One note of warning: We found that the upgrade can disturb bootloaders (grub in our case) if your ESX 3.5 setup isn't configured the "expected" way, so caution is needed here. The vSphere installation routine upgrades all VMware Tools after installation, too. These tools are VM guest-dependent, and contain administratively optional 'hooks' that improve management.
Once vSphere was alive, we tried vApp, a resource that groups VMs into aggregations that can be controlled as a single object: started, powered down, and otherwise managed as one unit. We're reminded of the big red switch that turns on stadium lights.
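In PowerCLI terms, that big red switch might look something like this sketch; the vApp and VM names are hypothetical examples of ours, while New-VApp, Move-VM, and Start-VApp are standard cmdlets.

    # Create a vApp on a host to aggregate related VMs (names are assumptions)
    $vapp = New-VApp -Name "web-tier" -Location (Get-VMHost "esx01.lab.local")

    # Move existing VMs into the vApp
    Get-VM -Name "web01","web02","web03" | Move-VM -Destination $vapp

    # One command now powers on the entire aggregate; Stop-VApp reverses it
    Start-VApp -VApp $vapp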
Better still is Datastore Migration, which allows a running VM's connected storage to be virtualized and moved to a different logical drive on the fly. The drive might be local to the server, or it could be changed to another drive via iSCSI, an NFS share, or Fibre Channel at will.
This further abstracts storage from a virtualized operating system/application instance in a way that makes far better utilization of SAN resources -- by trapping and redirecting storage. If you're not prepared for what your applications might do, of course, it could be a disaster when files, folders, or locks go missing, so whether to use this feature depends on what a particular VM is doing at the time. Nonetheless, we liked how VM instances stayed available through this ad hoc disk change capability.
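From PowerCLI, such a live storage move might look like the sketch below; the VM and datastore names are assumptions, but Move-VM with the -Datastore parameter is the standard cmdlet for relocating a VM's storage, and it works while the VM keeps running (license permitting).

    # Relocate a running VM's disks to a different datastore (names are assumptions)
    Get-VM -Name "web01" | Move-VM -Datastore (Get-Datastore "iscsi-lun02")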
vSphere allows up to eight virtual CPUs (vCPUs) for any specific VM instance, double the number permitted in the prior version. With this addition comes the capacity to add or subtract a VM instance's allocated memory or vCPUs on the fly, depending upon the guest operating system in the VM.
The implications of hardware resource additions and changes are interesting, especially in test platforms, where 'tinkering' with these settings can reveal performance optimization points as resources vary for a virtualized operating system and its hosted application.
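A quick sketch of that tinkering via PowerCLI follows; the VM name and resource sizes are hypothetical, and hot-add must be enabled on the VM and supported by its guest OS, but Set-VM with -NumCpu and -MemoryMB is the standard cmdlet.

    # Hot-add vCPUs and memory to a running VM (name and sizes are assumptions;
    # the guest OS must support hot-add, and it must be enabled on the VM)
    Set-VM -VM (Get-VM "web01") -NumCpu 4 -MemoryMB 8192 -Confirm:$false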
The new vSphere ESX 4.0 hypervisor performed about the same as its predecessor when we ran three virtual machine guests on a single virtual CPU, but improved when we gave the VM guests more vCPUs. We tested vSphere's ESX 4 against the older ESX 3.5 and found across-the-board improvements. We also upgraded the virtual hardware drivers to see their impact; the results are in our table below.
We tested vSphere 4.0 on the same HP DL580 G5 server that we've used in the past in order to compare performance numbers between vSphere and its latest competitors, including speed demon Citrix XenServer. We tested with SPEC's SPECjbb2005 benchmark, using the exact same Windows Server 2008 Enterprise (R1) and Novell SUSE Linux 10.2 installations that we've used in other tests.
Overall, VMware's vSphere is keeping up with the competition, but the performance numbers weren't dramatic, just very good.