Deep dive into VMware's virtual infrastructure
VI3 swims through our server consolidation test, demonstrating some amazing capabilities and a few quirks
The DRS and VMotion combo is the key to a healthy and scalable VMware installation. There are some caveats, however. VI3 is very sensitive to host CPU differences and will block VMotion migrations unless the processors are nearly identical. This prevents running applications that use certain CPU extensions from crashing, and possibly corrupting data, when they are migrated to a CPU without those extensions. Thus, mixing dual-core and single-core Opteron CPUs among the VI3 hosts in a cluster is guaranteed to be problematic, and even a cluster with different revisions of Intel EM64T CPUs might not pass muster. Migrating offline VMs between disparate host processor types works, because the VM properly detects the CPU type at its next boot.
To put DRS through its paces, we installed a PHP/MySQL application on our two new VMs, one a dedicated Web server, the other a dedicated MySQL server. The application was built to shift load unpredictably between the two servers when hit with a large number of Web requests: the front end served static pages for the majority of requests and dynamic pages with heavy database calls for a small fraction, so the load bounced back and forth between the two VMs.
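The exact code isn’t the point, but a minimal sketch shows the idea; the host name, credentials, table, and request split below are illustrative assumptions, not the application we actually ran:

    <?php
    // Hypothetical load-test front end: most hits return a cheap static page
    // from the Web VM, while a small fraction run an expensive query against
    // the MySQL VM, so the two hosts' loads rise and fall independently.
    // Host name, credentials, and query are illustrative only.
    if (mt_rand(1, 100) <= 90) {
        readfile('static/index.html');   // ~90% of requests: static content, Web VM only
    } else {
        // ~10% of requests: heavy aggregate query pushes load onto the MySQL VM
        $db = mysqli_connect('mysql-vm.fergenschmeir.local', 'loadtest', 'secret', 'testdb');
        $result = mysqli_query($db, 'SELECT category, COUNT(*) AS hits FROM requests GROUP BY category');
        while ($row = mysqli_fetch_assoc($result)) {
            printf("%s: %d<br>\n", $row['category'], $row['hits']);
        }
        mysqli_close($db);
    }
    ?>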
With both of these VMs running on a single VI3 host, the load generator was fired up and pointed at the Web server. Within a minute or so, the load on both servers grew, and DRS noticed. As soon as DRS determined that the MySQL server had the highest resource requirements, it automatically moved that VM (by triggering VMotion) to another VI3 host, and the performance of both VMs improved. When we later generated high loads across a larger number of VMs, DRS again moved several VMs around seamlessly to distribute the load evenly among all available VI3 hosts. In fact, DRS responded to the heaviest load across all Windows and Linux VMs by migrating several VMs in the space of two minutes, and the Web and file servers on the migrated VMs didn’t miss a beat. Slick.
DRS can be configured in two ways: automatic and manual. Manual DRS skips the automatic VMotion step; instead, it informs admins that changes should be made and recommends the specific migrations to perform, but stops short of triggering the moves itself.
Failure? What Failure?
At one point during our test, after the full set of servers had been migrated to VMs and the blades were positively humming, a VMware engineer nonchalantly walked over to the rack and pulled a VI3 blade out of the chassis. VirtualCenter took a few seconds to register that the rug had been pulled out from under the host, then quickly made some changes. All the VMs that had been running on the “failed” blade suddenly appeared under other VI3 hosts and began booting. Within a minute or two, those VMs were up and available. Obviously, the same problems that accompany any unexpected server shutdown, such as file system corruption, could result, but the downtime was limited to only a few minutes. And although the surviving VI3 hosts now had a much heavier load to handle, when the blade was reseated and booted, DRS obligingly spread out the load again, this time with no reboots required, thanks to VMotion.
This is VMware’s High Availability in action. Licensed separately, HA can be deployed only in a cluster of VI3 hosts, subject to the same shared storage rules as VMotion. Further, HA is heavily dependent on DNS, which can prove to be an Achilles’ heel: if the VI3 hosts cannot contact one another by DNS name, they cannot engage in HA actions. If the DNS servers are VMs that were running on the failed hosts, for instance, you’re out of luck. VMware has a few recommendations for avoiding this problem, including running DNS servers on physical servers outside the VMware realm, which is rather ridiculous, considering DNS servers are prime candidates for virtualization (their workload is generally low, but the need for availability is quite high). Another option is manually configured hosts files on the VI3 servers themselves. Hopefully a more elegant solution to this Catch-22 will be forthcoming from VMware.
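For the hosts-file workaround, the idea is simply to give every VI3 host a static name-to-address mapping for each of its peers, so HA can still resolve them when DNS is unreachable. A minimal sketch, with made-up addresses and host names, would be entries like these in /etc/hosts on each ESX server:

    # Illustrative entries only -- substitute your own ESX host names and addresses
    192.168.10.11   esx01.fergenschmeir.local   esx01
    192.168.10.12   esx02.fergenschmeir.local   esx02
    192.168.10.13   esx03.fergenschmeir.local   esx03

The obvious downside is that these entries must be kept in sync by hand whenever a host is added or readdressed.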
State of the Virtualization Art
All told, VMware Day at Fergenschmeir was a raging success. Ultimately, VI3 gave us everything we needed to move forward with Fergenschmeir’s virtualization strategy. A few structural issues and missing pieces required planning to work around, however.
First, no drivers are available for 10-gigabit network interfaces, which can be quite limiting, especially when deploying a large virtualization infrastructure. Network admins would much rather leverage a single 10-gig port on redundant switches per blade chassis or eight-way VI3 server than wrangle a dozen or more network cables and ports per chassis. Implementing 10-gig on a blade chassis or high-capacity server would make it far easier to handle failover and support high-bandwidth applications, all while simplifying cabling and management.
Also on the networking front, VMware itself seems to be slightly in the dark as to how true load-balancing and failover NIC configurations should be handled. VI3 offers simple transmit load-balancing within the network configuration of each host but provides no clear-cut way to enable fully redundant NIC teaming. In fact, VMware engineers at our test site seemed to be at odds about this.
Another oddity is that VirtualCenter doesn’t handle management of VMware Server, which means an environment running both VI3 and VMware Server requires multiple points of administration, a clumsy arrangement. VMware expects to add this support in the future.
VMware is not alone in the x86 virtualization business. The many vendors offering enterprise virtualization platforms include Virtual Iron, XenSource, and Microsoft, which is developing a VMware ESX-like hypervisor and VM management framework that will supersede Virtual Server. Another is SWsoft, whose low-overhead Virtuozzo excels at host-based virtualization and management. (See our review of Virtuozzo 3.0, and our beta preview of Virtual Iron 3.1.)
VMware certainly has the jump on the competition, as well as the lion’s share of the market at the moment, and the array of features and performance available in VI3 shows why. VMware Infrastructure 3 is clearly the best hardware-emulation platform available today, but the market is changing quickly, the competition is heating up, and VMware will have to keep hustling to maintain its lead.
InfoWorld Scorecard | Value (10.0%) | Usability (25.0%) | Manageability (25.0%) | Setup (15.0%) | Scalability (25.0%) | Overall Score (100%)
---|---|---|---|---|---|---
VMware Infrastructure 3 | 9.0 | 9.0 | 8.0 | 9.0 | 9.0 |
Copyright © 2006 IDG Communications, Inc.