Xen masters take aim at VMware

Virtual Iron and XenSource offerings lack the power and polish of the virtualization leader, but they're gaining fast

It seems all roads lead to virtualization these days. From every conceivable angle, computing resources are being collapsed into abstraction layers that enable greater flexibility, and storage, application, server, and desktop virtualization vendors are riding the wave. The biggest push and most appealing opportunity is server virtualization, and the biggest and most appealing vendor is VMware. VMware isn't just the biggest player, however; it's also the most expensive option.

The market that Virtual Iron and XenSource are currently targeting is the low-to-middle end of the spectrum. Their offerings boast most of the features of VMware's flagship products without the hefty price tag. After working with both virtualization platforms for the past few weeks, I can report that these vendors are well on their way, but rough edges abound.

VMware's head start over the rest of the market is substantial. Leveraging nearly a decade of experience and development, VMware Infrastructure 3 has proven to be a very stable, high-performance platform, with wide OS and hardware support and a very clean and functional set of management tools in VirtualCenter (read my December 2006 review). Virtual Iron and XenSource are relative newcomers to the virtualization scene, both building on the open source Xen hypervisor. Although Xen is the core of both, the Virtual Iron and XenSource products are very different in form, function, and design.

I tested the two platforms on Dell PowerEdge 2950s with dual dual-core 3GHz Intel Xeon 5160 CPUs and 4GB of RAM, using a NetApp StoreVault S500 as the iSCSI back end and Cisco gigabit copper switches in the middle. The NICs in the machines were built-in Gigabit Ethernet, with the addition of another Intel NIC to provide the three-NIC layout required by Virtual Iron when using iSCSI. A low-cost multiprotocol filer, the NetApp StoreVault fits with the budget-conscious theme of both XenSource XenEnterprise and Virtual Iron Enterprise.

Virtual Iron Enterprise Edition
Virtual Iron's take on virtualization is different from nearly every other vendor's. The basis of its enterprise product is a dedicated management server and a bevy of potentially diskless processing nodes. The installation of Virtual Iron 3.7.1 is very straightforward. After building a server with Red Hat Enterprise Linux 4, Suse Linux Enterprise Server 9 SP3, CentOS 4.4, or Windows Server 2003, installing Virtual Iron is as simple as double-clicking an icon.


There are certain prerequisites for the management server, however, most notably that it have several network interfaces and that those interfaces be connected to specific network segments. The front-end interface provides access to the management tools, and the back-end interface is used to boot the servers that will host the virtual machines. This split design is quite elegant, and it allows extremely fast deployment of new host servers into the mix. Once the management server is built and running, adding new compute servers is generally as simple as turning them on and configuring them to PXE (Preboot Execution Environment) boot from the NIC connected to the back-end network. The downside of this approach is that you lose a NIC on each server, which may mean adding at least one more per host, and probably several more if iSCSI storage will be used.
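For those unfamiliar with the mechanics, PXE booting hinges on a DHCP server that points new machines at a TFTP server and a boot image. Virtual Iron's management server handles this plumbing itself on the back-end network, but a minimal ISC dhcpd stanza illustrates the general idea; the addresses and boot file name here are generic placeholders, not Virtual Iron's actual configuration:

    subnet 10.0.0.0 netmask 255.255.255.0 {
        range 10.0.0.100 10.0.0.200;   # address pool for new compute nodes
        next-server 10.0.0.1;          # TFTP server holding the boot image
        filename "pxelinux.0";         # network boot loader sent to each node
    }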

Virtual Iron does offer a Single Server Edition of the platform that does away with the management network altogether, but this edition lacks the extended capabilities of the enterprise version; for example, live VM migration and load balancing aren't supported.


Virtual Iron is also relatively picky about hardware support. If you're not running the newer Intel VT or AMD-V CPUs in your server, you're out of luck. This restriction will have an impact on smaller infrastructures hoping to leverage slightly older hardware in a virtualization design. On the other hand, by requiring the newer virtualization extensions at the CPU level, Virtual Iron can leverage those performance enhancements to provide a better overall experience.

In the lab, I tested Virtual Iron 3.7.1 by installing the management server on an older Dell PowerEdge 2800 running CentOS 4.4, with two 3GHz Intel EM64T CPUs, 4GB of RAM, and a four-spindle RAID 5 array. These specs are far above the minimum requirements for the management server; all you really need is a single CPU, 1GB of RAM, and 30GB of available disk. The Java-based installer required very little interaction beyond supplying a license file, and the server was ready to go.

The administration console for Virtual Iron, dubbed the Virtualization Manager, is Java-based and accessed via browser. I ran the app on Windows, Linux, and Mac OS X without hassle, which is a definite leg up over VMware's Windows-centric management tools.

Virtualization Manager is relatively well laid out, and navigation and configuration are simple. The application is built around a commit model: No action or set of actions takes effect until you click Commit. This is both a blessing and a curse. It's easy to step through several configuration options and wonder why nothing is happening, until you remember that the actions have yet to be committed. On the plus side, it's harder to make inadvertent configuration errors because changes don't happen immediately.

I built several Linux and Windows VMs, both 32- and 64-bit, and found the experience straightforward and easy. I had problems booting from ISO images in many cases, though direct CD and PXE installations were no problem. Both Windows and Linux guests are supported with full hardware virtualization, unlike Xen's paravirtualized Linux guests, and the OS support is also broader than Xen's. On the other hand, the VS Tools drivers that can be installed in the guest OS support only certain distributions, and even then, only specific kernels on those distributions. Thus, a kernel upgrade on a Linux VM may leave the VS Tools installation inoperable. VMware overcomes this problem by compiling kernel modules within the guest as needed. Virtual Iron offers new VS Tools packages for specific kernels on its Web site; it also offers the VS Tools source code for customers needing to perform manual compilations, but this process isn't necessarily straightforward.

Configuring both local storage and SAN disk is much improved over previous releases. Volumes can now be named, whereas they were previously designated only by long random ID strings. The iSCSI implementation is also well done. Once a designated iSCSI network has been defined, specific ports on the physical servers can be mapped to that network and an iSCSI target configured. Upon rebooting the physical servers, LUNs (logical unit numbers) on the iSCSI SAN are mapped, allowing logical disks to then be created on the LUN. Further, iSCSI passthrough is supported, giving VMs direct access to iSCSI LUNs.
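Virtual Iron drives all of this through the Virtualization Manager rather than the command line, but the steps mirror what an admin would do by hand with the standard Linux open-iscsi tools. As a rough sketch of the underlying mechanics (the portal address is a placeholder):

    # discover the targets offered by the SAN portal
    iscsiadm -m discovery -t sendtargets -p 192.168.2.50
    # log in to the discovered targets so their LUNs appear as local disks
    iscsiadm -m node --login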


Tripped up
I did run into some significant problems with Virtual Iron. It seems that Virtual Iron's management framework uses the MAC (media access control) address of the management interface on each server as a unique identifier. When I swapped out interfaces on one of the servers, I suddenly had three orphaned VMs and duplicate entries for their physical host. After I discussed the problem with Virtual Iron, it became clear that the easiest way to fix it was to reinstall the management server. It's possible to manually alter the management server's database instead, but it's far simpler to reinstall, rediscover the hardware resources, recreate the VMs, and remap the disk volumes. Taking this simpler route, I was able to retrieve all the orphaned VMs, although all disk identifiers were wiped out, leaving me guessing which disk volume belonged to which VM.

During this adventure, the management interface exhibited some very odd behavior, even locking up a few times. All in all, this experience proved to be a mixed bag: It's disconcerting that it happened at all, but it was corrected without the loss of any VMs.

Virtual Iron Enterprise includes high-end features such as VM snapshots, LiveMigrations, and LiveCapacity. LiveMigrations are the Virtual Iron equivalent of VMware's VMotion, moving a VM between physical hosts without requiring a reboot. LiveCapacity corresponds to VMware's Distributed Resource Scheduler, allowing the management server to make VM placement decisions across the server farm to compensate for unbalanced loads. In practice, all of these functions worked nicely: LiveMigrations occurred quickly and without interrupting processes on the VM, and LiveCapacity adequately shuffled VMs around. In addition, there's nascent support for IPMI (Intelligent Platform Management Interface), providing some offline server maintenance capabilities.

On the monitoring end of things, Virtualization Manager has performance graphing and reporting features, gathering individual VM performance metrics through the use of the VS Tools packages installed on the VMs themselves. The graphs are presented in real time, and they can be laid out in a grid and digested at a glance. The reporting tools pop out HTML pages with the requested information. Both the graphs and reports look good, but to be truly useful they need some work, such as greater trend analysis and more output formats.

Virtual Iron Enterprise 3.7.1 is a very capable, cross-platform virtualization solution. The VM support is limited when compared to VMware, but all virtualization packages are limited when compared to VMware. The cost structure of Virtual Iron is very compelling, and the company is building rapidly on a solid foundation. The extraordinary rate at which Virtual Iron releases updates and new features indicates a true desire to deliver a viable alternative to VMware at a significantly reduced cost.

XenSource XenEnterprise
XenSource is tasked with all things Xen -- from working with the open source community to building commercial offerings on the open source hypervisor. XenEnterprise 3.2 is the latest iteration of XenSource's high-end offering, and like Virtual Iron Enterprise, it's come a long way in a short time.

Unlike Virtual Iron Enterprise, XenEnterprise servers are built one by one. No management network is required, and each server carries a XenEnterprise installation on local disk. Each server exists as an autonomous system, with no communication required or even possible between nodes. Although XenEnterprise does technically support iSCSI SANs, that support is very immature, requiring significant back-end configuration. More important, no form of shared, centralized storage is supported. Thus, features such as VM migrations, automated capacity adjustments, and even centralized management aren't possible. XenSource is certainly addressing these issues; the company plans to release a more robust version of XenEnterprise sometime in 2007.

The version that I've been working with performed well in the lab, given the noted limitations, and I found it to be a capable solution. It's a good choice for a small virtualization project.

Installation is very straightforward, proceeding much like any Red Hat-based Linux install. During installation, the detected disk and network devices are configured and prepared, with the bulk of the local storage reserved for virtual servers. The secondary CD containing support for various Linux distributions can be installed during this initial build or added manually after the base installation. Once installed, the server boots to a text console log-in screen with the server's configured IP addresses listed for reference.

After the server is built, the management tools must be installed on a separate workstation. These tools are Java-based and available for Windows and Linux. Installation on a Windows XP system and a Fedora Core 6 x86_64 workstation proved simple, as did connecting to the newly built XenEnterprise host. When firing up the management tool for the first time, the admin is prompted to enter a master password. In this fashion, the same management console can be used to control multiple XenEnterprise servers without requiring separate authentication each time a different server is accessed.

The management application is well laid out, with a top pane showing the server itself and all VMs running on that server. Each of these entries is accompanied by resource utilization information that gives at-a-glance performance monitoring of each VM and the host server, which is a nice touch. All VM configuration and server configuration occurs in the bottom pane, which is also where the VM consoles are accessed, although they can be popped out of the main window into windows of their own.


Building VMs on XenEnterprise is simple, but for non-Windows VMs it requires that specific OS templates be present on the server itself. When the Linux pack is installed, templates are presented for most major distributions from Red Hat and Suse, as well as for Debian Sarge. These templates are necessary because XenEnterprise relies on paravirtualization to run these VMs: They don't truly run in their own emulated server space. Windows guests are handled differently: It's possible to boot a Windows VM from a Windows Server 2003 install CD and build the VM from scratch.
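XenEnterprise also ships a command-line interface, xe, alongside the graphical console. The exact syntax has shifted between releases, so treat the following as an illustration based on later XenSource releases rather than a verbatim 3.2 command; the template and VM names are placeholders:

    # create a new VM from an installed paravirtualization template
    xe vm-install template="Debian Sarge 3.1" new-name-label=test-vm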
