Virtual Iron and XenSource offerings lack the power and polish of the virtualization leader, but they're gaining fast
It seems all roads lead to virtualization these days. From every conceivable angle, computing resources are being collapsed into abstraction layers that enable greater flexibility, and storage, application, server, and desktop virtualization vendors are riding the wave. The biggest push and most appealing opportunity is server virtualization, and the biggest and most appealing vendor is VMware. VMware isn't just the biggest player, however; it's also the most expensive option.
The market that Virtual Iron and XenSource are currently targeting is the low-to-middle end of the spectrum. Their offerings boast most of the features of VMware's flagship products without the hefty price tag. After working with both virtualization platforms for the past few weeks, I can report that these vendors are well on their way, but rough edges abound.
VMware's head start over the rest of the market is substantial. Leveraging nearly a decade of experience and development, VMware Infrastructure 3 has proven to be a very stable, high-performance platform, with wide OS and hardware support and a very clean and functional set of management tools in VirtualCenter (read my December 2006 review). Virtual Iron and XenSource are relative newcomers to the virtualization scene; both leverage the open source Xen hypervisor. Although Xen is the core of both, the Virtual Iron and XenSource products are very different in form and function, not to mention design.
I tested the two platforms on Dell PowerEdge 2950s with dual dual-core 3GHz Intel Xeon 5160 CPUs and 4GB of RAM, using a NetApp StoreVault S500 as the iSCSI back end and Cisco gigabit copper switches in the middle. The NICs in the machines were built-in Gigabit Ethernet, with the addition of another Intel NIC to provide the three-NIC layout required by Virtual Iron when using iSCSI. A low-cost multidialect filer, the NetApp StoreVault fits with the budget-conscious theme of both XenSource XenEnterprise and Virtual Iron Enterprise.
Virtual Iron Enterprise Edition
Virtual Iron's take on virtualization is different from that of nearly every other vendor. The basis of its enterprise product is a dedicated management server and a bevy of potentially diskless processing nodes. The installation of Virtual Iron 3.7.1 is very straightforward. After building a server with Red Hat Enterprise Linux 4, Suse Linux Enterprise Server 9 SP3, CentOS 4.4, or Windows Server 2003, installing Virtual Iron is as simple as double-clicking an icon.
There are certain prerequisites for the management server, however, most specifically that it have several network interfaces and that those interfaces be connected to specific network segments. The front-end interface is used to provide access to the management tools, and the back-end interface is used to boot servers that will be handling the virtual machines. This split design is quite elegant, and it provides a mechanism for extremely fast deployment of new host servers into the mix. Once the management server is built and running, adding new compute servers is generally as simple as turning them on and configuring them to PXE (Preboot Execution Environment) boot from the NIC connected to the back-end network. The downside of this approach is that you lose a NIC on each server, possibly requiring the installation of at least one more NIC in each server, and probably several more NICs if iSCSI storage will be used.
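The mechanics behind this are standard PXE: a DHCP server on the back-end segment answers new nodes and points them at a boot image served over TFTP. Virtual Iron's management server handles all of this itself, but as a rough sketch of what's happening under the hood, an ISC dhcpd configuration for such a segment might look like the following (the subnet, addresses, and filename are purely illustrative):

```conf
# Illustrative only -- Virtual Iron's management server provides the
# equivalent service on the back-end network automatically.
subnet 192.168.100.0 netmask 255.255.255.0 {
  range 192.168.100.50 192.168.100.200;   # addresses handed to compute nodes
  next-server 192.168.100.1;              # TFTP server (the management server)
  filename "pxelinux.0";                  # network boot loader to fetch
}
```

Because the node's entire runtime image arrives over this network at boot, the back-end NIC stays dedicated to the management plane, which is why extra NICs are needed for guest and iSCSI traffic.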
Virtual Iron does offer a Single Server Edition of the platform that does away with the management network altogether, but this edition lacks the extended capabilities of the enterprise version; for example, live VM migration and load balancing aren't supported.
Virtual Iron is also relatively picky about hardware support. If you're not running the newer Intel VT or AMD-V CPUs in your server, you're out of luck. This restriction will have an impact on smaller infrastructures hoping to leverage slightly older hardware in a virtualization design. On the other hand, by requiring the newer virtualization extensions at the CPU level, Virtual Iron can leverage those performance enhancements to provide a better overall experience.
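On Linux, a quick way to see whether a server has these extensions is to look for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. This is a generic check, not anything Virtual Iron-specific:

```shell
# Look for hardware virtualization flags; prints a message either way.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU reports Intel VT (vmx) or AMD-V (svm) support"
else
    echo "No hardware virtualization extensions reported"
fi
```

Note that the flag can appear even when the feature has been disabled in the BIOS, so the corresponding BIOS setting must be enabled as well.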
In the lab, I tested Virtual Iron 3.7.1 by installing the management server on an older Dell PowerEdge 2800 with two 3GHz Intel EM64T CPUs, 4GB of RAM, and a four-spindle RAID 5 array running CentOS 4.4. These specs are far above the minimum requirements for the management server; all you really need is a single CPU with 1GB of RAM and 30GB of available disk. The Java-based installer required very little interaction other than a license file, and the server was ready to go.
The administration console for Virtual Iron, dubbed the Virtualization Manager, is Java-based and accessed via browser. I ran the app on Windows, Linux, and Mac OS X without hassle, which is a definite leg up over VMware's Windows-centric management tools.
Virtualization Manager is relatively well laid out, and navigation and configuration are simple. The basis of the application is that every action or set of actions must be accompanied by clicking Commit. This is both a blessing and a curse. It's easy to step through several configuration options and wonder why nothing is happening, until you remember that the actions have yet to be committed. On the plus side, it's harder to make inadvertent configuration errors because changes don't happen immediately.
I built several Linux and Windows VMs, both 32- and 64-bit, and found the experience straightforward and easy. I had problems booting from ISO images in many cases, though direct CD and PXE installations were no problem. Both Windows and Linux guests are supported with full hardware virtualization, unlike Xen's paravirtualized Linux guests, and the OS support is also broader than Xen's. On the other hand, the VS Tools drivers that can be installed in the guest OS only support certain distributions, and even then, only specific kernels on specific distributions. Thus, kernel upgrades to Linux VMs may result in an inoperable VS Tools installation. VMware overcomes this problem by compiling kernel modules within the guest as needed. Virtual Iron offers new VS Tools packages for specific kernels on its Web site; it also offers the VS Tools source code for customers needing to perform manual compilations, but this process isn't necessarily straightforward.
Configuring both local storage and SAN disk is much improved over previous releases. Volumes can now be named, whereas they were previously designated only by long random ID strings. The iSCSI implementation is also well done. Once a designated iSCSI network has been defined, specific ports on the physical servers can be mapped to that network and an iSCSI target configured. Upon rebooting the physical servers, LUNs (logical unit numbers) on the iSCSI SAN are mapped, allowing logical disks to then be created on the LUN. Further, iSCSI passthrough is supported, giving VMs direct access to iSCSI LUNs.
I did run into some significant problems with Virtual Iron. It seems that Virtual Iron's management framework uses the MAC (media access control) address of the management interface on each server as a unique identifier. When I swapped out interfaces on one of the servers, I suddenly had three orphaned VMs and duplicate entries for their physical host. After discussing the problem with Virtual Iron, it became clear that the easiest way to fix the problem was to reinstall the management server. It's possible to manually alter the management server's database to solve this problem, but it's far simpler to reinstall, rediscover the hardware resources, recreate the VMs, and remap the disk volumes. Taking this simpler route, I was able to retrieve all the orphaned VMs, although all disk identifiers were wiped out, leaving me guessing which disk volume belonged to which VM.
During this adventure, the management interface exhibited some very odd behavior, even locking up a few times. All in all, this experience proved to be a mixed bag: It's disconcerting that it happened at all, but it was corrected without the loss of any VMs.
Virtual Iron Enterprise contains enterprise-level features such as VM snapshots, LiveMigrations, and LiveCapacity. LiveMigrations are the Virtual Iron equivalent of VMware's VMotion, where a VM is moved between physical hosts without requiring a reboot. LiveCapacity corresponds to VMware's Distributed Resource Scheduler, allowing the management server to make decisions on VM placement on a server farm to compensate for unbalanced loads. In practice, all of these functions worked nicely: LiveMigrations occurred quickly and without interrupting processes on the VM, and LiveCapacity adequately shuffled VMs around. In addition, there's nascent support for IPMI (Intelligent Platform Management Interface), providing some offline server maintenance capabilities.
On the monitoring end of things, Virtualization Manager has performance graphing and reporting features, gathering individual VM performance metrics though the use of the VS Tools packages installed on the VMs themselves. The graphs are presented in real time, and they can be laid out in a grid and digested at a glance. The reporting tools pop out HTML pages with the requested information. Both the graphs and reports look good, but they need some work to be truly useful, such as greater trend analysis and more output formats.
Virtual Iron Enterprise 3.7.1 is a very capable, cross-platform virtualization solution. The VM support is limited when compared to VMware, but all virtualization packages are limited when compared to VMware. The cost structure of Virtual Iron is very compelling, and the company is building rapidly on a solid foundation. The extraordinary rate at which Virtual Iron releases updates and new features indicates a true desire to deliver a viable alternative to VMware at a significantly reduced cost.
XenSource XenEnterprise 3.2
XenSource is tasked with all things Xen -- from working with the open source community to building commercial offerings on the open source hypervisor. XenEnterprise 3.2 is the latest iteration of XenSource's high-end offering, and like Virtual Iron Enterprise, it's come a long way in a short time.
Unlike Virtual Iron Enterprise, XenEnterprise servers are built one by one. No management network is required, and each server carries a XenEnterprise installation on local disk. Each server exists as an autonomous system, with no communication required or even possible between nodes. Although XenEnterprise does technically support iSCSI SANs, that support is very immature, requiring significant back-end configuration. More important, no form of shared, centralized storage is supported. Thus, features such as VM migrations, automated capacity adjustments, and even centralized management aren't possible. XenSource is certainly addressing these issues; the company plans to release a more robust version of XenEnterprise sometime in 2007.
The version that I've been working with performed well in the lab, given the noted limitations, and I found it to be a capable solution. It's a good choice for a small virtualization project.
Installation is very straightforward, proceeding much like any Red Hat-based Linux install. During installation the detected disk and network devices are configured and prepared, with the bulk of the local storage reserved for virtual servers. The secondary CD containing support for various Linux distributions can be installed during this initial build or done manually after the base installation. Once installed, the server boots to a text console log-in screen with the server's configured IP addresses listed for reference.
After the server is built, the management tools must be installed on a separate workstation. These tools are Java-based, available for Windows and Linux. Installation on a Windows XP system and a Fedora Core 6 x86_64 workstation proved simple, as was connecting to the newly built XenEnterprise host. When firing up the management tool for the first time, the admin is prompted to enter a master password. In this fashion, the same management console can be used to control multiple XenEnterprise servers without requiring separate authentication each time a different server is accessed.
The management application is well laid out, with a top pane showing the server itself and all VMs running on that server. Each of these entries is accompanied by resource utilization information that gives at-a-glance performance monitoring of each VM and the host server, which is a nice touch. All VM configuration and server configuration occurs in the bottom pane, which is also where the VM consoles are accessed, although they can be popped out of the main window into windows of their own.
Building VMs on XenEnterprise is simple but requires specific OS templates be present on the server itself for non-Windows VMs. When the Linux pack is installed, templates are presented for most major distributions from Red Hat and Suse, as well as for Debian Sarge. These templates are necessary because XenEnterprise relies on paravirtualization to run these VMs: They don't truly run in their own emulated server space. Windows guests are handled differently: It's possible to boot a Windows VM from a Windows Server 2003 install CD and build the VM from scratch.
Tripped up again
My first VM installation on XenEnterprise flushed out a few problems. I initially configured a new Red Hat Enterprise Linux 4 Update 4 VM with 1GB of RAM and an 8GB disk. Once I started the new VM and linked to the console, XenEnterprise's customized Red Hat Enterprise Linux 4 installer was already running. I ran through the familiar installer, opting to do the installation via NFS. When configuring the NFS mount to find the required installation packages, I inadvertently mapped to an NFS directory containing the x86_64 version of Red Hat Enterprise Linux 4 Update 4, not the i386 version. Rather than throwing an error, the management application and the server itself locked up tight, requiring a reboot. After that, I was able to build Red Hat Enterprise Linux 4 Update 4 and Windows Server 2003 VMs with no issues, as long as I was sure not to use 64-bit versions.
XenEnterprise doesn't claim to support 64-bit VMs, so the fact that they didn't run on the server wasn't a surprise. But the server locking up certainly was -- a warning dialog here is really mandatory.
Once I had several VMs running on the server, I brought the iSCSI SAN into the fray. I quickly discovered that there's no way to do this via the management application, as all disk management occurs at the command line. I'm no stranger to the open-iSCSI toolset, so I quickly configured the server to map a LUN from the NetApp StoreVault and presented it to the OS as a new device. That's when things got a little interesting. After some research on XenSource's forums, I found the back-end commands required to present that volume to the Xen service and rebooted the server. Much to my surprise, my local disk store was replaced with the new disk store, leaving my VMs without any disk. However, I could create new VMs with their virtual disks residing on the SAN array. Suffice it to say that this basic iSCSI support will be fine for those well versed in Linux and iSCSI, but largely insurmountable for those without this experience.
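For those comfortable at the command line, the open-iscsi steps involved follow a discover-then-log-in pattern. A hedged sketch follows; the portal address and target IQN are placeholders, and the additional Xen-specific commands needed to register the disk as a storage repository are left out, since they varied by release:

```shell
# Discover iSCSI targets advertised by the filer (portal IP is illustrative)
iscsiadm -m discovery -t sendtargets -p 192.168.20.10

# Log in to a discovered target (the IQN here is a placeholder)
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.example -p 192.168.20.10 --login

# The LUN now appears as a new block device; confirm with:
cat /proc/partitions
```

These commands require root privileges and a reachable iSCSI target, which is exactly the sort of prerequisite knowledge that puts XenEnterprise's SAN support out of reach for less experienced admins.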
Sticking with local disk, I ran performance tests on the VMs running under XenEnterprise. I found that the paravirtualized Linux VMs ran remarkably well and didn't buckle under extreme stress. The Windows servers didn't perform quite as well, but they were certainly responsive and capable of supporting a reasonably heavy workload.
The Linux VMs do not require separate management tools, as with VMware or Virtual Iron, but the Windows VMs do, since they're not paravirtualized. These tools are installed much like VMware Tools, via an ISO image presented to the VM as a CD-ROM drive. They provide a few new drivers and some host-guest communications.
The performance monitoring in XenEnterprise is presented in the management app window with graphs representing the host server's workload as well as the workloads of individual VMs, but it lacks granularity. You can definitely get a good feel for when a host or VM is working too hard, and track some trends, but that's about it. Also, I occasionally lost keyboard access to the VM consoles, a problem that could be rectified by popping the console window out of the main app window and back again a few times. As in Virtual Iron, console access is based loosely on VNC (virtual network computing), though I have to say that the mouse tracking with Windows VMs in XenEnterprise was better than in Virtual Iron.
Two on the cheap
XenEnterprise and Virtual Iron Enterprise have a long way to go to provide the same level of stability, features, and performance found in VMware Infrastructure, but VMware's tail lights are in sight. I found myself liking both of these Xen-based packages, and I could certainly see myself building out a virtualized environment on either platform. However, I couldn't see that being a possibility for someone without a solid Linux background, especially with XenSource.
Virtual Iron is clearly out in front of XenSource, thanks to support for physical server farming, VM migrations, load balancing, and easily managed iSCSI and Fibre Channel SAN connectivity. Nevertheless, if XenSource makes good on its promises, XenEnterprise will have these features ready later this year.
I'm left with the feeling that VMware had better not rest on its laurels. These two products are on their way to providing truly enterprise-grade virtualization foundations for a mere fraction of VMware's licensing fees.
Ease of use (25.0%)
Overall Score (100%)
|Virtual Iron Enterprise Edition 3.7.1||7.0||7.0||8.0||8.0||9.0|
|XenSource XenEnterprise 3.2||7.0||6.0||8.0||7.0||7.0|