Xen has had a relatively rough road since it began as a research project at the University of Cambridge. Early releases of the open source virtualization package were quite buggy, yet highly touted by major players in the Linux field, which has led many to view the project skeptically.
Initial packaging of Xen into the Fedora Core 4 and 5 releases didn’t help matters when it became clear that it was at best difficult to run and at worst simply broken right out of the box. Later releases have made significant usability and functional improvements, and the next release will officially include support for Windows guests, but Xen still lacks the comprehensive management framework offered by VMware. Make no mistake: Xen works, but it’s still in its infancy as an enterprise virtualization solution.
Demonstrating Xen’s enterprise potential is Virtual Iron 3.1, which, like XenEnterprise and Enomalism, seeks to leverage the open source model to provide a viable alternative to VMware at significant cost savings.
Before turning to Xen, Virtual Iron had spent two years developing a homegrown hypervisor technology aimed not at consolidating many virtual servers onto a single physical server, but at allowing a single virtual server to run across multiple physical servers. Although this was certainly a worthwhile concept, the pace of processor development and the progress of clustering technologies were beginning to render the approach outdated before it even matured.
Pumping Virtual Iron
The upcoming release of Virtual Iron 3.1 lacks many of the advanced features of VMware’s Virtual Infrastructure Server 3, but it does showcase that VMware’s competition is not terribly far behind. In some ways, in fact, the competition is actually ahead: Virtual Iron 3.1 supports as many as 16 CPUs and 96 GB of RAM per virtual machine, compared with VMware’s current limits of four CPUs and 8 GB of RAM.
Moreover, Virtual Iron extends Xen with enhanced memory management that allows 32-bit and 64-bit guests to run side by side, full virtualization that allows guest OSes to run completely unmodified (the current Xen release requires guest OSes to be modified to run in a Xen environment), and significant work to increase the I/O performance of guest OSes. These features will be present in the forthcoming Xen 3.1 release, but Virtual Iron is offering them now, along with GUI management tools.
Virtual Iron 3.1 is a pure Java application that can find a home on a Windows or Linux server, and it ships as a binary GUI install wizard. The setup is minimal, and this first-built server is the equivalent of VMware’s VirtualCenter, with one important difference: In addition to providing the management tools, it serves as the deployment system for the host servers.
When installing Virtual Iron 3.1, it’s best to build the server with several network interfaces. One of these NICs will connect to a management network, which should be constructed as an isolated network segment linking all virtualization host servers and the management server. This is because, by default, the Virtual Iron 3.1 server acts as a DHCP/PXE boot server on that segment, making deployment of virtualization hosts generally as easy as powering on a new server.
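Virtual Iron bundles its own DHCP/PXE services, but the pattern is the standard network-boot one. As a rough sketch of what such an isolated boot segment involves — the interface name, address range, and boot-image filename below are illustrative placeholders, not Virtual Iron’s actual values — a dnsmasq-style configuration would look something like this:

```
# Sketch of DHCP + PXE service on an isolated management segment.
# eth1, 192.168.50.x, and pxelinux.0 are placeholders; Virtual Iron
# ships its own equivalent services out of the box.
interface=eth1                          # listen only on the management NIC
bind-interfaces                         # don't answer DHCP on other segments
dhcp-range=192.168.50.100,192.168.50.200,12h
enable-tftp                             # serve the boot image over TFTP
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0                    # image new hosts fetch at power-on
```

Keeping this segment isolated matters: a rogue DHCP server answering on the same wire, or this one answering on a production wire, would disrupt deployments or the production network, respectively.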
When the hosts PXE-boot, they run a highly modified Linux kernel with no console, so there’s no need for a KVM switch on the server: There’s nothing to see and no way to access the system other than through the Virtual Iron management console. Disks local to these servers are available, as are any NICs and HBAs supported by the Virtual Iron kernel. In the testing I was able to conduct in Virtual Iron’s labs, this included Emulex and QLogic 2Gb FC (Fibre Channel) HBAs, SATA and SCSI disks, and Intel and Broadcom NICs.
Once booted, these servers are visible from within Virtual Iron’s Java-based management application, which lays out hosts and virtual servers in an easily digestible hierarchy. The interface is quirky, requiring that every action be followed by a click of the Commit button, which becomes annoying after a while, and the flow stutters in places, but it’s otherwise functional.
Room to grow
Creating a virtual server essentially entails choosing the number of CPUs, setting the RAM size, and specifying the disk resources to be used, much as with VMware. Prior to the 3.1 release, however, the disk resources had to be either an FC LUN or a local disk resource; no virtual disk support existed. With 3.1, vDisks conforming to Microsoft’s standard are supported, making deployment easier.
On the downside, there’s no iSCSI SAN or NFS support, so if you lack a Fibre Channel SAN, you’re forced to use local disk, and this precludes the use of the LiveMigration, LiveRecovery, and LiveMaintenance features.
All of these features are predicated on shared storage and the ability to shift running virtual servers from one host to another, akin to VMware’s VMotion. In practice, LiveMigration is very similar, with the guest OS migrating with nearly no operational interruption and no reboot required. LiveRecovery handles the abrupt failure of a host server by booting the VMs that were running on that server on another hardware node. LiveMaintenance is simply a quick way to initiate LiveMigrations of all virtual servers on a single hardware node to other nodes in order to bring down a server for maintenance.
In addition, LiveCapacity will dynamically migrate VMs between hosts to distribute the overall load evenly among all hardware resources, much like VMware’s DRS (Distributed Resource Scheduler). All of these features worked in my copy of the 3.1 beta.
So what’s lacking? Polish, performance, and the little bits around the edges. The console interaction provided by Virtual Iron 3.1 is fair for Windows guests but quite sloppy for Linux guests running X11; surprisingly, mouse tracking under Windows is far superior. Of course, most Linux guests won’t be running X11, which mitigates this problem somewhat.
Also missing are VM snapshot support and basic backup tools. Coupled with the lack of iSCSI and NFS support, the very basic network configuration options, questionable I/O performance, and the package’s obvious wet-behind-the-ears feel, these omissions may make it a bit of a hard sell for production use.
But then, Rome wasn’t built in a day, and I believe the lack of these features is more a matter of “haven’t gotten there yet” than “won’t get there”; it certainly seems that Virtual Iron is well on its way to becoming a true competitor in the virtualization world. If the next release — slated for the first quarter of 2007 — manages to address these issues, the company may find that market wide open, especially because, at $499 per processor, a full Virtual Iron 3.1 license costs a fraction of a comparable VMware license. In short, if Virtual Iron can keep up this pace, it’s definitely a contender.