Citrix XenServer is a commercial implementation of the open source Xen virtualization solution. Citrix has extended the base Xen engine with management tools, tightened up various components related to implementing and managing Windows and Linux virtual machines, and integrated the whole stack with the company's virtual desktop initiative, as well as its foundational server-based desktop and application delivery solutions.
Citrix XenServer is a solid server virtualization offering that installs easily, boasts good hypervisor performance, and includes enterprise capabilities such as load balancing and high availability. I did encounter a few snags with the management console and some rough edges in the overall solution. There is a lot to like about Citrix XenServer, but it isn't as polished as some of the other options.
Citrix XenServer installation
Citrix XenServer 5.6.1 installs as easily as VMware vSphere and Red Hat RHEV. Fire up a physical server with the install media, and within a few minutes you have a functional XenServer host. Like other Linux-based virtualization products, you can opt to install via PXE and pull the required packages in from HTTP, FTP, or NFS repositories. Plus, the ability to leverage automated installation scripts makes installing multiple hosts simple and straightforward. Once the hosts are built, you install the Windows-based XenServer management console, connect it to one of the new hosts, and you're off and running.
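A scripted install of this kind is driven by an XML answerfile that the installer fetches at boot. The sketch below shows the general shape of such a file; the element names follow the XenServer installation documentation, but the repository URL, password, and other values are placeholders, and the full schema supports more options than shown here.

```xml
<!-- Minimal unattended-install answerfile sketch (values are placeholders) -->
<installation>
  <!-- Disk to install XenServer onto -->
  <primary-disk>sda</primary-disk>
  <!-- HTTP repository holding the installation packages -->
  <source type="url">http://repo.example.com/xenserver/</source>
  <root-password>changeme</root-password>
  <keymap>us</keymap>
  <timezone>America/New_York</timezone>
</installation>
```

Pointing each PXE-booted host at the same answerfile is what makes building out many hosts repeatable.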
Configuring the host is fairly simple, with the usual steps of identifying and configuring network trunks, locating the storage, and other general configuration tasks. In these tests, XenServer was able to leverage the Citrix StorageLink APIs that allow XenServer to configure the iSCSI SAN array itself. This is a bit of a double-edged sword, though: It brings the benefits of copy and zero off-loading and other advanced features, but requires that each VM reside in a dedicated LUN on the array, rather than a large general LUN. With a large SAN and a large number of VMs, the LUN count can balloon, complicating otherwise simple management and administration tasks outside of XenServer. It would be nice to have the option of using dedicated LUNs or a general LUN, while still getting the advanced SAN features.
After the first host has been configured for storage and networking, configuring the other hosts is as simple as adding them to a pool. Each host is automatically configured identically to the initial master host, which makes rolling out a large number of hosts painless. Unless you're using DHCP, you may need to manually assign IP addresses to the storage networks on each host.
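From the command line, joining a host to an existing pool and fixing up its storage-network addressing comes down to a couple of `xe` commands. A sketch, with hypothetical addresses and credentials:

```shell
# On each additional host, join the pool managed by the master
# (address and credentials are placeholders):
xe pool-join master-address=10.0.0.10 \
    master-username=root master-password=secret

# If the storage network needs a static IP on this host, assign it
# to the right physical interface (the PIF UUID is host-specific):
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
    IP=10.0.1.21 netmask=255.255.255.0
```

Once joined, the host inherits the pool's network and storage configuration from the master automatically.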
Citrix XenServer management
The XenServer management console is a Windows application that connects to the master server in the farm. Like VMware's vCenter Server and Red Hat's RHEV Manager, and unlike Microsoft's Virtual Machine Manager, the XenServer console allows management of all hosts and VMs. There is also a management API and integration SDKs for a variety of development platforms. XenServer is based on Linux, so it's no real surprise that all management operations have a CLI counterpart that can be run on any host, and there are a few CLI commands that can come in handy when the GUI has problems conducting an operation. For example, more than once I found that the only way to disconnect an ISO image from a VM was via the command line due to stalled operations within the management client.
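As an example of that kind of GUI rescue, ejecting a stuck ISO from a VM takes two `xe` commands on any host in the pool. The VM name below is hypothetical:

```shell
# Look up the VM's UUID by name; --minimal prints just the UUID
VM_UUID=$(xe vm-list name-label="win2008-test" --minimal)

# Force the virtual CD drive to eject when the GUI operation stalls
xe vm-cd-eject uuid=$VM_UUID
```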
Overall, the management UI is straightforward, offering several different perspectives on the infrastructure, such as viewing all VMs by operating system type, current status, folder, or assigned tag. This makes finding specific VMs in a large implementation easier than paging through a huge list.
Internally, however, VMs are referred to by a unique UUID, which makes sense from an architectural standpoint, but requires looking up the UUID of a specific VM on which to run CLI operations. Thankfully, the CLI provides tab completion of these IDs, which does help considerably.
The XenServer console also handles all snapshotting and backup features, which are essentially the same thing. You can configure scheduled snapshots to occur on a per-VM basis, and you can select the number of snapshots to maintain for each host.
XenServer's high-availability and load-balancing features are quite functional, but require some supporting players and configuration. To enable high availability, a central storage LUN must be configured and available to each host, though it only needs to be a gigabyte in size at most. Each server maintains state information on this file system, which is used by the cluster to determine if a host is truly down or if a networking issue is disrupting normal communication. When I pulled a blade from the chassis, taking down a host that had been running six VMs, it took roughly five minutes before those VMs began booting on other hosts -- slightly longer than the other solutions, but still quite reasonable.
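Enabling HA from the CLI reflects that dependency on the shared heartbeat LUN: You point the pool at the storage repository backing that LUN and, optionally, set the failure tolerance. A sketch, with a hypothetical SR name:

```shell
# Find the UUID of the shared storage repository used for heartbeating
SR_UUID=$(xe sr-list name-label="HA-heartbeat" --minimal)

# Enable HA across the pool, using that SR for heartbeat/state storage
xe pool-ha-enable heartbeat-sr-uuids=$SR_UUID

# Optionally cap how many host failures the pool should tolerate
xe pool-param-set ha-host-failures-to-tolerate=1 \
    uuid=$(xe pool-list --minimal)
```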
The XenServer management console is not without its quirks. For instance, to assign ISO images to boot each VM for installation, you must define and link to specific NFS or CIFS ISO repositories. You can't simply map a DVD device to a local ISO on your PC or to an ISO somewhere on the iSCSI storage. Further, mapping that ISO isn't handled via the VM settings, but defined during VM creation. As a result, modifying it later was problematic, throwing errors when different ISOs were linked to an already configured VM.
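Defining one of those ISO repositories is itself a CLI one-liner; the sketch below registers an NFS export as an ISO library. The server and path are placeholders, and CIFS shares take slightly different `device-config` keys:

```shell
# Register an NFS export as an ISO library (server/path are placeholders)
xe sr-create name-label="ISO Library" type=iso content-type=iso \
    device-config:location=nfsserver:/export/isos \
    device-config:legacy_mode=true
```

Any ISO dropped into that export then becomes selectable when creating a VM.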
This issue would also crop up when trying to start VMs linked to ISOs in a repository that was either offline or otherwise inaccessible, without a clear error message to that effect. This may sound like a minor problem, but it was frustrating to build a VM with a linked ISO, begin the installation, and then realize that you needed to reboot the VM to restart the installation, only to find that the VM wouldn't reboot and instead had to be deleted and re-created.
There are other idiosyncrasies related to VM management, such as the fact that all VM operations are serialized. Although you can select multiple VMs to boot or shut down, they will do so one at a time, making bulk operations tedious and slow. Parallelization of these functions would be a significant time-saver.
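The serialization appears to be a console behavior rather than a hard limit, since the CLI will happily overlap operations. A rough workaround is to background each `xe vm-start` in a shell loop; the filters below are real `xe` list parameters, but treat this as a sketch rather than a supported bulk-operations feature:

```shell
# Start every halted guest VM, overlapping the operations by
# backgrounding each xe call (excludes the control domain, dom0)
for uuid in $(xe vm-list power-state=halted is-control-domain=false \
        --minimal | tr ',' ' '); do
    xe vm-start uuid=$uuid &
done
wait   # block until all the backgrounded starts have returned
```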
By contrast, XenServer's instant cloning feature offers a fast, easy method to make many copies of a template. Rather than require any input, it simply begins building a new VM identical to the template with a single click. On the other hand, once you convert a VM to a template, you can't convert it back, so modifications of existing templates are a bit of a pain.
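One commonly cited workaround for the one-way template conversion is that a template is internally just a VM with its `is-a-template` flag set; flipping the flag back makes it an editable VM again. This is an unsupported trick rather than a documented feature, so use it with care:

```shell
# Turn a template back into a regular, editable VM
# (unsupported workaround; the UUID placeholder is the template's)
xe template-param-set is-a-template=false uuid=<template-uuid>
```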
Like Microsoft Hyper-V, XenServer uses a balloon driver to handle dynamic memory allocation to VMs. This method of stretching physical memory resources is functional, but doesn't go as far as the memory management features in the VMware or Red Hat hypervisors.
Overall, the Linux concurrent thread tests showed that XenServer has a slight edge over Hyper-V, RHEV, and even VMware vSphere, but by a very small margin. This advantage was more pronounced in the Windows tests, but only when the physical host was not also carrying the weight of other loaded VMs. Running on an otherwise quiescent host, the Windows tests showed significant leads in intercore bandwidth, but with an accompanying increase in latency. This is likely due to the scheduling and core selection methods of the XenServer hypervisor.
Once the same physical host was loaded down with other VMs, several of XenServer's Windows numbers dropped significantly, coming in under the results posted by the competition. Other numbers, including the crypto bandwidth tests, were in line with those of VMware. (Like VMware vSphere, XenServer exposes the AES-NI instructions of the Intel Westmere CPUs to the VMs.) When compared to vSphere, the storage numbers were slightly lower.
Citrix XenServer offers plenty of virtualization bang for no up-front cost whatsoever, and the licensing for the Enterprise edition -- which includes every feature save for the physical host provisioning, site recovery, and lifecycle management options -- is a reasonable $2,500 per physical server. Because there are no restrictions on server type, you can use four-socket servers with the same license.