High availability used to be expensive, requiring both specialized software and redundant hardware. Today it almost grows on trees, but you need two key ingredients: virtualization and network storage. By creating a virtual server farm across multiple physical servers, and storing all of your virtual machines on a central SAN or NAS, you can ensure that the failure of any given piece of physical hardware will not bring down your virtual environment.
We recently created a highly available virtual server cluster based on the free edition of Citrix XenServer; this article outlines the process step by step. Although the obvious choice for any enterprise-grade virtualization deployment is VMware vSphere, I chose XenServer for two reasons. First, we're cheap. We don't like spending money on things that we can get for free. Second, and more important, the licensing of VMware is extremely confusing; we're never sure what exactly is required to be properly licensed. (I guess this also amounts to being cheap. We always feel that we are being overcharged for items we aren't fully utilizing.)
It should be mentioned that there are many different flavors of Xen available. Just about any version of Linux includes a Xen implementation. The version discussed here is not "pure" open source Xen, but XenServer, the commercial bare-metal hypervisor originally developed by XenSource, which was subsequently purchased by Citrix. The version of XenServer we used was the latest available at the time, 5.6.0.
Aside from the cost and licensing issues, the primary reasons to implement a virtual server farm on XenServer are the same as for using any other virtualization suite: hardware consolidation, flexible provisioning, and resilience against the failure of any single physical machine.
Step 1: Install XenServer
The current version of XenServer is available from Citrix. You will want the Product Installer ISO, which will be used to install both XenServer and its central management console, XenCenter. Additionally, if you are going to use virtual machines to run Linux, you will need the Linux Guest Support ISO. These will need to be burned to CD for installation on a bare-metal system.
Installation is very straightforward, following the instructions from the guided setup. Below is a brief excerpt of the installation process. Booting from the CD, you will be met with a Citrix screen. Pressing Enter or waiting on the timeout will proceed to the installation. An abbreviated version of the questions presented is listed in the table below, along with some generic responses.
|Select Keymap||[qwerty] us|
|Welcome to XenServer Setup||< OK >|
|EULA||< Accept EULA >|
|Select Installation Source||Local Media|
|Supplemental Packs||Yes -- Choose Yes if you intend to run Linux VMs in your environment. If your environment is going to be purely Windows, there is no need for Supplemental Packs.|
|Verify Installation Source||Skip verification -- If you are not confident in the downloaded ISO, you can verify it, but I have used these disks repeatedly and they are known to be good copies.|
|Set Password||< choose your password >|
|Networking||eth0 (< MAC >) -- This choice can vary depending on your system and the number of NICs you have. It is best practice to plug in the NIC you intend to use for administration and unplug all others; the unused NIC(s) will indicate "[no link]"|
|Networking (cont.)||Static Configuration: IP Address: < varies >; Subnet Mask: < varies >; Gateway: < not needed >|
|Hostname and DNS Configuration||Hostname: < what you want >; DNS: < must have at least a dummy IP >|
|Select Time Zone||America|
|Select Time Zone (cont.)||Los Angeles|
|System Time||Manual time entry|
|Confirm Installation||Install XenServer|
After the basic installation you will be prompted to install supplemental packs. We installed the Linux Guest Support pack and hit <OK>. A prompt asks whether you want to verify the disk, use it, or go back; choose <Use>. Because it is possible to install more than one supplemental pack, you will be prompted again. There are no other supplements we intend to use, so choose <Skip>. When finished, you will be prompted to reboot: <OK>.
Naturally, this installation procedure will be repeated on as many physical hosts as you want in your environment. For our purposes, we used three physical servers that we called xennode01, xennode02, and xennode03.
Step 2: Install XenCenter
Installation of XenCenter, from the same Product Installer ISO, is very straightforward, following a wizard. Once this is done, launch XenCenter. From this point forward, 99 percent of your administration of XenServer will be handled from XenCenter, which allows you to create, delete, start, stop, and administer virtual machines.
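XenCenter covers these operations graphically, but the same lifecycle actions are exposed by the xe command line on every XenServer host. A minimal sketch follows; the xe() function is a stub that simply echoes each command so the example runs outside a XenServer host, and the VM name is just an example:

```shell
# Stub standing in for the real xe binary so this sketch runs
# anywhere; on an actual XenServer host, drop this function and
# the commands below run for real.
xe() { echo "xe $*"; }

# List all virtual machines in the pool:
xe vm-list

# Start, then cleanly shut down, a VM selected by its name-label
# ("Fedora Test" is the example VM used later in this article):
xe vm-start vm="Fedora Test"
xe vm-shutdown vm="Fedora Test"
```

The vm= selector accepts either a name-label or a uuid, so the same commands work in scripts that operate on uuids.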
Step 3: Add servers using XenCenter
To administer each of the XenServer servers through XenCenter, you can simply "add a server." There are multiple shortcuts to this function, but for the sake of simplicity, using the top toolbar, select Server -> Add.
You will be prompted for the server IP and user/password credentials. Unless you changed something from the directions above, the user name will be root and the password will be as set during installation.
You may be prompted to verify the SSL certificate. <Accept>
Step 4: Create a XenServer pool
A pool in XenCenter is a collection of servers that you can manage as a group. If your physical servers are all of the same type, creating a pool will simplify administration, and if you intend to use XenServer's high-availability functions, a pool is required. By creating a pool and storing all of your virtual machines on an external share, the virtual machines are freed from ties to any specific physical host. In the event of a physical host failure, the VMs on that host can be restarted immediately on another host in the pool.
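Pools can be created in XenCenter, but a member host can also be joined to an existing pool from the CLI. A hedged sketch, again using an echoing xe() stub so it runs anywhere; the master address and password are hypothetical placeholders:

```shell
# Stub so the sketch runs outside XenServer; on a real prospective
# member host, call the xe binary directly.
xe() { echo "xe $*"; }

# Run on each prospective member (not on the master itself) to join
# the pool whose master is, say, xennode01. The address and password
# shown are placeholders for your environment.
xe pool-join master-address=192.168.1.101 master-username=root master-password=secret
```

After joining, the member reboots its management stack and appears under the pool in XenCenter.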
Step 5: Install virtual machines
With a pool set up with external storage, you can create virtual machines that are not tied to any physical server. During the installation, choose the option to create the virtual hard drive on the external store, as well as the option "Don't assign this VM a home server..." It should be noted that these VMs still run on some server, which supplies their processing and memory, but the virtual hard disks are stored elsewhere. We chose to create both a Linux VM and a Windows VM, and our pool looked something like this:
- Fedora Test
In the event of a host failure, we can query the pool from the CLI of any surviving member. Entering xe host-list, we get in return a record for every host in the pool, including the one that is down. So now we have the uuid for the host that is down; note the uuid a1716dba-7a75-4e99-94f6-27c00b8b122d. Now we enter:
xe vm-list resident-on=a1716dba-7a75-4e99-94f6-27c00b8b122d is-control-domain=false
This command lists the VMs the cluster thinks are running on the downed node. (The is-control-domain=false parameter removes the control domain, dom0, from the list.)
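When scripting this kind of recovery, the uuid of the downed host can be pulled out of the CLI output automatically rather than copied by hand. A small sketch, assuming the standard record format the xe CLI prints; SAMPLE and the name-label in it are hypothetical stand-ins for real output:

```shell
# Hypothetical sample of the record format the xe CLI prints for a
# host; on a live pool you would pipe the real command output in
# instead of this variable.
SAMPLE='uuid ( RO)                : a1716dba-7a75-4e99-94f6-27c00b8b122d
    name-label ( RW): xennode02
    name-description ( RO): Default install of XenServer'

# Take the value after the first ": " on the line beginning "uuid":
uuid=$(printf '%s\n' "$SAMPLE" | awk -F': ' '/^uuid/ {print $2; exit}')
echo "$uuid"   # a1716dba-7a75-4e99-94f6-27c00b8b122d
```

The extracted value can then be fed straight into a vm-list filter such as resident-on=$uuid.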