Free high availability: Create a XenServer virtualization cluster

With the free Citrix XenServer virtualization platform, it's easy to create a highly available virtual server cluster; here's how


Step 5: Install virtual machines
With a pool set up with external storage, you can create virtual machines that are not tied to any physical server. During the installation, choose the option to create the virtual hard drive on the external store, as well as the option "Don't assign this VM a home server..." Note that these VMs still run on a particular server, which provides their CPU and memory, but their virtual hard disks live on the shared storage. We chose to create both a Linux VM and a Windows VM, and our pool looked something like this:

myXENpool
- xennode03
- xennode01
  - Fedora Test
  - Windows XP Test
- xennode02
- CIFS ISO library
- iSCSI Target

As you can see, the VMs had been assigned to xennode01. Before moving on, we verified that both machines had good network connectivity by simply pinging the network interfaces on each.
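You can also sanity-check the placement from the command line on any pool member. A minimal check, assuming the parameter names below are available on your release (resident-on shows where a VM is currently running, and affinity should be empty for a VM with no home server), would look something like:

xe vm-list params=name-label,power-state,resident-on,affinity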

Step 6: The true test (graceful)
Now that we had two VMs running, we could run some high-availability tests. With a ping test running against both machines, we wanted to see what would happen if we stopped xennode01 (the hosting server). To do this gracefully, we would put that server into Maintenance Mode. Right-clicking xennode01 and selecting Maintenance Mode gives us a prompt about migrating the VMs -- namely, a live migration requires the installation of XenServer Tools on the VMs. Installing the tools on either Linux or Windows prompts a reboot (which does interrupt the ping test).

After the installation and reboot, verify that XenServer Tools installed correctly. You can easily see this on the General tab of the instance in question. Under "Virtualization state," you will see either "Tools not installed" or "Optimized (version 5.6 installed)." Verification is important, as the XenServer Tools did not install properly on our Linux machine the first time.
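You can also check from the command line by dumping the VM's full parameter list; on our release, fields along the lines of PV-drivers-version show up in the output once the tools are in place (the exact field names may differ by version):

xe vm-param-list uuid=<vm-uuid>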

With the XenServer Tools properly installed in our Linux and Windows VMs, right-clicking xennode01 and selecting Maintenance Mode results in a smooth migration. During the migration, ping times rise from less than 1ms to about 30ms, and the VMs land successfully on xennode03, after which the pings return to less than 1ms.
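If you would rather drive the same graceful evacuation from the command line, the equivalent -- as best we can tell, and assuming the host= selector works on your release -- is to disable the host so no new VMs start on it and then evacuate its running VMs to other pool members:

xe host-disable host=xennode01
xe host-evacuate host=xennode01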

Step 7: The true test (clumsy)
So that was cool, but if there is going to be a hardware failure, it is doubtful that we will be able to switch gracefully to Maintenance Mode first. After ensuring that both the Linux and Windows VMs are running on xennode03 (which happens to be our master controller), we physically remove power (pull the plug) from xennode03.

Result? No surprise, the pings fail and we lose access in XenCenter. Trying to reconnect to the pool doesn't work because XenCenter accesses the pool and all of its nodes through the master controller. So how do we get control back? From one of the other physical servers, we use the local XenServer interface to navigate to Resource Pool Configuration. After a long wait, it becomes clear that we are getting nowhere. Using SSH to access xennode01, we type:

xe pool-emergency-transition-to-master

This command forces xennode01 (which we are currently SSH'ed to) to become the master controller.

xe pool-recover-slaves

This command causes the master controller to find the other nodes that are part of the pool, and inform them of the master controller change.
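To confirm that the change took, listing the pool record should (assuming the master parameter is exposed on your release) show xennode01's UUID as the new master:

xe pool-list params=name-label,master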

We're back! Well, not quite. We can now see the pool (by connecting XenCenter to a different IP address), but the VMs are not back online. Still using SSH, we run:

xe host-list params=uuid,name-label,host-metrics-live

This returns a list of the pool members, along with a host-metrics-live flag showing which hosts are actually up.
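The VMs that were running on the failed host are still marked as running there, so they cannot simply be started elsewhere. The usual recovery path -- sketched here under the assumption that your release supports these options -- is to force each VM's power state back to halted and then start it on a surviving node:

xe vm-reset-powerstate vm="Fedora Test" --force
xe vm-start vm="Fedora Test" on=xennode01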
