Review: HP Virtual SAN Appliance teaches dumb storage new tricks

HP’s LeftHand P4000 Virtual SAN Appliance offers a wealth of flexibility with a few caveats

The next thing to do was import a copy of the VSA virtual machine onto each host's local disk through the vSphere Client -- a relatively painless process that took only a couple of minutes per host. After both VSAs had finished importing, I attached their NICs to the new iSCSI VM port group and attached a 200GB VMDK-based disk to each VSA from its host's local storage. (If you do this, note that you must use SCSI IDs 1:0 through 1:4 for the system to recognize the disks you add, and you must use them in order.) From there, I powered up the VSA VMs and used the vSphere Client to access the console and configure basic IP address info. Once I was able to reach the VSAs' IP addresses over the network, it was time to install the management console.
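If you'd rather script the disk attachment than click through the vSphere Client, something like the following pyVmomi sketch shows the idea. It assumes you already have a connected service instance and a vm handle pointing at the VSA; treat it as an illustration of the SCSI 1:0 placement rule, not official HP tooling.

    from pyVmomi import vim  # pip install pyvmomi; assumes 'vm' is an already-located VSA VM object

    def attach_vsa_disk(vm, size_gb=200, unit=0):
        """Add a thick VMDK on a second SCSI controller (bus 1) so the
        VSA sees it at SCSI 1:0 through 1:4, as the appliance requires."""
        # New SCSI controller on bus 1; the negative key is a temporary
        # placeholder that vSphere resolves when the reconfigure runs.
        ctrl = vim.vm.device.VirtualLsiLogicController()
        ctrl.busNumber = 1
        ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
        ctrl.key = -101
        ctrl_spec = vim.vm.device.VirtualDeviceSpec()
        ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        ctrl_spec.device = ctrl

        # The disk itself: a flat VMDK created on the VM's local datastore.
        disk = vim.vm.device.VirtualDisk()
        disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        disk.backing.diskMode = 'persistent'
        disk.capacityInKB = size_gb * 1024 * 1024
        disk.controllerKey = ctrl.key   # ties the disk to the bus-1 controller
        disk.unitNumber = unit          # 0 => SCSI 1:0
        disk_spec = vim.vm.device.VirtualDeviceSpec()
        disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        disk_spec.device = disk

        spec = vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec])
        return vm.ReconfigVM_Task(spec=spec)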

All P4000-series SANs -- virtual or physical -- are managed through the same common client: the Centralized Management Console. This Windows-based client can be installed and run anywhere so long as it has access to the VSAs, though it's generally best not to run it on a virtual machine that will be dependent on the VSAs themselves.

The CMC prompted me to discover existing VSA systems, a task that can be completed by manually entering the VSAs' IP addresses or by scanning a range of IP addresses (an ability that makes adding a large number of P4000 appliances easy). Once the CMC had discovered the VSAs, it prompted me to create a Management Group to contain the new appliances. The Management Group is a collection of P4000 appliances that will be managed within the same administrative domain. Each Management Group has its own administrators, iSCSI server definitions, and alerting properties. Most organizations will need only a single group.
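Range-based discovery amounts to sweeping a block of addresses for listening appliances. As a rough illustration, the Python sketch below probes every address in an inclusive range for an open TCP port; the port number is a stand-in, not the CMC's actual discovery mechanism.

    import socket
    from ipaddress import ip_address

    MGMT_PORT = 16022  # hypothetical placeholder; check your P4000 docs for the real port

    def discover_range(first, last, port=MGMT_PORT, timeout=0.5):
        """Return every address in [first, last] with something listening on port."""
        found, addr, stop = [], ip_address(first), ip_address(last)
        while addr <= stop:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((str(addr), port)) == 0:
                    found.append(str(addr))
            addr += 1
        return found

    print(discover_range("192.168.50.10", "192.168.50.30"))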

Creating a storage cluster
My next task was to create a storage cluster with my VSAs. This isn't strictly necessary -- each VSA can offer up its own storage without being in a cluster -- but I wanted to take advantage of the redundancy benefits that come from mirroring storage across multiple appliances. Note that clustering also lets you stand up a single-VSA cluster on backup hardware to act as a nonredundant remote replication target if you wish.

Creating a cluster is typically as simple as picking the VSAs you want to participate, specifying a virtual IP address for the cluster, and hitting go. However, one wrinkle was introduced by the fact that my test configuration involved only two vSphere hosts, each with its own VSA. Because the VSA effectively implements RAID over the network, one of the challenges it must deal with is a storage isolation scenario in which one or both of the VSAs or hosts become disconnected from the network.

In these cases, it's critical that the VSAs do not both assume the other has failed and continue to operate. That situation can lead to the dreaded "split brain" scenario, wherein the two mirrored copies start to diverge as the active virtual machines on each host continue to make changes to their volumes independently of each other.

To avoid this, P4000 clusters must always maintain a quorum of more than half of the member nodes. If that quorum isn't achieved, all of the appliances will take their volumes offline. However, in a two-node cluster, there is no way to maintain a quorum. In this situation, a third, storageless VSA called a Fail-Over Manager (or FOM) is introduced to the Management Group.

The FOM's job is to ensure that a quorum can always be achieved by at least one of the cluster nodes should an isolation scenario occur. If I'd had three hosts to work with, I wouldn't have needed to introduce the FOM to the mix. Fortunately, the FOM is extremely easy to install -- the process is very similar to installing the VSAs, except that no local storage is added to the appliance. Although I installed the FOM on one of my two VSA hosts, in practice it should be located on a third, completely separate box.
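The quorum arithmetic is simple enough to sketch. The toy Python function below merely restates the majority rule described above -- it isn't HP's implementation -- but it shows why a two-node cluster needs the FOM's third vote.

    def volumes_stay_online(total_members, reachable_members):
        """A node keeps its volumes online only while it can reach more
        than half of the management group's members (counting itself)."""
        return reachable_members > total_members / 2

    # Two-node cluster, network partition: each VSA sees only itself.
    print(volumes_stay_online(2, 1))  # False -- both sides take their volumes offline

    # With a FOM as a third voter, whichever VSA still reaches the FOM
    # sees 2 of 3 members and keeps serving; the isolated VSA stands down.
    print(volumes_stay_online(3, 2))  # True
    print(volumes_stay_online(3, 1))  # False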

The P4000 VSA gives you a number of RAID options depending on the number of nodes in your cluster. I chose Network RAID10 to mirror the data across my two nodes.
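Mirroring isn't free, of course. As a back-of-the-envelope sketch (assuming Network RAID10 keeps two synchronous copies of every block, as a two-way mirror does), here's what that choice costs in usable space:

    def usable_gb(node_count, per_node_gb, level):
        """Rough usable capacity: Network RAID0 stripes with no redundancy;
        Network RAID10 stores two copies of everything, halving usable space."""
        raw = node_count * per_node_gb
        if level == "RAID0":
            return raw
        if level == "RAID10":
            return raw / 2
        raise ValueError("unhandled level: " + level)

    # My two-node lab, one 200GB VMDK per VSA:
    print(usable_gb(2, 200, "RAID10"))  # 200.0 -- half the 400GB raw capacity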