Review: HP Virtual SAN Appliance teaches dumb storage new tricks

HP’s LeftHand P4000 Virtual SAN Appliance offers a wealth of flexibility with a few caveats

Page 4 of 5

Creating volumes
After I directed the CMC to discover the FOM appliance and add it to the Management Group, I could create my storage cluster and start to allocate storage. When creating a volume, I was able to choose between two different types of volume redundancy: Network RAID0 and Network RAID10.

As the names imply, Network RAID0 stripes the stored data across the VSAs without any redundancy beyond that offered by each host's local RAID controller, while Network RAID10 synchronously mirrors data across the nodes. Had I built a larger cluster, my choices would have expanded to include Network RAID5 (minimum of four nodes) and Network RAID6 (minimum of eight nodes) as well. These levels use background snapshots to shed some of the stored redundancy and improve capacity efficiency. It's worth noting that all of the RAID levels handle writes as RAID10; the transition to RAID5 or RAID6 happens only after the system takes a snapshot.
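As a back-of-envelope illustration of what those redundancy choices cost in usable space, here's a quick sketch. The divisors follow the classic meanings the names imply (full stripe, two-way mirror, single and dual parity); the snapshot-based implementation's real accounting may differ, and the node counts and sizes are made up.

```python
# Rough usable-capacity math for the Network RAID levels discussed above.
# Assumes equal-size nodes and classic RAID overheads -- the actual
# snapshot-based RAID5/RAID6 efficiency may differ.

def usable_capacity_gb(level, nodes, gb_per_node):
    if level == "RAID0":        # pure stripe, no network-level redundancy
        return nodes * gb_per_node
    if level == "RAID10":       # two-way synchronous mirror
        return nodes * gb_per_node / 2
    if level == "RAID5":        # single-parity analog, four-node minimum
        assert nodes >= 4
        return (nodes - 1) * gb_per_node
    if level == "RAID6":        # dual-parity analog, eight-node minimum
        assert nodes >= 8
        return (nodes - 2) * gb_per_node
    raise ValueError(level)

# Two hypothetical 500GB VSAs mirrored, as in a two-node test setup:
print(usable_capacity_gb("RAID10", 2, 500))   # 500.0
print(usable_capacity_gb("RAID0", 2, 500))    # 1000
```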

Since I was looking for redundancy, I chose to use Network RAID10. After specifying the size of the volume and the hosts I wanted to have access to it, the CMC commanded the VSAs to create the volume. Moments later, I was ready to attach to the volume from the vSphere hosts, format a VMFS file system, and start using the new storage.

Now that the new VSA-based iSCSI volume was accessible by both hosts, I could start moving VMs onto the storage, effectively moving the VMs off one host's local storage and into a mirrored storage container that crossed both hosts. Because I was running vSphere Enterprise Plus on my test servers, I could accomplish this with no downtime by using VMware's Storage vMotion. Shops without licensing for that feature will need to power off their VMs prior to moving them.

Expanding storage
With my VMs running on the VSA, I was ready to make some changes to the environment that would commonly be undertaken by real-life users. In an era where storage needs are growing in leaps and bounds, one of the most common storage management tasks involves adding more storage, either to individual presented volumes or to the storage cluster as a whole.

Growing an individual volume is extremely easy: Simply edit the volume in the CMC interface and punch in a larger number. Growing my initial test volume from 200GB to 250GB took only a few seconds. Afterward, all that remained was to expand the VMFS volume from within the vSphere Client -- again, a matter of only a few seconds.

Adding storage to the entire VSA cluster is slightly more complex: an equal amount of storage must be added to each VSA (usable space is limited to that of the smallest cluster member), and each VSA must be shut down in order to add the disks. These two factors combine to make the process fairly time-consuming.
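Because the cluster only counts as much space per node as its smallest member provides, an uneven expansion strands capacity. A minimal sketch of the effect, with hypothetical node sizes:

```python
# Per-node usable space is capped at the smallest cluster member,
# so expanding only one node strands the extra capacity.
# Node sizes below are illustrative, not from the test setup.

def cluster_raw_capacity_gb(node_sizes_gb):
    return min(node_sizes_gb) * len(node_sizes_gb)

print(cluster_raw_capacity_gb([750, 750]))   # 1500 -- balanced expansion
print(cluster_raw_capacity_gb([750, 500]))   # 1000 -- 250GB stranded
```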

Each VSA shutdown and restart cycle -- while not disruptive if volumes are configured using Network RAID10 (mirroring) -- requires a storage resync before the next VSA can be taken down for maintenance. This resync is generally fairly quick, with only the changes made since the VSA was taken down copied over, but it is by no means instant and can vary heavily depending upon how much write activity is taking place on the volumes that the cluster serves.
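The length of that resync window can be estimated roughly: only the blocks written while a node was down need to be copied, so the time scales with the write rate and the length of the outage. All of the rates and durations below are illustrative, not measured figures from the review:

```python
# Back-of-envelope resync estimate: data changed during the outage
# divided by the effective resync throughput. All numbers illustrative.

def resync_minutes(write_mb_per_s, outage_minutes, resync_mb_per_s):
    changed_mb = write_mb_per_s * outage_minutes * 60
    return changed_mb / resync_mb_per_s / 60

# 5 MB/s of sustained writes during a 20-minute shutdown,
# resynced afterward at 50 MB/s:
print(round(resync_minutes(5, 20, 50), 1))   # 2.0
```

The same math shows why a busy cluster stretches the maintenance window: at 25 MB/s of writes, the same outage would take five times as long to resync.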

The P4000 VSA's central management console includes a detailed performance graphing utility, but requires you to explicitly start and stop its recording of performance statistics.