Review: HP Virtual SAN Appliance teaches dumb storage new tricks

HP’s LeftHand P4000 Virtual SAN Appliance offers a wealth of flexibility with a few caveats

Page 2 of 5

Otherwise, overall performance of the VSA depends entirely on the server, network, and storage hardware used in concert with the hypervisor on which it runs, and it scales to whatever heights the underlying hardware can reach. That said, when designing clustered storage systems that will take advantage of SAN/iQ's Network RAID for redundancy, keep in mind that the aggregate performance available to iSCSI clients will be slightly less than half of what the underlying hardware is capable of, due to the overhead of mirroring writes across the network.
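To put a rough number on that overhead, here's a back-of-the-envelope model. The 600MB/s per-node disk figure and the 0.9 network-efficiency factor are illustrative assumptions of mine, not HP specifications:

```python
# Rough model of Network RAID 10 write overhead (illustrative numbers).
# Every client write is committed locally and mirrored synchronously to a
# partner node over the storage network, so aggregate write throughput
# ends up a bit below half of what the disks could do in total.

def aggregate_write_throughput(node_disk_mbps, nodes, mirror_efficiency=0.9):
    """Estimate cluster-wide write throughput under Network RAID 10.

    mirror_efficiency is an assumed fudge factor for network and protocol
    overhead on the mirrored copy -- not an HP-published figure.
    """
    raw = node_disk_mbps * nodes          # what the disks could do in total
    return raw / 2 * mirror_efficiency    # each write lands on two nodes

# Two nodes, each capable of roughly 600MB/s to local RAID 10 disk:
print(aggregate_write_throughput(600, 2))  # 540.0 -- under half of 1,200MB/s raw
```

Reads aren't mirrored, so they don't pay this penalty; it's the synchronous write path that loses half the hardware's potential.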

From a capacity perspective, the P4000 VSA is more limited than its physical counterparts due to the license-based 10TB/VSA limitation. And the P4000-series limitations are already notable: Because best practice generally dictates using both RAID10 on the underlying storage hardware (DAS or SAN) as well as Network RAID10 across nodes in the storage cluster, the ratio of accessible storage to raw storage is about one to four -- extremely low by any measure.
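The capacity math is easy to sketch. The two fractions below follow directly from the stacked mirroring layers described above; the 5TB target is purely illustrative:

```python
# Stacked mirroring: local RAID 10 on each node's disks, then Network
# RAID 10 across nodes. Each layer halves the usable fraction.
LOCAL_RAID10 = 0.5    # mirrored pairs on the node's DAS
NETWORK_RAID10 = 0.5  # volumes mirrored across cluster nodes

usable_fraction = LOCAL_RAID10 * NETWORK_RAID10
print(usable_fraction)  # 0.25 -- one usable TB per four raw TB

def raw_needed(usable_tb):
    """Raw disk required to present a given usable, protected capacity."""
    return usable_tb / usable_fraction

print(raw_needed(5))  # 20.0 -- 20TB of raw disk for 5TB usable
```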

Arguably, the performance and capacity hits that result from network-based mirroring should be expected in any synchronously replicated storage platform. While I think that's largely true, the P4000's approach to synchronous mirroring has one major drawback you don't find in other solutions. Typically, storage vendors offer synchronous mirroring as a means to provide an extremely low RPO when protecting a storage infrastructure that's already shielded by multiple layers of redundancy: local RAID, multiple controllers with cache mirroring, diverse storage networks, and so on. In other words, synchronous mirroring is typically reserved for extremely mission-critical systems that can benefit from it and for which the capacity and performance overhead is worthwhile.

But in the P4000 series, synchronous mirroring is almost always used because each storage node represents a nonredundant controller and storage combination. To protect against the relatively common eventuality of a catastrophic "controller" (server) failure, both the controller and the storage attached to it must be duplicated. It's a trade-off born from the assumption that using redundant industry-standard server hardware and DAS is ultimately less expensive and more flexible than a purpose-built SAN that includes fully redundant controller resources. While this may be the case in a large number of instances, it's important that potential customers account for the resulting overhead in their planning.
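That planning overhead can be sketched as a rough sizing exercise. The sketch below is my own simplification, not an HP sizing formula: it assumes nodes are deployed in mirrored pairs, that each pair contributes at most one VSA license's worth (10TB) of usable capacity, and it ignores snapshot and management reserves:

```python
import math

VSA_LICENSE_CAP_TB = 10   # per-VSA capacity limit cited in this review

def nodes_required(usable_tb):
    """Minimum VSA count for a target usable capacity under Network RAID 10.

    Assumes every node's capacity is duplicated on a partner, so nodes
    come in pairs and each pair yields at most VSA_LICENSE_CAP_TB usable.
    A planning simplification, not an HP formula.
    """
    pairs = math.ceil(usable_tb / VSA_LICENSE_CAP_TB)
    return pairs * 2

print(nodes_required(10))  # 2 -- one mirrored pair
print(nodes_required(25))  # 6 -- three mirrored pairs
```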

The HP P4000 VSA lacks support for multicore vSMP -- a potential issue in environments with extremely heavy storage loads. Above, a sustained read pushes the VSA's CPU utilization to 90 percent.

The P4000 VSA in the lab
In testing the P4000 VSA, my goal was to replicate the process of implementing shared storage in a preexisting virtualization environment. To that end, I installed VMware vSphere 5 on two HP ProLiant DL385 G7 servers, each equipped with dual AMD "Interlagos" 6220 processors and 32GB of RAM. Each server also included a brick of 15K SAS disks attached to the onboard P410i RAID controller and accelerated by a flash-backed write cache.

Once the initial tasks of configuring a virtual machine for VMware's vCenter management console and a few Windows server test boxes (to emulate existing VMs) were complete, it was time to get the VSA running. There are two ways to do this: You can manually import and configure the OVF (Open Virtualization Format) virtual appliances onto the virtualization hosts and install the Centralized Management Console, or you can use HP's automated, wizard-driven installation tools that do all of that for you.

Getting started
I opted to take the road less traveled and go about the task manually. This allowed me to get an idea of what's actually happening under the hood. Note that much of the following can be accomplished in far less time using the wizards.

The first thing to do was prepare each of the hosts to connect to a SAN via iSCSI (most stand-alone hosts would not already be configured for this). In my case, that meant attaching a pair of unused gigabit NICs (the DL385 G7 ships with four) to a new VMware vSwitch, configuring a pair of VMkernel interfaces for the host to use to connect to the SAN, and configuring a VM port group to allow the VSAs to coexist on the same network. I then connected those NICs to a pair of redundant switches and configured the switch ports for a new VLAN that would be dedicated to storage traffic.

I installed two P4000 Virtual SAN Appliances, created a cluster, and configured a redundant, shared Network RAID10 volume in about an hour -- and I would have been done sooner if I'd used HP's wizard-driven installation tools.