LeftHand boosts its SAN/iQ

Powerful, easy-to-use management tames third-party and proprietary hardware

Many companies would embrace the superior performance and enhanced reliability of clustered storage were it not for the fear that adoption would cost a fortune and lock them into proprietary hardware.

Although that perception is, unfortunately, all too often justified, LeftHand Networks has long offered a clustered storage solution, dubbed SAN/iQ, that runs not only on proprietary hardware but also on plain-vanilla gear such as HP ProLiant servers. Further, it offers a range of much-needed features, including replicas and snapshots. SAN/iQ also offers a set of powerful automated features, such as load balancing and redistribution of existing volumes across nodes, which remove a significant burden and cost from storage administration.

Last fall, LeftHand released SAN/iQ 6.6, adding support for larger disk drives to its clustering capabilities, as well as a redesigned management console with a treelike interface that makes administering a storage cluster more intuitive.

In my test environment, I had four HP ProLiant DL380 G4 servers, each mounting six SCSI drives with 146GB capacity. The fifth machine in my cluster was an NSM 260 (Network Storage Module), a proprietary storage array from LeftHand with 12 SATA drives of 250GB capacity. Each machine was running LeftHand Networks SAN/iQ, which made each of them an active node of the clustered iSCSI storage network.

Throughout my testing, flexibility jumped out as a clear differentiator between SAN/iQ and traditional nonclustered solutions. For example, you can use SAN/iQ management tools to easily combine the capacity of two or more nodes without disrupting live applications. Another remarkable feature: When you add a node, SAN/iQ will automatically redistribute existing volumes over it. This translates to improved performance in that I/O operations will be spread over more disks and more controllers.
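The redistribution idea can be sketched in a few lines of Python. This is a simplified conceptual model, not LeftHand's actual restriping algorithm: a volume's extents are spread round-robin across the cluster's nodes, and adding a node rebalances the layout so more disks and controllers service I/O in parallel.

```python
# Conceptual sketch of striping a volume across cluster nodes.
# This models the idea only; it is not SAN/iQ's internal logic.

def stripe(extents, nodes):
    """Map each volume extent to a node, round-robin."""
    return {e: nodes[e % len(nodes)] for e in range(extents)}

# Volume of 12 extents on a two-node cluster:
layout = stripe(12, ["node1", "node2"])

# After a third node joins, a restripe rebalances the extents:
layout = stripe(12, ["node1", "node2", "node3"])

# Each node now holds an equal share, so reads and writes are
# spread over more spindles and controllers.
per_node = {}
for node in layout.values():
    per_node[node] = per_node.get(node, 0) + 1
print(per_node)  # each of the three nodes holds 4 of the 12 extents
```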


Admins can access the system’s various features via SAN/iQ’s management console, a Java application that runs on either a Windows or a Linux machine connected to the iSCSI network. Through the console, admins can combine nodes into a cluster; create a new volume from that pool of storage and assign it to an application server; set the level of data redundancy for each volume; and, if needed, limit the bandwidth used for background tasks such as restriping a volume to a new node.

The console has numerous wizards that facilitate just about any administrative task. In addition, SAN/iQ creates a level of abstraction from the storage device that makes working on an HP machine or on the LeftHand proprietary NSM equally seamless. That simplicity of management, however, belies some powerful features, such as an unlimited number of snapshots that make optimum use of space by copying only the delta of changed data.
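The delta-only approach can be illustrated with a minimal copy-on-write sketch. This is a conceptual model, not SAN/iQ's internals: after a snapshot is taken, only blocks that subsequently change have their old contents preserved, so an unchanged volume costs a snapshot almost nothing.

```python
# Minimal copy-on-write snapshot sketch (conceptual, not SAN/iQ internals):
# a snapshot stores only the blocks that change after it is taken.

class Volume:
    def __init__(self):
        self.blocks = {}     # block number -> current data
        self.snapshots = []  # each snapshot: {block -> pre-change data}

    def snapshot(self):
        self.snapshots.append({})

    def write(self, block, data):
        # Preserve the pre-write data in the newest snapshot, but only
        # the first time that block changes after the snapshot.
        if self.snapshots and block not in self.snapshots[-1]:
            self.snapshots[-1][block] = self.blocks.get(block)
        self.blocks[block] = data

vol = Volume()
vol.write(0, "alpha")
vol.write(1, "beta")
vol.snapshot()
vol.write(1, "BETA")      # only this delta is copied
print(vol.snapshots[-1])  # {1: 'beta'} -- one block, not the whole volume
```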

Another great feature automatically maintains as many as three replicas of the same volume on different nodes inside the cluster to protect against simultaneous enclosure failures.

Thin provisioning, also noteworthy, virtually eliminates the wasted space caused by inaccurate capacity estimates. Thin provisioning is tantamount to issuing a capacity IOU to an application, one that is honored automatically when the initial allocation for a volume is completely used. This feature not only saves you from overallocating space, but also enables a utilitylike, buy-as-you-grow approach to storage.
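The IOU metaphor can be made concrete with a small sketch. This is a conceptual model under assumed 1GB allocation chunks, not SAN/iQ's implementation: the volume advertises its full size to the application, but physical space is consumed only as chunks are actually written.

```python
# Thin provisioning sketch (conceptual model, not SAN/iQ's implementation):
# the volume advertises its full size, but physical space is allocated
# only when a chunk is first written.

class ThinVolume:
    def __init__(self, advertised_gb):
        self.advertised_gb = advertised_gb  # the "IOU" to the application
        self.written = set()                # physically allocated 1GB chunks

    def write_chunk(self, chunk):
        if chunk >= self.advertised_gb:
            raise ValueError("write beyond advertised size")
        self.written.add(chunk)             # allocate on first write

    def physical_gb(self):
        return len(self.written)

vol = ThinVolume(500)       # the app believes it has 500GB
for chunk in range(40):
    vol.write_chunk(chunk)  # only 40GB has actually been written
print(vol.physical_gb())    # 40 -- the rest is consumed only when used
```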

The first script in my test plan was to define a primary storage location for a SQL server database over a four-node cluster and to create a secondary storage pool over the NSM for disaster recovery. To ensure maximum data integrity, the database had to have local and remote replicas automatically updated according to a schedule.

Using the SAN/iQ console, it took me minutes to group the four HP DL380s into a cluster, carve two volumes — one for my database data and the other for the database log — and assign them to the SQL server machine. Creating the remote cluster and setting up scheduled replicas of the database proved equally fast and easy.

I certainly could have created the same configuration using conventional storage solutions instead of SAN/iQ. However, it probably would have come at a cost and a level of complexity that make similarly ironclad data protection only a dream for many midsize companies.

My second exercise was to measure how adding more nodes to a cluster improves performance. To simplify the test, I shrank my primary cluster down to a single node via the management console, an activity just as easy and nondisruptive as adding nodes.

From my application server, I prepared Iometer to run a standard script with random read/writes of 8KB and set the number of outstanding I/Os to 32. After starting the test, Iometer settled around 1,000 IOPS (I/O operations per second). Without stopping the benchmark, I added a second node: The IOPS dropped to the low hundreds while SAN/iQ was restriping the volume across the two nodes. Notably, you can easily contain the performance drop during restriping by setting a threshold from the management console, which I didn’t bother doing during my testing.

A few minutes later, the restriping completed, and Iometer showed about 1,900 IOPS on that volume, which was, as I expected, roughly twice the performance measured on a single node. Adding a third node produced a similar, near-linear increase in performance.
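A quick back-of-the-envelope check puts those numbers in perspective. The IOPS figures below are the measurements reported above; the efficiency figure is simply derived from them:

```python
# Scaling check for the Iometer results (8KB random I/O, 32 outstanding):
# ~1,000 IOPS on one node, ~1,900 IOPS after adding a second node.

single_node_iops = 1000
two_node_iops = 1900

ideal_two_node = single_node_iops * 2       # perfect linear scaling
efficiency = two_node_iops / ideal_two_node

print(f"scaling efficiency: {efficiency:.0%}")  # 95% of perfect linear scaling
```

An efficiency in the mid-90s is what you would hope for from a shared-nothing cluster, since each added node contributes its own disks and controllers rather than contending for a central array.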

Based on my testing, it’s difficult to find fault with SAN/iQ, perhaps because it’s difficult to compare the product to traditional, array-centered storage solutions. Based on my review, though, I can say that SAN/iQ is easy to manage, scales well, and includes the tools to create a responsive and safe storage platform for your databases at a reasonable price. Keep SAN/iQ in mind if your company is growing fast.

InfoWorld Scorecard: LeftHand Networks SAN/iQ 6.6
Reliability (20.0%): 9.0
Performance (20.0%): 9.0
Value (10.0%): 9.0
Management (20.0%): 9.0
Interoperability (10.0%): 8.0
Scalability (20.0%): 9.0
Overall Score (100%): 8.9