LeftHand Networks looks beyond DAS
DSM 4.2 is a scalable SAN with improved capacity and administration tools
Many storage solutions can expand only up to a point, limited by the number of disk enclosures their controllers support. After crossing that threshold, you’re back to managing discrete storage systems, although not as many as you would with DAS (direct attached storage). An ideal storage system should accept additional capacity as need arises, without compromising manageability and performance.
LeftHand Networks’ DSM (Distributed Storage Matrix) comes very close to that ideal: a clustered storage solution for Linux and Windows servers built on modular storage arrays linked over an IP network. It scales easily and is largely self-administering, which will keep IT happy.
I Want My DSM
Previous DSM products were based on four-drive NSM (Network Storage Module) 100 boxes; DSM 4.2 includes the NSM hardware, the SCC (Storage Control Console) management software, and the NSM OS. Its modular nature means companies can order a DSM with one or more NSMs, based on their needs.
Also, DSM now supports larger eight-drive, 2U-sized NSM 200 boxes. It can provision storage for Windows Server 2003 and offers optional Remote IP Copy software that supports asynchronous replication of remote volumes.
I ran DSM 4.2 on three rack-mountable NSM 200 modules. Each unit mounts eight hot-swappable drives in 160GB or 250GB capacity, includes two GbE (Gigabit Ethernet) ports, and has a redundant, field-replaceable power supply.
I connected the NSMs to my GbE switch, set their IP addresses, and installed the Java-based SCC management software on a Mandrake Linux machine (Windows is also an option).
LeftHand Networks developed its own connectivity protocol, AEBS (Advanced Ethernet Block Storage), while iSCSI was still on the drawing board, so I had to install AEBS drivers on my Windows Server 2003 and Windows 2000 machines. iSCSI support is slated to arrive by year’s end, and although they weren’t in my test set, AEBS drivers for Linux should be available at the end of October.
Each NSM box combines controller and storage enclosure, and NSMs cooperate peer to peer to form storage systems with exceptional self-managed resilience, performance, and scalability. The SCC management software acts as the glue, grouping NSMs into homogeneous storage pools.
Using SCC, I automatically discovered the three NSMs in my LAN and assigned them to a Management Group. Administrators can use Management Groups to separate storage for production and test environments or for different departments, placing capacity where it is needed most.
Within each Management Group, you can further aggregate NSMs into clusters: storage pools from which to carve volumes. To optimize performance, volumes automatically spread across all NSMs in a cluster, according to built-in algorithms. Adding a new NSM to a cluster automatically reallocates existing volumes to take advantage of the additional spindles, without disrupting client access.
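LeftHand does not publish its placement algorithm, but a simple round-robin striping sketch (illustrative only; the names `place_extents` and `nsm-1` are invented here) shows the basic idea of spreading a volume's extents across every module in a cluster, and why adding a module triggers a reallocation:

```python
# Illustrative sketch only: spread a volume's extents round-robin across
# all modules in a cluster, so every spindle shares the load.

def place_extents(num_extents, modules):
    """Map each volume extent to a module, round-robin across the cluster."""
    return {extent: modules[extent % len(modules)] for extent in range(num_extents)}

# A 3GB volume carved into 1GB extents on a one-module cluster
# lands entirely on that module.
layout = place_extents(3, ["nsm-1"])
assert set(layout.values()) == {"nsm-1"}

# Adding a second NSM changes the mapping, so existing extents
# are redistributed across both boxes.
layout = place_extents(3, ["nsm-1", "nsm-2"])
assert set(layout.values()) == {"nsm-1", "nsm-2"}
```

The same logic explains the nondisruptive rebalancing: only the extents whose mapping changes need to move.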
To test this, I started with a cluster containing a single NSM box, created a 3GB volume, assigned that volume to an Authorization Group (similar to a Fibre Channel SAN zone), and granted access by adding one of my servers to the same group.
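The Authorization Group model can be pictured as follows. This is a hypothetical sketch, not SCC's actual API; like a Fibre Channel zone, a host reaches a volume only when both belong to the same group:

```python
# Hypothetical model of an Authorization Group: a server can access a
# volume only if some group contains both. Names are illustrative.

auth_groups = {
    "test-group": {"volumes": {"vol-3gb"}, "servers": {"win2003-1"}},
}

def can_access(server, volume, groups=auth_groups):
    """True if any group contains both the server and the volume."""
    return any(server in g["servers"] and volume in g["volumes"]
               for g in groups.values())

assert can_access("win2003-1", "vol-3gb")      # granted: same group
assert not can_access("win2000-2", "vol-3gb")  # denied: not a member
```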
Moving to that server, I acquired the new volume using AEBS, then formatted it with the standard Windows Disk Manager. Using a script, I wrote some data to that volume and launched Iometer to measure performance. The average response time was 48 milliseconds.
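For readers without Iometer handy, a rough stand-in for this measurement is to time a series of small synchronous writes to the mounted volume and average the results (the target path below is an assumption; point it at a file on the AEBS-mounted volume):

```python
# Rough latency probe: time small fsync'd writes and report the average
# response time in milliseconds. Not a substitute for Iometer's workload
# mixes, but enough for a quick sanity check.
import os
import time

def average_write_latency(path, block=4096, count=100):
    buf = os.urandom(block)
    samples = []
    with open(path, "wb") as f:
        for _ in range(count):
            start = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the write through to the volume
            samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples) * 1000.0  # milliseconds

# Example (path is hypothetical):
# print(f"avg response: {average_write_latency('E:/probe.bin'):.1f} ms")
```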