Six iSCSI SANs unleashed

Adaptec, Celeros, EqualLogic, Intransa, NetApp, and Rasilient move our megabytes

We've all been hearing about the simplicity and low cost of iSCSI for years now -- and how iSCSI would topple FC (Fibre Channel) as the storage networking technology of choice for shops moving from DAS to SAN. Yet entry-level SAN systems, such as those from Dell/EMC and Hewlett-Packard, although quick to adopt low-cost SATA drives, have continued to stick with FC interfaces. Even those that have offered iSCSI typically included FC as well. Fibre Channel has remained king, even for small SAN deployments.

That's finally changing. The current crop of iSCSI storage arrays proves that cost-effective SAN storage has matured, leaving FC arrays comfortable only at the high end. For most small infrastructures, the performance of FC simply isn't necessary, and the cost differential is significant. The combination of SATA hard drives and the iSCSI protocol has finally ushered in the new era of storage we've been waiting for, especially for small to midsize infrastructures.

At the true low end of iSCSI SAN implementations, servers don't require HBAs (host bus adapters) to talk to the SAN array -- standard Gigabit Ethernet NICs and software iSCSI initiators provide the connectivity. Standard gigabit switches can then be used to build the SAN itself.
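
To make that concrete, here's a minimal sketch of bringing up a software-initiator connection from a Linux host. It uses the open-iscsi iscsiadm tool (the linux-iscsi initiator used in this test is configured through a file instead), and the portal address and target IQN are placeholders rather than values from any of the arrays reviewed.

    #!/usr/bin/env python
    """Sketch: discover and log in to an iSCSI target using a software
    initiator. Relies on the open-iscsi iscsiadm command; the portal and
    IQN below are hypothetical placeholders."""
    import subprocess

    PORTAL = "192.168.1.50:3260"                # hypothetical array address
    TARGET = "iqn.2005-01.com.example:vol0"     # hypothetical target IQN

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Ask the array which targets it will expose to this initiator.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

    # Log in; the kernel then presents the LUN as an ordinary block device
    # (such as /dev/sdb) that can be partitioned and formatted like local disk.
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])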

Above the iSCSI software initiators come the iSCSI HBAs that off-load iSCSI processing from the server CPU and handle it in hardware on the HBA itself. Working with Gigabit Ethernet switching that supports jumbo frames, this type of iSCSI connection could push more than 100MBps to and from the iSCSI array. Not too shabby for commodity hardware.
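
A bit of back-of-the-envelope arithmetic shows why jumbo frames matter on a gigabit link. The sketch below uses standard Ethernet framing overhead figures; the modest gain in payload efficiency matters less than the roughly sixfold drop in packets (and therefore interrupts) per megabyte moved.

    """Rough estimate of iSCSI payload capacity on Gigabit Ethernet at
    standard vs. jumbo MTU. Overhead constants are the usual Ethernet
    framing figures; iSCSI PDU headers are ignored for simplicity."""

    GIGE_BYTES_PER_SEC = 125_000_000   # 1Gbps line rate
    ETHERNET_OVERHEAD = 38             # header + FCS + preamble + interframe gap
    IP_TCP_HEADERS = 40                # 20-byte IP + 20-byte TCP, no options

    def payload_rate(mtu):
        payload = mtu - IP_TCP_HEADERS       # approximate data bytes per frame
        on_wire = mtu + ETHERNET_OVERHEAD    # bytes the frame occupies on the wire
        return GIGE_BYTES_PER_SEC * payload / on_wire

    for mtu in (1500, 9000):
        frames = GIGE_BYTES_PER_SEC / (mtu + ETHERNET_OVERHEAD)
        print(f"MTU {mtu}: ~{payload_rate(mtu) / 1e6:.0f} MBps payload, "
              f"~{frames:,.0f} frames/sec")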

The six products I tested show the current spectrum of iSCSI solutions available on the market today. The Adaptec, Celeros, and Rasilient units represent the lower end in price and functionality, whereas EqualLogic, Intransa, and NetApp bring more enterprise features to their products at additional cost. In addition to providing iSCSI services, these products also present a relatively wide variety of storage features, such as replication, clustering, snapshotting, and supported RAID levels. Adaptec and NetApp even include NFS and CIFS file sharing.

Notably absent is EMC, which expressed interest but ultimately declined to participate. The poor performance turned in by the Dell/EMC AX100i might have something to do with that, but I would have liked to have given the EMC CX300i a run through the lab.

My evaluation focused on configurability, storage management features, and iSCSI performance. I ran a battery of performance tests under both Linux and Windows, using software initiators as well as iSCSI HBAs from Alacritech and QLogic. I tested throughput and I/O using various block sizes, running random split tests (50/50 reads and writes) as a harsh measure of general-purpose performance and streaming read and write tests to get a feel for maximum raw throughput.

Because the 4KB tests are relative indicators of many real applications, such as Microsoft Exchange, I've included those results here; the results of tests for 8KB, 32KB, and 256KB block sizes are included in the online version of this article. The accompanying charts show the numbers gathered from Iometer tests on Red Hat Enterprise Linux 4 with the linux-iscsi software initiator. Given the sector offset issues that cropped up during Windows testing, the Linux results are the best baseline numbers available for all units.
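
As a reading aid for the charts, note that IOPS and MBps are two views of the same measurement: throughput is simply the I/O rate multiplied by the block size. The sketch below shows the conversion for the block sizes used in the tests; the IOPS figures in it are invented placeholders, not results from any of the six arrays.

    """Convert Iometer operations-per-second figures into MBps for the
    block sizes used in the tests. Sample IOPS values are placeholders."""

    BLOCK_SIZES = {"4KB": 4 * 1024, "8KB": 8 * 1024,
                   "32KB": 32 * 1024, "256KB": 256 * 1024}

    def mbps(iops, block_bytes):
        return iops * block_bytes / (1024 * 1024)

    # A hypothetical 4KB 50/50 random split result of 5,000 IOPS:
    print(f"4KB @ 5,000 IOPS = {mbps(5_000, BLOCK_SIZES['4KB']):6.1f} MBps")
    # A hypothetical 256KB streaming read of 400 IOPS:
    print(f"256KB @ 400 IOPS = {mbps(400, BLOCK_SIZES['256KB']):6.1f} MBps")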

Adaptec Snap Server 18000

Adaptec's Snap Server 18000 will appeal to the lower-budget storage buyer who wants NAS-plus-SAN functionality without the costs or complexity of a Network Appliance filer. In fact, the Snap Server speaks even more network file-sharing protocols than the NetApp unit, supporting FTP and Apple file sharing via AFP (Apple Filing Protocol) in addition to NFS and CIFS shares.

The Snap Server 18000 is built on Linux, with a standard mainboard and PC connections at the back of the unit. The system cannot be configured via a KVM console, however; instead, it requests a DHCP address on first boot, and the remainder of the configuration is performed via the Web GUI. The LCD panel on the front helpfully shows the IP address assigned to the primary NIC to assist in initial configuration.

The Web GUI is very lean -- the most spartan of all the arrays in the test -- and functional, but it gives the impression that iSCSI support was rather hastily added to the appliance. For example, although you get support for iSNS (Internet Storage Name Service) servers and CHAP (Challenge Handshake Authentication Protocol) authentication, there is no way to restrict iSCSI volumes to specific hosts via IQN (iSCSI Qualified Name).

Internally, the Snap Server 18000 sports dual 3GHz CPUs and 2GB of RAM. The storage is limited to eight internal disks, but additional disk arrays can be connected via FC to expand the capacity of the appliance. The SATA drives hang off an internal SATA controller that does not support hardware RAID. Instead, RAID is supplied by Linux software RAID, which is quite capable but demands significant processing resources -- hence the dual CPUs.
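
The appliance's firmware handles the details for you, but the underlying arrangement is the same software-RAID (md) machinery available on any stock Linux box. As a hedged illustration, assuming hypothetical device names and a RAID 5 layout rather than the Snap Server's actual internal configuration:

    """Sketch: creating a Linux software-RAID (md) array of the sort the
    Snap Server manages internally. Device names and geometry are
    illustrative placeholders only."""
    import subprocess

    DISKS = [f"/dev/sd{c}" for c in "bcdefghi"]   # eight hypothetical SATA drives

    # RAID 5 across the eight members; parity is computed by the host CPU,
    # which is why software RAID leans on processor resources.
    subprocess.check_call(
        ["mdadm", "--create", "/dev/md0", "--level=5",
         f"--raid-devices={len(DISKS)}"] + DISKS)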

Like the Celeros EzSAN, the Snap Server 18000 provides no redundancy features and cannot be clustered with another appliance to provide fail-over. Snapshots are limited to read-only. On the plus side, you can expand the array by adding a disk shelf via FC.

The Snap Server is the Fiat to the NetApp's Ferrari. It's definitely suitable for use in smaller shops looking to provide CIFS or NFS services to a network -- and possibly even as storage for Exchange or in disk-to-disk-to-tape backup scenarios -- but its usefulness as a resilient iSCSI target is limited. Nevertheless, the nice price makes this SAN a good fit for smaller budgets that need big storage.

Celeros EzSAN XR23

The EzSAN XR23 is a 2U appliance that appears to be a regular server chassis with 12 SATA drives in hot-swap cages in the front, mounted behind a black bezel. On the back, you'll find a standard mainboard connector panel with keyboard, mouse, and USB ports, as well as two copper Gigabit Ethernet ports. Thus, the EzSAN is really a standard server with an embedded NetBSD OS, 12 drives, and a 3Ware 9500S-12 SATA RAID controller. Of course, that's one of the reasons it's the lowest-cost array in the test.

Setup of the EzSAN was as straightforward as any of the others, except that initial access is through a standard KVM connection to the appliance rather than a serial console. After booting, you enter a quick IP configuration and then complete the setup via the Web GUI.

The GUI and the OS are licensed from Wasabi Systems, which provides a PHP-driven Web interface to configure every aspect of the appliance. The interface is clean and intuitive and provides for all the expected features: CHAP authentication, initiator IQN LUN (logical unit number) assignment, volume creation, system status, and so forth. The RAID controller supports RAID levels 0, 1, 5, 10, and 50, and it's backed by battery power, but the system doesn't approach the EqualLogic or NetApp appliances in terms of resiliency. What's also missing is any form of controller redundancy, whether within the unit or via a redundant appliance.

That said, Celeros' motto of bringing sanity to storage costs does ring true. The EzSAN held its own in the performance tests when used with software initiators or hardware iSCSI accelerators on Windows. Unfortunately, the appliance didn't fare well with the QLogic QLA4010 iSCSI HBA on Linux -- the tests seemed to tickle an interoperability bug, and every I/O test using the HBA failed. The EzSAN wasn't alone here; the Rasilient Rastor 4000 also hit a snag with the QLogic HBA.

Overall, the Celeros hardware is solid, the lines are clean, and the solution is powerful, if not very scalable. Snapshot support is sorely lacking, and the absence of redundancy beyond power supplies is a concern. The sub-$8,000 cost for 3TB of iSCSI storage, however, is a strong argument. For what it aims to be, the EzSAN XR23 delivers.

EqualLogic PS200E

EqualLogic has obviously poured much effort into its PS200E. I received two units, each boasting 5TB of raw storage laid across 14 400GB SATA drives.

The PS200E is a no-nonsense array. Instead of a glowing LCD panel or fancy front bezel, it sports a low-key face highlighted by disk access and array health LEDs. The redundant controllers set in the back of the chassis each contain three gigabit NICs with both copper and SFP (Small Form-factor Pluggable) fiber connections. The unit runs NetBSD and boots quickly to a console-based initial configuration script that provides addressing for the NICs and defines a storage group to which the controllers are assigned. In EqualLogic's scheme, all storage arrays are organized into logical groups. This abstraction provides a smooth way to cluster arrays for management and redundancy purposes.

After the controllers had been configured on the network, all further administration was handled by the Java-based Web GUI. I found the UI to be well-organized and quite versatile, although I did run into problems related to the JRE (Java Runtime Environment) version on a few workstations. I settled on a revision of 1.4.2 that seemed to play nicely and created my volumes. As with most iSCSI targets, each volume can be assigned access rights to permit only certain initiators to connect and mount it. The iSCSI standard calls for the use of CHAP, which provides a modicum of initiator authentication, and the PS200E handles that without an issue. Initiators can also be assigned to LUNs by mapping the initiator IQN to the LUN; volume presentation is then determined by the requesting initiator's IQN. There is no means of grouping or aliasing IQNs, however, which can get tedious when working with several servers.
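
To illustrate the bookkeeping that lack of IQN grouping implies, here's a small sketch. The initiator and volume names are hypothetical, and this is not the EqualLogic interface -- just the cross product of entries an administrator ends up maintaining by hand:

    """Sketch: without IQN grouping or aliasing, every volume needs an
    explicit access entry for every server's initiator. All names below
    are hypothetical."""

    exchange_servers = [
        "iqn.1991-05.com.microsoft:exch01.example.com",
        "iqn.1991-05.com.microsoft:exch02.example.com",
    ]
    volumes = ["exch-db1", "exch-db2", "exch-logs"]

    # The access list is the full cross product of volumes and initiators.
    access_entries = [(vol, iqn) for vol in volumes for iqn in exchange_servers]
    for vol, iqn in access_entries:
        print(f"volume {vol}: allow initiator {iqn}")
    print(f"{len(access_entries)} entries to maintain by hand")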

I configured the PS200E for performance, running the array at RAID 10 with two hot spares. RAID 10 stripes data across mirrored pairs of drives, providing better performance than RAID 5 while maintaining redundancy via the mirrors. The downside is that only 50 percent of the raw capacity of the array is usable. But with SATA drives reaching 500GB per spindle, this isn't the constraint it once was.
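
The capacity arithmetic for this configuration is quick to check; a sketch, using the drive counts described above:

    """Usable capacity for the RAID 10 layout described above: 14 drives
    of 400GB each, with two held back as hot spares."""

    DRIVES, DRIVE_GB, HOT_SPARES = 14, 400, 2

    in_raid = DRIVES - HOT_SPARES            # 12 drives in the RAID 10 set
    usable_gb = (in_raid // 2) * DRIVE_GB    # mirroring halves the capacity
    print(f"{in_raid} drives in RAID 10 -> roughly {usable_gb} GB usable")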

In performance tests, the PS200E led the field, claiming the highest marks in the raw single-threaded read tests and showing a superlative 101MBps 256KB streaming write throughput with the Alacritech iSCSI accelerator on Windows. Interestingly, the PS200E also responded well to the QLogic HBA, posting the best file creation and deletion times -- especially impressive considering the HBA's lack of jumbo frame support. Overall, the EqualLogic PS200E posted the best raw iSCSI performance numbers in the test.

When I built the second PS200E, I initially created a completely separate group for it and configured replication between the arrays. This is extremely simple to do, and the PS200Es will do a block-level synchronization of volumes at scheduled intervals or when manually triggered. The controllers provide no bandwidth shaping, however, so factor that into your plans if you're replicating over WAN links.
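
Because the array will happily consume the full line rate during a sync, it's worth doing the WAN math before scheduling replication. A quick sketch, using a hypothetical nightly delta and a few common link speeds:

    """Sketch: time to replicate a given block-level delta over various WAN
    links at full line rate. The delta size and link list are hypothetical
    planning inputs, not measured replication figures."""

    changed_gb = 20                                        # hypothetical nightly delta
    links_mbps = {"T3 (45Mbps)": 45, "100Mbps Ethernet": 100, "OC-3 (155Mbps)": 155}

    changed_bits = changed_gb * 8 * 1e9
    for name, mbps in links_mbps.items():
        hours = changed_bits / (mbps * 1e6) / 3600
        print(f"{changed_gb}GB delta over {name}: ~{hours:.1f} hours at full line rate")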

After resetting the second PS200E to factory defaults, I joined it to the original array group -- again, very simple -- and was able to manage both arrays from the group UI. When joined, the two units immediately reallocated volumes between them for better load balancing -- a very nice touch. The downside is that a failure of one group member can affect volumes spread across both units, bringing everything to a halt.

When you add arrays to an EqualLogic storage group, you not only add disk, you add controllers. Each array's active controller has three Gigabit Ethernet interfaces and actively load-balances server requests across them. The interfaces on the dormant controller can serve as fail-over interfaces as well. Thus, as you add more disk, you also add more network capacity -- another nice feature.

One issue I did have with the PS200E involved the Microsoft iSCSI initiator. In fact, this problem also affected the Intransa IP5500, as the PS200E and IP5500 are the only solutions in the test that use iSCSI redirection to achieve load balancing. At certain times, the iSCSI volumes would simply refuse to mount or would abruptly disconnect during heavy I/O. The crux of the issue was related to the Alacritech iSCSI accelerator cards used on the Windows server in conjunction with Version 1.6 of the Microsoft initiator. Updating the Alacritech drivers to the most recent version and moving to Version 2.0 of the Microsoft initiator resolved these issues.

Overall, the PS200E is a well-designed and well-executed SAN, providing not only a significant bang for the storage buck but also a simple, powerful interface for storage management. Delivering good performance, redundancy, and scalability, this solution is definitely enterprise-ready.

Intransa IP5500

Intransa's IP5500 is unique among the units in the test. Whereas the other solutions either embed the controllers within the disk shelf or use Fibre Channel to connect external disk shelves, the IP5500 uses Gigabit Ethernet to connect the controllers to the physical disks.
