Six iSCSI SANs unleashed

Adaptec, Celeros, EqualLogic, Intransa, NetApp, and Rasilient move our megabytes

Each IP5500 disk array is a 3U shelf containing 16 SATA drives. Each shelf has eight Gigabit Ethernet connections spread across redundant controllers at the rear of the disk chassis, and each copper Gigabit Ethernet port connects to a standard gigabit switch. The iSCSI controllers are branded 1U servers that run without hard drives or redundant power supplies, booting a customized Linux kernel from flash and connecting to the "disk" network via a dual-port Gigabit Ethernet PCI card at the rear of the unit. One embedded gigabit NIC links to the iSCSI network, and the other to the management network. The management network and the iSCSI network must be two separate networks; they cannot coexist on the same subnet, and the disk network is another subnet entirely. Each disk in the disk array is assigned an IP address via DHCP from the master iSCSI controller during boot, with the controllers acting as DHCP servers for a configurable private subnet. Thus, each disk can operate as a separate entity within the overall solution.
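
For illustration only, and with subnets chosen purely for the example, a minimal addressing plan for such an installation might look like this:

    Management network:  192.168.10.0/24  (second embedded NIC on each controller)
    iSCSI network:       192.168.20.0/24  (first embedded NIC; application servers attach here)
    Disk network:        10.10.0.0/16     (dual-port PCI cards; the master controller hands each drive an address via DHCP)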

Of course, with all these network connections, more switching is needed. Intransa recommends that a dedicated switch -- ideally a pair of switches -- be allocated to the disk network. These switches should support jumbo frames, and they should have spanning tree disabled to permit rapid convergence during power-up. Obviously, some spanning-tree modifications will be necessary to permit redundant disk network switches, adding complexity. Also, the connections between the iSCSI controller and the entire 4TB disk array are limited to 2Gbps, or a nominal maximum of 240MBps. Given the lack of redundancy on each controller, you'd be smart to implement at least two controllers.
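
As a sketch only, assuming Cisco IOS-style switches (other vendors use different syntax, and the VLAN number is illustrative), the relevant settings on a dedicated disk-network switch would look roughly like the following. Intransa's advice to disable spanning tree can be approximated either by turning it off for the disk VLAN or, more conservatively, by leaving STP running and enabling PortFast on the disk ports:

    system mtu jumbo 9000
    ! enable jumbo frames switch-wide (many models require a reload for this to take effect)
    no spanning-tree vlan 10
    ! or keep STP and let the disk ports skip the listening/learning delay at power-up:
    interface range GigabitEthernet0/1 - 16
     spanning-tree portfast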

Intransa supports iSCSI controller clustering using the concept of realms. Each realm has a master iSCSI controller and one or more member controllers. Each controller is assigned a physical IP address, and the realm itself is accessible via a virtual IP that is load-balanced among all available members via iSCSI redirection, similar to the EqualLogic approach. Because each controller has only a single gigabit connection to the iSCSI network, this could create a bottleneck when serving multiple iSCSI-enabled servers.

Intransa claims that as many as six controllers may exist in a realm at a time; my tests were conducted on a pair of iSCSI controllers connected to a pair of 4TB disk arrays. Intransa's approach to disk connectivity is definitely different, but it also leverages the connectionless basis of IP networking to permit dynamic capacity increases. Adding another disk array to a realm happens on the fly: simply cable and power up the new array, and the new storage is detected and seamlessly added to the realm, with no downtime required.

The Intransa arrays are large in raw disk capacity, but because RAID 5 is not supported, the true capacity works out to be significantly less. RAID 0, 1, and 10 are the only supported RAID levels, and you can choose among them on a per-volume basis; mirroring cuts the 4TB of raw disk in each array to 2TB of usable space. Only portions of the available disks need be consumed by a volume. For instance, creating a new 250GB volume with default settings will use roughly half of four 250GB drives (the mirrored volume amounts to 500GB of raw data, or about 125GB per spindle), placing the volume data on the outside sectors of each disk to maximize performance. This provides RAID 10 mirroring of a striped volume but uses only two spindles in each stripe. To overcome this limitation, Intransa has developed a policy-based disk allocation method. Available only through the CLI at the moment, this is a powerful tool, enabling admins to create specific policies governing disk allocation; an example would be a policy that maximizes database performance by requiring a minimum of eight disks and a specific stripe size. The custom policy can then be selected when creating a new volume.
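
The policy syntax itself is specific to Intransa's CLI, so the following is purely hypothetical, meant only to convey the shape of such a policy rather than the actual commands:

    # hypothetical syntax, not Intransa's actual CLI
    create policy db-fast   min-disks=8   stripe-size=64k   raid=10
    create volume oradata   size=500g     policy=db-fast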

The GUI is Java-based but is not delivered via browser; it must be installed on a Windows system. The UI is clean and relatively intuitive, but it lacks some human touches. For instance, controllers are referred to by "module" names made up of alpha characters and a MAC address, which makes referencing specific controllers needlessly difficult.

In the lab, I found performance under Windows to be truly abysmal, although the Linux numbers were quite respectable. The disparity stems from the fact that, by default, Windows starts primary partitions at sector 63, leaving them misaligned with the array's block boundaries. By using Windows DiskPart to align the partition on a 4KB boundary, I was able to give the IP5500 a significant speed boost, making partition alignment a required step for any volume created under Windows. This modification, however, introduced a problem with the Alacritech iSCSI accelerator, causing abrupt volume log-offs that proved elusive to remedy.
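
For reference, the alignment fix uses the standard DiskPart utility, and it must be applied when the partition is created, before the volume is formatted. A minimal sketch of the procedure, with an illustrative disk number and the align value given in kilobytes (align=4 corresponds to a 4KB boundary):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> create partition primary align=4
    DISKPART> exit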

Intransa has taken a truly unique approach to iSCSI storage, building some impressive features on IP concepts. The lack of redundant power in the controllers is a concern, as is the limited bandwidth available to each controller, but overall, the solution is elegant.

NetApp FAS3020c

Network Appliance is an old hand at big storage. Before the advent of the SAN, NetApp made its name on true storage appliances, providing NFS and SMB (Server Message Block) file sharing to the network in stand-alone systems. Since moving into the SAN arena, the company hasn't forgotten its roots. The NetApp FAS3020c builds on this heritage by providing NFS and CIFS access to storage volumes via NIS (Network Information Service) and Active Directory integration and by serving as an iSCSI target. In this way, it's possible to handle all your desktop file-sharing needs and server-centric storage requirements from a single box, removing the need for a separate file server. This capability is quite worthwhile for many shops, and it proves to be a strong selling point. My tests, however, focused on the iSCSI functions alone.

The FAS3020c consists of a 3U controller set with eight NICs and redundant power, plus a 3U Fibre Channel-attached disk shelf. Multiple shelves can be connected to a single controller to scale out the array, but even at its smallest, the system will consume 6U. The hardware is refined and even attractive, with a central backlit LCD panel providing instant status information and I/O-per-second statistics.

Configuring the FAS3020c was simple and straightforward, requiring a quick pass through the console for the initial setup and then some work in the FilerView Web application to complete the installation. FilerView is a standard Web front end; it ran well under every browser I could throw at it. The interface takes a wizard-based approach that can get in the way when you're making major changes to the configuration, but it does help prevent you from losing your way or misconfiguring the SAN.

The FAS3020c uses the concept of volume aggregation. Essentially, the array is configured as a single raw volume that can then be divided into smaller volumes to present to servers, to share via NFS or CIFS, or both. This abstraction layer makes dynamic volume growth simpler to manage. Like most of the other products reviewed here, the FAS3020c provides both CHAP and IQN access controls to ensure proper initiator control, with FilerView providing a relatively straightforward method of defining initiator groups that are then assigned to LUNs for volume presentation. This grouping method is found on many other SAN arrays, and it makes administration much simpler. All in all, it took about 20 minutes to get from box to bytes.
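
FilerView drives these steps graphically, but the same operations map onto Data ONTAP console commands. A rough sketch, with the group name, initiator IQN, LUN path, and size as placeholders: the first command defines an iSCSI initiator group, the second carves a LUN out of a volume, and the third presents the LUN to the group at LUN ID 0.

    igroup create -i -t windows exch_servers iqn.1991-05.com.microsoft:mailhost
    lun create -s 200g -t windows /vol/vol1/exchdata
    lun map /vol/vol1/exchdata exch_servers 0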

The raw performance of the FAS3020c didn't match up to my expectations. It definitely has punch, but I couldn't push it much past 60MBps in the read tests, and writes dropped below that. This is likely due to the solution's dual-dialect nature, serving both file and block protocols from the same controllers, but I'm reasonably sure that some tweaking could improve those numbers.

The clustering capabilities of the FAS3020c are something to behold. Of all the redundant solutions in this test, NetApp's was by far the most complex, requiring specific Fibre Channel loop wiring between the controllers and disk shelves and two massive clustering cables that connect the controllers. Maybe every vendor should go to such lengths: The result was completely seamless fail-over, with the test unit accomplishing a full takeover of a failed controller without a hitch.

The replication features in the FAS3020c offer the ability to replicate volumes to other arrays on an immediate or scheduled basis, functioning almost exactly like a Unix cron job.
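
The replication engine behind this is NetApp's SnapMirror, whose schedule file uses four trailing fields read as minute, hour, day of month, and day of week, much like a crontab entry. Assuming that format, and with placeholder filer and volume names, a nightly 11 p.m. replication would be expressed roughly as:

    fas-primary:vol1   fas-dr:vol1_mirror   -   0 23 * *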

Network Appliance has done a good job of integrating iSCSI into its seasoned filer line. The FAS3020c is full-featured and rock-solid, and NetApp's support is the stuff of legend, with reports of customers receiving replacement disks before they even knew that a disk in their filer had gone bad. Although it didn't post the best numbers in my performance tests, the FAS3020c is hard to beat.

Rasilient Rastor 4000

Rasilient's Rastor 4000 is a 3U, 15-spindle storage array incorporating redundant controllers within the chassis. In this way, it resembles the EqualLogic PS200E, but the comparison doesn't go much further than that. When I unpacked the Rastor 4000, I immediately thought that the hot-swap trays were too flimsy. The release handles are made from thin plastic, and I feared that they might break during the seating or removal of a drive. The construction tolerances in the chassis itself also leave something to be desired: Drives don't always line up with their companions, leaving the array looking somewhat snaggletoothed. Looks aren't everything, however.

The Rastor 4000 is based on a custom Linux kernel, a trait shared by several units in the test, and it incorporates two separate controllers in the chassis. Each controller is built around a Pentium 4-based mainboard with a gigabyte of RAM and boots from flash.

As with the other arrays, a brief console session to establish IP address information on the controllers led me to the Web GUI. The interface is notable for its simplicity, but it's not as intuitive as some of the others in the test. Volume creation and host presentation can leave you scratching your head, and CHAP authentication and selected IQN presentation parameters require some digging. There is also an interface for viewing the system status and modifying e-mail addresses that should receive alerts. What's missing is any form of alert-level configuration. It's either on or off, and the Rastor 4000 generates e-mail alerts fairly regularly, which can get annoying -- 18 e-mails are sent every time the system boots. It would be nice to be able to configure alert levels per address.

In the performance testing, the Rastor 4000 held its own through many of the tests, with a solid showing near the middle of the pack, but faltered in the streaming read and write tests under Windows. The same tests under Linux went more smoothly, with the Rastor turning in a solid performance. As with all of the other arrays, tweaking could potentially drive these numbers up.

The Rastor 4000 supports snapshots, but it doesn't offer snapshot allocation settings or the ability to mark snapshots read/write; snapshots are read-only. The Rastor 4000 managed a controller failure well, turning in a sub-30-second fail-over time that was handled smoothly by both Linux and Windows.

In the end, the Rasilient Rastor 4000 is a capable storage array and a fully redundant iSCSI target, but it simply lacks finesse. The Rastor would be more attractive if it were constructed with a little more attention to detail.

On target with iSCSI

Given the cost of big SCSI SAN storage today, and the fact that most infrastructures simply don't require the speed and throughput of a Fibre Channel SAN, making the case for iSCSI storage is simple. SATA drives are more than adequate for most e-mail, database, and file storage applications, and so is the 1Gbps iSCSI transport. The low cost of entry, combined with the ease of integration, makes the SATA-iSCSI combination a no-brainer when compared with even a stand-alone file server. A rack-mount server with six 147GB SCSI disks will generally cost you more than a low-end iSCSI storage array, and it's a less effective way to provide storage to multiple applications.

All of the arrays I tested are capable of providing large storage at the center of an infrastructure, but their performance and resilience will differ wildly, depending on the application. For a general-purpose storage array in a midsize infrastructure, the EqualLogic and NetApp products are excellent choices. Both are feature-packed and polished. The Intransa solution takes the bronze here, but its capacity, resiliency, and throughput are likewise capable of supporting most applications.

The Adaptec, Celeros, and Rasilient solutions match up well for smaller infrastructures where the dollar needs to go farther. The Snap Server 18000 in particular would function well as a small-office or branch-office storage unit, providing NFS and CIFS file sharing in addition to iSCSI disk-to-disk backups. The EzSAN XR23 and Rastor 4000 provide more native capacity than the Snap Server does, and they're better tuned to provide big volumes to smaller networks. The redundancy in the Rastor 4000 gives it an edge over the EzSAN, albeit at twice the price.

InfoWorld Scorecard

                           Scalability  Management  Reliability  Performance  Value  Interoperability  Overall
                           (20%)        (20%)       (20%)        (20%)        (10%)  (10%)             (100%)
Adaptec Snap Server 18000  7.0          7.0         7.0          7.0          7.0    9.0               7.2
Celeros EzSAN XR23         6.0          7.0         7.0          8.0          8.0    8.0               7.2
EqualLogic PS200E          9.0          8.0         9.0          9.0          9.0    9.0               8.8
Intransa IP5500            9.0          8.0         8.0          8.0          8.0    8.0               8.2
NetApp FAS3020c            9.0          8.0         9.0          8.0          8.0    8.0               8.4
Rasilient Rastor 4000      6.0          7.0         8.0          7.0          7.0    9.0               7.2