Multilingual filers shatter storage-standard barriers

Competing file sharers from Adaptec, Celeros, Dell, and NetApp speak iSCSI, NFS, and CIFS

Like a hydra, storage and file-sharing technologies have many heads, and these days they seem to be moving in every direction at once. iSCSI is finally in the limelight, and the push toward virtualization is only heating up that market. Meanwhile, NFS and CIFS, the old warriors of file sharing, aren't going anywhere soon. Fortunately, there are now ways to get to iSCSI through single "multilingual" filer packages.

Network Appliance once was really the only player in this space. The company's products have been speaking NFS and CIFS for ages, while most other storage vendors were off concentrating on FC (Fibre Channel) SANs. Much has changed: Most companies' filer systems now eschew FC for iSCSI on the SAN side, while providing native NFS and CIFS support. For shops that have a little bit of this and a little bit (or a lot) of that, these devices might be just the ticket. Bear in mind, though, the expression "jack of all trades, master of none."

I looked at four products in this space, all priced far below conventional SAN solutions. Surprisingly, Network Appliance was one, with its sub-$10K StoreVault S500. Adaptec's Snap division was also represented in the form of the new Snap Server 650. Dell pitched its PowerVault NX1950, and Celeros delivered its brand-new EzSANFiler XD.

Most of these systems are SATA-based, but some can support SAS and SATA side by side, delivering the speed of SAS where it's needed, and the low cost of SATA where speed isn't the issue. Truth be told, none of these solutions is a speed demon; look to true SAN hardware if you need lightning-quick storage.

I put all four systems through the same series of tests. I ran the NFS gauntlet using IOMeter from a Dell PowerEdge 2950 with two dual-core 3.0GHz Intel Xeon CPUs and 2GB of RAM running Red Hat Enterprise Linux 4. I conducted the iSCSI tests using IOMeter on a Newisys N2100 dual-CPU Opteron server running Windows Server 2003 and Microsoft's iSCSI initiator without any hardware acceleration, and I ran CIFS tests from the Dell 2950 using the smbtorture suite of tests from the Samba project.

The numbers were generally all over the place. It was clear that the default configurations of these devices need to be tweaked to specific workloads to get the best performance. That said, all four did well across all the tests, with the Snap 650 showing the best overall performance and the StoreVault S500 claiming the most consistent performer award.

Unlike SANs, however, you need more from these devices than performance; they need to integrate into existing environments in ways that SANs simply don't. They tie into Active Directory, they need to bind to NIS (network information service) domains, and they need to provide at least a modicum of iSCSI features such as LUN (logical unit number) masking and CHAP (Challenge Handshake Authentication Protocol) authentication. They also need to be simple to install and manage, since it's almost a given that they will be deployed in environments without dedicated storage administrators. In short, they need to do a whole lot for very little.
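To make the CHAP requirement concrete: CHAP is configured on both the filer and the client. On a Linux host running the open-iscsi initiator, the client side of that handshake would look something like the following sketch; the username and secret shown are placeholders, not values from any of these products:

```
# /etc/iscsi/iscsid.conf -- initiator-side CHAP settings (open-iscsi)
# Username and password are hypothetical examples; the secret must
# match the one configured on the filer's iSCSI target.
node.session.auth.authmethod = CHAP
node.session.auth.username = filer-user
node.session.auth.password = example-secret-12
```

The filer's LUN masking then limits which initiator IQNs can even see a given LUN, so the two features work together: masking controls visibility, CHAP controls authentication.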

Adaptec Snap Server 650
Ever since Adaptec bought Snap, it's been anyone's guess whether the company would continue with the product line. The Snap 18000, introduced years ago, was one of the first low-end competitors to Network Appliance, and its Linux-based solution has fared well over the years, but there's been very little rustle from the company until recently. The Snap Server 650 in this test bears little physical resemblance to the 18000, but the management interface is GuardianOS, same as it ever was.

The Snap 650 is a 1U appliance with four SAS drives and an up-front LCD status panel. Since SAS capacities are low and the prices high, SATA shelves can be tacked onto the Snap 650 to provide more storage. They're attached using external SAS/SATA connections, and storage volumes can be spanned across the two arrays. In practice, spanning isn't a good idea, because like disks should be grouped together. The arrangement does offer the benefit of tiered storage within a single unit, however, since high-demand volumes can be placed on the SAS array and lower-priority volumes on the SATA array, all within one box.

I ran all tests against both the SAS and SATA arrays in the Snap 650 to gauge the performance difference between them, and the results make it clear that it would behoove an admin to place SQL databases or a Microsoft Exchange datastore on the SAS side of things, while normal file sharing lives on the SATA end. Although the internal drives in the 650 are SAS, it's possible to add disk shelves with 10K and 15K SAS drives, and 7200 rpm SATA drives.

Integrating the Snap 650 into the network was straightforward, using basic Active Directory and NIS bindings. Shares are created on any volume and can be accessed via CIFS and NFS, as well as FTP, HTTP, and AFP (AppleTalk Filing Protocol). This separates the Snap 650 from the others, which may offer a few of these extra protocols, but not all of them.

ACLs can be managed from the UI or from Windows and POSIX systems, though NFS permissions will be Greek to anyone who's never used a standard NFS server before. There's no sugar-coating -- it's basically like editing /etc/exports. To NFS admins, this is actually a benefit, but to others, it might be a bit of a pain.
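For readers who have never touched /etc/exports, each entry pairs an exported directory with the hosts allowed to mount it and their access options. The paths and networks below are hypothetical, but the format is what you'll face on the Snap 650's NFS side:

```
# /etc/exports -- example entries (paths and networks are hypothetical)
# rw/ro: read-write or read-only; sync: commit writes before replying;
# root_squash: map remote root to an unprivileged user
/shares/engineering  192.168.1.0/24(rw,sync,root_squash)
/shares/archive      backuphost(ro,sync)
```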

Network connectivity is provided by a pair of gigabit NICs that can operate independently or bonded, although the management of a load-balanced configuration can be a bit wonky, with seemingly benign spurious errors during reconfiguration. This didn't cause any problems, but was certainly odd.

On the performance side, the Snap 650 really cooks for a filer in this price range. Armed with two dual-core Opteron CPUs and 2GB of RAM, it's ready for just about any workload. It consistently posted the highest scores in the performance tests, and generally ran circles around the other systems.

GuardianOS has had its problems, and is in need of a refresh, but its basis on Linux and the XFS file system is fairly solid. The Snap 18000 I've had in the lab for a while recently had a hiccup after more than a year of faithful service, and all appeared to be lost. It took some time and some support from Adaptec, but the array was repaired, the file system reconstituted, and all the data eventually recovered.

The Snap Server 650 offers significant expansion, quick setup, and a lot of horsepower for the price, and is hopefully a sign that Adaptec is finally getting serious about its Snap filer line.

Celeros EzSANFiler XD
Celeros isn't exactly a big name in the storage arena, but the company's products continue to impress me. Based on an off-the-shelf SuperMicro chassis, the EzSANFiler XD provides iSCSI, NFS, and CIFS services as well as FTP, secure FTP, and HTTP sharing with a fresh Web-based UI and commodity parts -- there's definitely a benefit to being able to source your own replacement RAID controller when push comes to shove.

Speaking of RAID controllers, the EzSANFiler XD ships with an Adaptec 4805-SAS controller, which can control both SAS and SATA drives in the same chassis. My test unit came with six 137GB SAS drives and four 750GB SATA drives. They can be seated anywhere in the chassis, and even combined into the same RAID array, though that would hardly be a good use of resources. In this fashion, the EzSANFiler XD can provide limited tiered storage much like the Snap Server 650, but within a single box. The Adaptec controller driving the disks is relatively autonomous, however, with all disk-level configuration performed within the card's BIOS, which isn't accessible from the management interface.

One differentiator is that the EzSANFiler XD boots from a flash drive. Unlike the other solutions, which all host their operating systems on one or more of the disks in their arrays, Celeros has placed the entire OS on a flash adapter that plugs directly into an IDE header on the mainboard. This means that no matter what happens to any hard drive, the OS will remain stable. On the other hand, I found that the internal multilane connection to the disk backplane didn't have retention clips, which left the connection alarmingly loose. It's a small bone to pick, but a necessary one.

The EzSANFiler XD didn't perform well initially, especially in write and random testing, but this was largely attributable to the disabled write cache on the RAID controller. Enabling the write cache produced far better results. Production versions of the EzSANFiler XD will ship with a battery-backed RAID controller to mitigate this issue. The system has a single dual-core Intel Xeon 5110 running at 1.6GHz per core and 4GB of RAM, which seems to be sufficient to handle normal operations.

As far as the network goes, the EzSANFiler XD is simply rife with interfaces. Six gigabit NICs were in my test unit, and they could be run independently or bonded with EtherChannel to create multigigabit pipes. Network I/O isn't a problem on this system.
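For comparison, a similar bonded configuration on a stock Linux server of that era is built by stacking physical NICs under a bonding master. The sketch below is a hypothetical RHEL 4-style setup using 802.3ad (LACP), the standards-based analog to Cisco's EtherChannel; the device names are examples:

```
# /etc/modprobe.conf -- declare the bonding device (mode 802.3ad = LACP)
alias bond0 bonding
options bonding mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for each slave NIC
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Note that LACP requires matching configuration on the switch side; independent (unbonded) operation needs no switch support at all.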

On the management side, the Celeros management application is Web-based and played quite nicely with every Web browser I tried. It's written in PHP (PHP: Hypertext Preprocessor) and presents a clean interface for managing the appliance. I did have some initial problems with Active Directory integration, but they were resolved after some digging. The unit I received was one of the first, since this is a brand-new product from Celeros. There were some rough edges to be sure, but the overall direction of the product is solid, and the price is very attractive.

Some of the issues I encountered with the EzSANFiler XD are already being addressed by Celeros, as the company plans on releasing a code update quite soon that will add 2Gb and 4Gb Fibre Channel support, as well as integrating hardware RAID management into the Web GUI.

The performance to cost ratio on this system is quite high, and there’s considerable comfort in knowing that the OS running your storage is running on solid-state disk.

Dell PowerVault NX1950

Dell’s entry into this fray is a conglomeration of sorts. If you marry a Dell PowerEdge 1950 to an MD3000 SAS disk array and run Windows Unified Data Storage Server on it, you’ve just built an NX1950. Of course, because the NX1950 is sold as a distinct Dell product, it’s specifically supported in this configuration from Dell, and Dell preinstalls a pile of tools to assist in initial configuration and overall management.

It’s simple to set up, though the defaults are rather odd. Given that the NX1950 in the lab had the smallest raw storage (fifteen 36GB SAS drives) of any of the four systems, carving it into four logical RAID5 drives by default was a bad idea from both an allocation and a performance perspective. Fortunately, it’s easy to fix, and shortly after power-up, all 15 drives were built into a single RAID5 array with one hot spare. These drives live on the MD3000 shelf, which connects to the server via a single external SAS connection. There’s no redundancy there, but both the disk shelf and the server were outfitted with redundant power supplies.

The next step was configuration. The external shelf appears to the Windows server as a normal volume (or volumes), and mounts on a drive letter, as you might expect. Creating and managing Windows shares is exactly as it would be on a normal Windows server, but creating and managing iSCSI targets is a different matter. Using the Windows storage management plug-in for MMC (Microsoft Management Console), it’s easy to create targets and assign virtual disks to those targets. LUN masking is relatively straightforward, with the basic Windows tabular approach to setting properties to specific objects. Creating virtual disks for iSCSI LUNs is the work of a few mouse clicks, and that’s about the size of it. It really is simple. The NFS side, however, isn’t.

Microsoft released SFU (Services for UNIX) 3.5 a while back, and with it expanded support for NIS as both a client and a server. NFS support was also bolstered, at least as much as it could be given that the underlying file system isn't POSIX compliant. These two services are generally bound together to provide authentication and ACLs (access control lists) on file systems, and thus are absolutely necessary to a successful Windows Storage Server deployment.
