Product review: Sun's StorageTek Honeycomb is sticky and sweet

Innovative, scalable storage system meets the special needs of "fixed content" archiving with a cellular architecture, easy management, strong performance, and extraordinary resilience

Most storage solutions are optimized for fast access and frequent updates, a formula that fits the requirements of transactional applications to a tee but isn't necessarily well suited to archiving files that, whether by law, policy, or practice, either must not or will not be changed.

As a sweeping simplification, this "fixed content" (legal documents, financial data, engineering diagrams, medical images, audio files, and video files) is ready for archiving the instant it's created. Using a tiered storage approach to managing it makes little sense. Moreover, these files typically produce large archives that must be maintained for a long time, which makes the conventional approach to data protection, namely tape backup, difficult and expensive to implement.

As a result, storage vendors have begun supporting fixed-content archiving with specialized lines of products, separate from their transactional offerings, and have put a great deal of emphasis on reliability and simplified administration in these products.

Sun's StorageTek 5800, a.k.a. Honeycomb, addresses fixed-content archiving needs with a resilient, cell-based solution that scales from 8 to 16 nodes per cell (half a rack), to 32 nodes (two cells) in a single rack, and beyond by adding more cells.

Sun has taken a different approach to companion software than vendors such as EMC, Hitachi, and HP, which have married their fixed-content archiving solutions to compliance applications. (Like Honeycomb, the HP solution is based on cells; I reviewed the debut version, which focused on e-mail archiving, in early 2006. Since then, HP has rounded out the application offerings.) Sun has not wedded Honeycomb to any specific application, leaving that task to partners and customers.

The upside of Honeycomb's openness is that the possibilities are endless. Honeycomb's powerful, built-in administrative software is complemented by an SDK that allows Java or C developers to define their own metadata schemas consistent with the specifics of their application. Recently Sun made the Honeycomb software publicly available at OpenSolaris.org, as StorageTek 5800 Open Edition, under the BSD license. A software emulator of the ST5800, for running applications built with the SDK, is also available for download.

Inside the Honeycomb
For logistical reasons, I conducted my evaluation of the ST5800 at one of Sun's labs in Colorado. My test unit was a fully populated, 16-node cell connected to three client machines running Sun Solaris, Red Hat Enterprise Linux, and Windows Server 2003.

Each Honeycomb node is essentially a server running Solaris 10 and the ST5800 application software. Each server mounts four 500GB SATA drives and connects via redundant links to two GbE switches. The redundant switches are integral components of the cell and, of course, provide protection against a failure of either one.

Of the 16 nodes, one serves as the elected master and coordinates the activities of the others, but the system can quickly and automatically promote another node to replace a failed master. That is just one example of the reliability features built into the ST5800 (more on this later).

The ST5800 has a simplified administrative interface that can be accessed, for example, via an SSH connection. The whole CLI boils down to fewer than 20 commands (19, by my count), which cover setting the configuration of a cell, monitoring the physical health of the system, displaying I/O statistics from performance counters, and performing basic tasks such as rebooting, changing passwords, and setting the date and time.

Compared with other storage management software I have worked with, the ST5800's commands are both intuitive and powerful. For example, typing "sysstat" produces a concise status summary in just a few lines, from which I learned that all 16 nodes and all 64 drives of my test cell were working properly and ready to go.

Honeycomb also has a management GUI, which I used only because I had an obligation to report on its features. I suspect that in normal daily operations, no admin will need or want to go beyond the CLI.

It's important to understand what the ST5800 does and doesn't allow. In essence, you have a restricted set of I/O operations that let you store, retrieve, or delete storage objects, but never update them in place. Complementing those operations, you can attach data-specific metadata that the system automatically indexes to speed up queries.
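The SDK exposes these operations to Java and C clients. Here is a rough sketch of a store-and-query round trip in Java; the class and method names approximate the ST5800 client API from memory and should be read as assumptions, not as verbatim SDK calls:

    import java.io.FileInputStream;
    import java.nio.channels.Channels;

    // Hedged sketch of a Honeycomb client session. NameValueObjectArchive
    // and friends approximate the ST5800 Java SDK; treat every name and
    // signature here as an assumption rather than documented API.
    public class ArchiveExample {
        public static void main(String[] args) throws Exception {
            // Connect to the cell's data address (hostname is hypothetical)
            NameValueObjectArchive archive =
                    new NameValueObjectArchive("hc-data.example.com");

            // Attach metadata the cell will index for later queries;
            // the field names belong to a hypothetical schema
            NameValueRecord metadata = archive.createRecord();
            metadata.put("mrn", "12-3456");
            metadata.put("modality", "xray");

            // Store: objects are written once and never updated in place
            SystemRecord stored = archive.storeObject(
                    Channels.newChannel(new FileInputStream("chest.dcm")),
                    metadata);
            System.out.println("Stored: " + stored.getObjectIdentifier());

            // Query: find every X-ray archived for this patient
            QueryResultSet results =
                    archive.query("mrn='12-3456' AND modality='xray'", 100);
            while (results.next()) {
                System.out.println(results.getObjectIdentifier());
            }
        }
    }

Note that there is no update call anywhere: if a document changes, you store a new object and, policy permitting, delete the old one.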

In my evaluation, I used predefined schemas and focused mostly on the performance, reliability, and management characteristics of the system. The aforementioned SDK and emulator not only allow customers to assess beforehand how they might use the system, but also have the potential to extend dramatically the variety of objects that the ST5800 can support.

How Honeycomb stores objects is one of the secrets to its reliability and persistence. Whether the object is an X-ray image, a business contract, or any other piece of data that is unique, immutable, and eligible for archiving, the ST5800 automatically splits it into multiple, distinct fragments and calculates two parity fragments. Each fragment is stored in a different node, which makes for very low vulnerability even to multiple hardware failures.
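Sun doesn't spell out its placement algorithm, but the principle is easy to illustrate. The sketch below is my own construction, not Sun's code; the five-data-fragment split is an assumption (the review confirms only the two parity fragments), and the real placement logic is surely smarter than a round-robin walk:

    // Conceptual only: spread an object's fragments across distinct nodes
    // so that no node holds two fragments of the same object. The 5+2
    // split and the round-robin walk are illustrative assumptions.
    public class FragmentPlacement {
        public static void main(String[] args) {
            int nodes = 16;            // nodes in a fully populated cell
            int dataFragments = 5;     // assumed
            int parityFragments = 2;   // per the review
            int total = dataFragments + parityFragments;

            // Derive a starting node from the object's identifier, then
            // step to a new node for each successive fragment
            int start = Math.abs("example-object-id".hashCode()) % nodes;
            for (int f = 0; f < total; f++) {
                String kind = (f < dataFragments) ? "data" : "parity";
                System.out.printf("fragment %d (%s) -> node %d%n",
                        f, kind, (start + f) % nodes);
            }
        }
    }

Because any object can be rebuilt from a subset of its fragments, losing a drive, or an entire node, costs the cell redundancy rather than data.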

Having objects spread across multiple nodes and spindles also favors fast performance and quick rebuilds after failure. To further ensure data reliability, Honeycomb maintains an ongoing scan of its repositories to detect and correct possible bit rot.
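A scrubber of that kind has a simple core: recompute each fragment's checksum and compare it with the one recorded at write time. The sketch below is illustrative only; the sidecar-file layout is invented, and Honeycomb's internals may differ entirely:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Conceptual bit-rot check: a mismatch between the fresh digest and
    // the digest recorded at write time means silent corruption. The
    // ".sha256" sidecar layout is invented for illustration.
    public class Scrubber {
        static boolean healthy(Path fragment) throws Exception {
            byte[] bytes = Files.readAllBytes(fragment);
            String fresh = HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(bytes));
            String recorded = Files.readString(
                    Path.of(fragment + ".sha256")).trim();
            return fresh.equals(recorded);
        }

        public static void main(String[] args) throws Exception {
            Path fragment = Path.of(args[0]);
            if (!healthy(fragment)) {
                // A real system would rebuild the fragment from its
                // peers and parity rather than merely report the rot
                System.out.println("bit rot detected: " + fragment);
            }
        }
    }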

Stirring the honeypot
I was understandably eager to put these promises to the test. My test plan didn't include creating new schemas; I used structures that were already defined in the test system. Conceptually, a Honeycomb schema boils down to a set of named, typed metadata fields, as sketched below.
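As a rough illustration only (the ST5800 defines schemas in its own configuration format, and these field names are invented), here is that idea rendered in Java:

    import java.util.Map;

    // Conceptual rendering of a fixed-content schema: named, typed
    // metadata fields that the cell indexes for queries. Field names
    // and types are invented for illustration.
    public class MedicalImageSchema {
        static final Map<String, String> FIELDS = Map.of(
                "mrn", "string",        // patient medical record number
                "modality", "string",   // xray, ct, mri, ...
                "studyDate", "date",
                "sizeBytes", "long");

        public static void main(String[] args) {
            FIELDS.forEach((name, type) ->
                    System.out.println(name + " : " + type));
        }
    }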

Part of the Honeycomb command set is devoted to creating or displaying schemas, but this is an area where the Java-based management GUI proves to be more helpful than the CLI.

To simplify testing, I used a set of scripts. Various parameters allowed me to choose the number of client machines to use during the test and the types of operations to perform, which included storing, reading, or deleting objects, or running a query. One of those parameters was object size, which allowed me to crank up the number of operations per second when using small objects, or to push the limits of the system's transfer rate when working with large objects.
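My scripts aren't reproduced here, but their shape is easy to convey. The driver below reuses the hypothetical client API from the earlier sketch; thread count, operations per thread, and object size are the knobs, just as they were in my tests:

    import java.io.ByteArrayInputStream;
    import java.nio.channels.Channels;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Conceptual load driver. NameValueObjectArchive is the same
    // hypothetical client class sketched earlier; every name here is
    // an assumption, not documented SDK API.
    public class StoreLoad {
        public static void main(String[] args) throws Exception {
            int threads = Integer.parseInt(args[0]);      // e.g. 4
            int opsPerThread = Integer.parseInt(args[1]); // e.g. 1000
            int objectSize = Integer.parseInt(args[2]);   // bytes

            NameValueObjectArchive archive =
                    new NameValueObjectArchive("hc-data.example.com");
            byte[] payload = new byte[objectSize];        // dummy body

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long begin = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < opsPerThread; i++) {
                        try {
                            // assumed overload that stores without metadata
                            archive.storeObject(Channels.newChannel(
                                    new ByteArrayInputStream(payload)));
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);

            double seconds = (System.nanoTime() - begin) / 1e9;
            long totalBytes = (long) threads * opsPerThread * objectSize;
            System.out.printf("%.1f ops/sec, %.1f MBps%n",
                    (threads * opsPerThread) / seconds,
                    totalBytes / seconds / 1_000_000);
        }
    }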

Honeycomb responded with predictable variations in performance when I changed the test conditions from one machine and a single thread to multiple clients and multithreaded runs. For example, when storing or retrieving large objects in multi-user tests 2 and 4, the transfer rate approached the practical throughput limit of GbE (125MBps in theory, less after protocol overhead), reaching roughly 100MBps and 109MBps, respectively. By contrast, the system quickly stored and retrieved a considerable number of small objects when challenged with multiple operations in tests 6 and 8.

The usual caveats apply: These results are just an indication of the workload that a 16-node ST5800 can absorb when challenged with three clients. A different configuration, and different data, would produce different results.

What I feel reasonably sure would remain consistent across different environments are the system's remarkable resilience and persistence. My test plan included abruptly pulling drives, shutting down two nodes, and killing one of the switches to trigger fail-over to the standby unit. In every case, the ST5800 kept on ticking and returned quickly to normal status when the failure was removed.

Perhaps I had the best indication of the system's reliability when, after powering down two nodes and checking that my test script was still running, I decided to pull out another drive. Sun had warned me that if an additional failure occurred after Honeycomb lost 8 of its 64 drives, the system would go into quiescent mode, suspending all activities. Nevertheless, I couldn't resist crossing the line.

Sure enough, as expected, after I disabled one more drive, the cell went offline. The key thing, though, is that it came back online almost immediately after I restored the drive. I then began restoring the two offline nodes, and Honeycomb soon returned to healthy status with all 64 drives and 16 nodes active.

A new breed of storage
Conventional NAS simply isn't designed for long-term archiving. The typical NAS would choke under the load of storing multiple large objects at the same time, and it would die with its third consecutive drive failure. Honeycomb addresses the performance and resilience requirements of content archiving with a new architecture. Unlike plain NAS solutions, and unlike fixed-content archiving solutions built on conventional storage systems (think EMC Centera), it's made for the job.

I was quite impressed by this first look at Honeycomb, but the system I tested is hardly a final version. More features are in the works, including remote replication across cells. Perhaps the most promising and innovative among them is Storage Beans, small programs that will essentially enable each Honeycomb cell to make object-handling decisions based on content.

As for the present, the Sun StorageTek 5800’s good performance, easy management, and incredibly resilient architecture make it a very attractive archiving solution at a price that, although significant, will challenge many competitors.

InfoWorld Scorecard: Sun StorageTek 5800 "Honeycomb"
Scalability (20.0%): 10.0
Value (10.0%): 8.0
Reliability (20.0%): 10.0
Performance (20.0%): 9.0
Interoperability (10.0%): 9.0
Management (20.0%): 9.0
Overall Score: 9.3 (Recommended)