Product review: Sun's StorageTek Honeycomb is sticky and sweet
Innovative, scalable storage system meets the special needs of "fixed content" archiving with a cellular architecture, easy management, strong performance, and extraordinary resilience
To simplify testing, I used a set of scripts. Various parameters allowed me to choose the number of client machines to use during the test and the types of operations to perform, which included storing, reading, or deleting objects, or running a query. One of the parameters was the object size, which allowed me to crank up the number of operations per second when using small objects, or to push the limits of the system's transfer rate when working with large objects.
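A driver of the kind described above might look like the following sketch. Note that the `FakeArchive` class is a stand-in I've invented for illustration, not Honeycomb's actual client API; a real run would swap in the ST5800's store/read/delete calls, and the parameter names (`threads`, `ops`, `object_size`) are assumptions, not the ones used in my scripts.

```python
import threading
import time
import uuid

class FakeArchive:
    """Stand-in for a Honeycomb cell: stores objects keyed by an OID.
    A real test would call the ST5800 client API here instead."""
    def __init__(self):
        self._objects = {}
        self._lock = threading.Lock()

    def store(self, data: bytes) -> str:
        oid = uuid.uuid4().hex          # Honeycomb returns an object ID on store
        with self._lock:
            self._objects[oid] = data
        return oid

    def read(self, oid: str) -> bytes:
        with self._lock:
            return self._objects[oid]

    def delete(self, oid: str) -> None:
        with self._lock:
            del self._objects[oid]

def run_test(archive, threads: int, ops: int, object_size: int) -> float:
    """Each thread stores and reads back `ops` objects of `object_size`
    bytes; returns the aggregate operations per second."""
    def worker():
        payload = b"x" * object_size
        for _ in range(ops):
            oid = archive.store(payload)
            assert archive.read(oid) == payload
    start = time.perf_counter()
    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    elapsed = time.perf_counter() - start
    # Each loop iteration performs two operations: one store and one read.
    return (threads * ops * 2) / elapsed

# Example: 4 client threads, 50 operations each, 1KB objects (small-object case);
# raising object_size instead stresses the transfer rate.
rate = run_test(FakeArchive(), threads=4, ops=50, object_size=1024)
```

Varying `threads` mirrors moving from a single client to multiple concurrent clients, and varying `object_size` shifts the test between an operations-per-second workload and a raw-throughput workload.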
Honeycomb responded with predictable variations in the performance results when I changed the conditions of the test from one machine and a single thread to multiple clients and multithreaded runs. For example, when storing or retrieving large objects in multi-user tests 2 and 4, the transfer rate approached the throughput limit of GbE (at roughly 100MBps and 109MBps, respectively). By contrast, the system quickly stored and retrieved a considerable number of small objects when challenged with multiple operations in tests 6 and 8.
The usual caveats apply: These results are just an indication of the workload that a 16-node ST5800 can absorb when challenged with three clients. A different configuration, and different data, would produce different results.
What I feel reasonably sure would remain consistent across different environments are the system's remarkable resilience and persistence. My test plan included abruptly pulling drives, shutting down two nodes, and killing one of the switches to trigger fail-over to the standby unit. In every case, the ST5800 kept on ticking and returned quickly to normal status when the failure was removed.
Perhaps I had the best indication of the system's reliability when, after powering down two nodes and checking that my test script was still running, I decided to pull out another drive. Sun had warned me that if an additional failure occurred after Honeycomb lost 8 of its 64 drives, the system would go into quiescent mode, suspending all activities. Nevertheless, I couldn't resist crossing the line.
Sure enough, as expected, after I disabled one more drive, the cell went offline. The key thing, though, is that it came back online almost immediately after I restored the drive. I then began restoring the two offline nodes, and Honeycomb soon returned to healthy status with all 64 drives and 16 nodes active.
A new breed of storage
Conventional NAS simply isn't designed for long-term archiving. The typical NAS would choke under the load of storing multiple large objects at the same time, and it would die with its third consecutive drive failure. Honeycomb addresses the performance and resilience requirements of content archiving with a new architecture. Unlike plain NAS solutions -- and fixed-content archiving solutions built on conventional storage systems (think EMC Centera) -- it's made for the job.
I was quite impressed by this first look at Honeycomb, but the system I tested is hardly a final version. More features, such as remote replication across cells, are in the works. Perhaps the most promising and innovative among them is Storage Beans, small programs that will essentially enable each Honeycomb cell to make object-handling decisions based on content.