The general advantages of data deduplication are undeniable, as it is likely the most viable means for achieving significant savings on storage infrastructure and management. Yet choosing the best solution for your enterprise requires homework. More so than with the other technologies discussed here, data deduplication should be test-driven before purchase to assess its actual impact on your company's data assets.
However challenging it may be to choose the optimal offering, forgoing data deduplication altogether is likely the worse mistake: its benefits give competitors who deploy it a measurable advantage over those that don't.
Tiered storage has been essential to daily IT operations since the dawn of computing. Founded on the fact that not all storage media are created equal, the concept involves migrating data to the media that best satisfies business requirements and cost objectives.
The logic behind tiered storage hasn't changed much since the Paleolithic age of computing, when managing tiers was often as easy as loading a file of punch cards to disk, running much faster batch processing against that data, and returning that precious online space to a common pool when the processing was complete.
But the number and variety of storage systems currently available, as well as the amount of information enterprises must now manage, have made tiered storage's inherent benefits -- cost savings and increased responsiveness to business requirements -- even more desirable and perhaps easier to attain.
For example, recent advances in drive technologies have produced SATA devices that favor capacity and offer a cost per gigabyte significantly lower than that of typical high-performance FC, SCSI, or SAS (serial attached SCSI) drives. That said, high-performance drives now offer a blend of capacity and performance, and whereas SATA devices lead with capacities of as much as 1TB and growing for a single unit, high-performance drives have extended their capacities into the range of hundreds of gigabytes.
Based on such advances, storage vendors now offer an unprecedented granularity of storage arrays that range from very dense solutions based on high-capacity SATA drives to spindle-rich systems that provide fast interactive access at considerably higher acquisition and operating costs.
By grouping homogeneous storage media in tiers, companies can store data more efficiently -- for example, maintaining frequently accessed transactional records on the fastest devices and moving older or seldom accessed files to a less expensive tier. As such, tiered storage provides obvious financial benefits, reducing the average cost of data that is parked for longer periods of time and rarely referenced, if at all.
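A tiering policy of the kind described above can be reduced to a simple rule: the longer a file has gone untouched, the cheaper the tier it belongs on. The sketch below illustrates that idea in Python; the tier names and idle-age thresholds are hypothetical, since real policies are dictated by each company's business requirements.

```python
# Hypothetical tiers, ordered cheapest-last; thresholds are illustrative only.
TIERS = [
    ("tier1-fc", 0),      # frequently accessed transactional records
    ("tier2-sata", 30),   # files idle for 30+ days move to dense SATA
    ("tier3-tape", 365),  # files idle a year or more are parked on tape
]

def choose_tier(days_since_access: int) -> str:
    """Pick the cheapest tier whose idle-age threshold the file meets."""
    selected = TIERS[0][0]
    for name, min_idle_days in TIERS:
        if days_since_access >= min_idle_days:
            selected = name
    return selected

print(choose_tier(2))    # tier1-fc
print(choose_tier(45))   # tier2-sata
print(choose_tier(400))  # tier3-tape
```

In practice such a rule would run as a periodic migration job, combining idle age with other criteria such as file type, owner, or regulatory retention class.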
And when it comes to seldom accessed data, the lower acquisition price of SATA systems can be reason enough to move to a tiered storage architecture. According to a recent IDG study, the cost per gigabyte of "capacity optimized" systems is less than half that of "performance optimized" systems, a ratio that seems likely to extend into the future.
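The financial case is straightforward arithmetic. The figures below are assumed for illustration (the study's actual dollar values aren't given here); they simply reflect the less-than-half cost ratio cited above.

```python
# Assumed per-gigabyte prices, chosen only to mirror the 2:1-plus ratio above.
perf_cost_per_gb = 10.0   # performance-optimized tier, $/GB (assumption)
cap_cost_per_gb = 4.0     # capacity-optimized tier, $/GB (assumption)

archive_tb = 50           # seldom-accessed data moved to the cheaper tier
savings = archive_tb * 1000 * (perf_cost_per_gb - cap_cost_per_gb)
print(f"Moving {archive_tb}TB saves ${savings:,.0f} in acquisition cost")
# Moving 50TB saves $300,000 in acquisition cost
```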
Though the actual cost gap between systems can exceed those worldwide averages, acquisition savings are not the only benefit of tiered storage. Purchasing dense devices, for example, can avoid or delay the capital expenditures needed to expand the datacenter.
Although difficult to put a dollar value on, isolating critical tier-1 data from the crowd of less sensitive data is the first step in establishing a more business-conscious storage environment -- likely the most desirable aspect of employing a tiered storage strategy in the enterprise.
In fact, some vendors are now offering "tier 0" devices to create a very fast, memory-based buffer between servers and conventional, disk-based storage. Not to be confused with traditional cache memory, which is embedded within either the application server or the storage device, these tier-0 devices are SSDs (solid state drives) that are dynamically loaded with hot data, and flushed of cold data, to improve the response time of the storage system.
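The loading and flushing of a tier-0 device amounts to a promotion-and-demotion cycle much like a cache. The toy model below sketches that behavior with a simple least-recently-used policy; real arrays use far more elaborate heuristics, and the class and block sizes here are invented for illustration.

```python
from collections import OrderedDict

class Tier0Cache:
    """Toy model of an SSD tier staged in front of disk: blocks that are
    read get promoted into limited flash capacity, and the coldest block
    is demoted back to disk when flash fills up. Illustration only."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.ssd = OrderedDict()   # block id -> data, ordered by recency

    def read(self, block: int, disk: dict):
        if block in self.ssd:                  # hit: serve from flash
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = disk[block]                     # miss: fetch from disk
        self.ssd[block] = data                 # promote the hot block
        if len(self.ssd) > self.capacity:      # demote the coldest block
            self.ssd.popitem(last=False)
        return data

disk = {i: f"block-{i}" for i in range(10)}
cache = Tier0Cache(capacity_blocks=3)
for b in (1, 2, 3, 1, 4):
    cache.read(b, disk)
print(list(cache.ssd))  # [3, 1, 4] -- the three most recently used blocks
```

The point of the sketch is the asymmetry with ordinary tiering: tier-0 placement is driven by moment-to-moment access patterns rather than by scheduled migration jobs.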
Xiotech, for example, recently announced SSDs for its Magnitude 3D 3000 SAN systems. Gear6, a startup recently out of stealth mode, has customers tapping its CacheFX, a RAM-based NFS accelerator.
Obviously, such implementations target a different objective than traditional tiered storage does -- namely, creating a top-performing layer of storage rather than reducing cost. However, even if more expensive, tier-0 solutions respond to the same optimization criteria that suggest moving your data from enterprise storage to high-capacity SATA drives and eventually to tape. Managing those data allocations efficiently is the new challenge that storage admins face.