Not that long ago, properly aligned storage tiers were created by hand, with fixed purposes for fixed applications. This is changing rapidly, as new methods of ensuring data availability, speed, and redundancy are emerging.
Automated tiering has only recently become possible due to advances in processing power, storage types, and bandwidth growth. Add in the cloud, and the situation grows even more fluid.
In this week's New Tech Forum, Dick Benton, principal consultant at GlassHouse Technologies, takes us through the quickly evolving world of storage tiering -- from where we are today to where we're going across all types of storage media. -- Paul Venezia
The sea change in storage tiering philosophy
In the early days of storage tiering, tiers were developed primarily according to key cost differentiators, the first being the cost of additional storage needed to support data protection. That protection was provided by replication, snapshots, backup copies, and RAID configuration, and it was generally assumed that an application requiring tight and expensive recovery would also require good performance.
Vendors cooperated by ensuring that their enterprise-class offerings all supported asynchronous, if not synchronous, replication, along with plenty of ports and strong performance characteristics. Tiers were created in which the highest and most expensive typically supported near-zero data loss, with return to operation well under 24 hours; lower tiers supported increasing data loss tolerance and lengthier recovery times. Many organizations had one set of data loss tolerances and return-to-operation targets for localized partial failures, and a different, often somewhat looser set of attributes for recovery in the event of an entire site failure (aka disaster). Then four things happened.
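The mapping described above -- data loss tolerance (RPO) and return-to-operation target (RTO) determining the tier -- can be sketched as a simple classification. The tier names and thresholds here are illustrative assumptions, not figures from the article:

```python
# Hypothetical sketch: mapping recovery objectives to storage tiers.
# Tier names and thresholds are illustrative assumptions only.

def select_tier(rpo_minutes: float, rto_hours: float) -> str:
    """Pick a storage tier from data loss tolerance (RPO) and
    return-to-operation target (RTO)."""
    if rpo_minutes <= 5 and rto_hours <= 4:
        return "tier-1"   # synchronous replication, near-zero data loss
    if rpo_minutes <= 60 and rto_hours <= 24:
        return "tier-2"   # asynchronous replication, frequent snapshots
    return "tier-3"       # nightly backups, longer recovery window

print(select_tier(0, 2))     # near-zero loss, fast recovery -> tier-1
print(select_tier(240, 48))  # looser, disaster-style targets -> tier-3
```

In practice an organization might maintain one such mapping for localized failures and a second, looser one for full-site disasters, as the paragraph above notes.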
A shake-up in storage tiering
First, the sophisticated replication and snapshot technology, heretofore only available on expensive, enterprise-class storage frames, gradually began to appear in midlevel and low-end frames. Today, there is probably not a storage array on the market that does not support replication and snapshots.
The second game changer was the new capability that allowed data to be striped across the entire array instead of over just the few disks in a SCSI tray. This capability put the importance of RAID configurations on the back burner, as RAID 10 performance could now be achieved through striping across multiple spindles.
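Why wide striping sidesteps the need for RAID 10 can be seen in a toy layout calculation: distributing fixed-size chunks round-robin across every spindle in the array spreads I/O far beyond the handful of disks in one SCSI tray. The chunk counts and spindle counts below are illustrative assumptions:

```python
# Hypothetical sketch: round-robin chunk placement, tray vs. whole array.

def stripe(chunks: int, spindles: int) -> dict[int, list[int]]:
    """Map logical chunk numbers to spindles, round-robin."""
    layout: dict[int, list[int]] = {d: [] for d in range(spindles)}
    for c in range(chunks):
        layout[c % spindles].append(c)
    return layout

# 12 chunks over a 4-disk tray vs. a 24-spindle array:
tray = stripe(12, 4)    # each disk must serve 3 chunks
array = stripe(12, 24)  # each chunk lands on its own spindle
```

With the whole array in play, the per-disk queue shrinks and the aggregate spindle count, rather than the RAID level, dominates performance.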
Next came the advent of solid-state drives (SSDs), along with array software able to automigrate heavy I/O loads from lower- to higher-performance media and back down again as IOPS requirements dropped.
Finally, the fourth was good old Moore's Law, or a derivative of it, which drove down the cost of storage hardware to the point where remote replication to a distant site became economically feasible, if not outright justifiable, as data began growing explosively at 30 percent or more per year. At the same time, rising customer service and compliance expectations demanded higher levels of availability through quick recovery.
These game changers rendered obsolete much of the design philosophy behind the old bundled tiers based on protection and performance. Replication and snapshots became available on high-end, midlevel, and low-end arrays alike. The requirement for mirrored disks to achieve write efficiency was eliminated by the ability to stripe across the frame, and the same striping paradigm improved the ability to restore and recover from a single disk failure.
Performance issues could now be addressed by migration within the storage array across different disk/bus technologies -- from cheap SATA to extraordinarily expensive SSDs. Indeed, devoting just 1 percent of a frame's capacity to SSDs could boost its overall performance significantly.
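The automigration just described amounts to promoting the hottest extents into a small SSD budget and demoting cooling extents back to SATA. A minimal sketch, in which the IOPS threshold, extent names, and SSD slot budget are all illustrative assumptions:

```python
# Hypothetical sketch of automated sub-LUN tiering: the hottest extents are
# promoted to SSD up to a small capacity budget; the rest stay on SATA.

def retier(iops_by_extent: dict[str, int], ssd_slots: int,
           hot_threshold: int = 500) -> dict[str, str]:
    """Assign each extent to 'ssd' or 'sata' based on measured IOPS."""
    ranked = sorted(iops_by_extent, key=iops_by_extent.get, reverse=True)
    placement: dict[str, str] = {}
    for extent in ranked:
        if ssd_slots > 0 and iops_by_extent[extent] >= hot_threshold:
            placement[extent] = "ssd"
            ssd_slots -= 1
        else:
            placement[extent] = "sata"
    return placement

load = {"e1": 2000, "e2": 800, "e3": 40, "e4": 1200}
print(retier(load, ssd_slots=2))  # only the two hottest extents go to SSD
```

Run periodically against fresh IOPS counters, the same logic also demotes extents whose load has dropped, which is the "back down again" movement the article describes.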