Companies may want to skip tiered storage architectures and move directly to an all-SSD (solid-state drive) architecture, according to a new report from Forrester Research.
In the report, Forrester contends that while enterprise-class SSDs are vastly more expensive than hard disk drives, deduplication can reduce capacity requirements, making flash a cost-effective, better-performing alternative.
"If cost were no object, you would put all your data on flash-based SSD media," the report said. "It's not only much faster than spinning disk drives are today, but it also has no moving parts, consumes less power, and eliminates the seek time and variable performance -- and there's no chance disk drives will catch up in any of these areas."
SSDs are now used as the top tier of storage in external storage arrays, alongside a mix of hard drives, such as high-capacity SATA drives and lower-capacity but higher-performance SAS and Fibre Channel drives. The idea behind tiered infrastructures is to put the most frequently accessed data on the highest-performance drives and migrate less frequently used data to high-capacity, low-cost hard drives.
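In code terms, the placement policy amounts to a threshold test on access frequency. The following Python sketch is purely illustrative; the tier names, thresholds, and access counts are hypothetical, not any array vendor's actual algorithm.

```python
# Minimal sketch of tier placement by access frequency.
# Tier names and thresholds are hypothetical, for illustration only.

TIERS = [
    ("ssd", 1000),      # hottest data: flash
    ("sas_15k", 100),   # warm data: fast SAS/Fibre Channel spindles
    ("sata_7k", 0),     # cold data: high-capacity SATA
]

def place(block_id: str, accesses_per_day: int) -> str:
    """Return the first tier whose access threshold the block meets."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

print(place("volume7/block42", 2500))  # -> ssd
print(place("volume7/block99", 3))     # -> sata_7k
```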
But major storage vendors have "shoehorned flash drives" into their existing disk arrays, the report contends, which can translate into I/O bottlenecks. It also means administrators must know what data to place on the SSDs or rely on still-nascent automated data-tiering software.
High costs, management woes
According to Forrester, SSDs can be up to 10 times more expensive than hard drives; some research firms peg the gap even higher. Market research from firms such as iSuppli and Objective-Analysis shows SSD pricing averaging around $17 per gigabyte today; it's expected to drop to $12 per gigabyte next year and dip to $5 per gigabyte by 2015.
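A back-of-the-envelope calculation shows what those figures mean at scale. The 10TB capacity and 5:1 deduplication ratio in this sketch are illustrative assumptions, not numbers from the report:

```python
# Back-of-the-envelope cost comparison using the article's $/GB figures.
# The 10 TB capacity and 5:1 dedup ratio are illustrative assumptions.

capacity_gb = 10 * 1024        # 10 TB of logical data
ssd_per_gb = 17.0              # today's average SSD price per the research cited
hdd_per_gb = ssd_per_gb / 10   # Forrester: SSD can be up to 10x HDD cost
dedup_ratio = 5.0              # hypothetical inline-dedup reduction

raw_ssd = capacity_gb * ssd_per_gb
deduped_ssd = (capacity_gb / dedup_ratio) * ssd_per_gb
hdd = capacity_gb * hdd_per_gb

print(f"SSD, no dedup:  ${raw_ssd:,.0f}")      # $174,080
print(f"SSD, 5:1 dedup: ${deduped_ssd:,.0f}")  # $34,816
print(f"HDD:            ${hdd:,.0f}")          # $17,408
```

Under those assumptions, deduplication narrows the flash premium from 10x to 2x, which is the crux of Forrester's cost argument.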
While tiered architectures can offer better performance and higher disk utilization rates, Forrester's report said that tiering also creates data management problems.
For example, many corporate IT shops don't use advanced storage performance analytics tools, so they must determine by hand which data requires the highest performance and then move it through the tiers themselves. Additionally, "hot data," or the data most frequently accessed, changes over time. That means IT staff will be busy monitoring and moving data as access patterns shift.
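That treadmill exists because "hot" is a moving target: access frequency has to be re-measured continuously. A sliding-window counter, sketched below with hypothetical window and threshold values, illustrates why a block that was hot yesterday can quietly go cold:

```python
# Sketch of sliding-window access tracking: a block counts as "hot" only
# if it has been accessed recently, so the hot set drifts as workloads
# change. Window length and threshold are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # consider only the last hour
HOT_THRESHOLD = 50     # accesses within the window to count as hot

accesses = defaultdict(deque)  # block_id -> timestamps of recent accesses

def record_access(block_id: str, now: float = None) -> None:
    now = now if now is not None else time.time()
    q = accesses[block_id]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:  # expire stale accesses
        q.popleft()

def is_hot(block_id: str) -> bool:
    return len(accesses[block_id]) >= HOT_THRESHOLD
```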
While there is automated tiering software, such as Dell Compellent's Fluid Data storage offering and EMC's FAST (Fully Automated Storage Tiering) software, retrofitting existing systems that weren't designed for sub-volume data movement "is a significant challenge," Forrester said.
"The efficiency and effectiveness of these solutions vary. There's also an inherent performance overhead penalty to the constant movement," the report said. "Finally, the information used to make decisions is backward-looking -- just because a piece of data hasn't been hot recently doesn't mean that it won't be in the future."
Enter inline data deduplication
However, a new architecture now making waves is an all-SSD infrastructure in which inline data deduplication reduces back-end capacity requirements by eliminating redundant data before it's stored.
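The mechanism is simple in outline: fingerprint each incoming block and write it to flash only if that fingerprint hasn't been seen before. Here is a minimal Python sketch; the SHA-256 fingerprint and 4KB block size are common illustrative choices, not any particular vendor's design.

```python
# Minimal sketch of inline block-level deduplication: fingerprint each
# incoming block and store only blocks not already seen. SHA-256 and the
# 4 KB block size are common choices, not a specific product's design.
import hashlib

BLOCK_SIZE = 4096
store = {}  # fingerprint -> block data (stands in for the flash back end)

def write(data: bytes) -> list:
    """Split data into blocks, store unseen ones, return fingerprints."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:  # inline check before the data hits media
            store[fp] = block
        refs.append(fp)
    return refs

# Writing the same data twice consumes back-end capacity only once.
write(b"A" * 8192)
write(b"A" * 8192)
print(len(store))  # 1 unique block stored, referenced four times
```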