Over the past few years, solid-state disks have made enormous inroads into the enterprise storage market. All the major storage vendors have begun integrating some form of SSD into their storage lines, and a promising new crop of startups has fielded an impressive array of all-flash offerings.
The excitement around SSD isn't just hype. Over the past couple of decades, compute performance has increased at rates that storage technology has been utterly unable to match. For example, the highest-performance magnetic disk you'll commonly find in enterprise storage today is the 15,000-rpm SAS disk, and the first 15,000-rpm disk shipped more than 13 years ago. Granted, today's versions are certainly faster than those of the early 2000s, but they share the same basic design. Few other areas of modern IT have seen such ponderous technological advancement.
What makes SSD so exciting is that it largely closes that gap and brings nonvolatile storage out of the dark ages. However, SSDs aren't without drawbacks and complications, and the advances of the past few years have centered on innovative solutions to their challenges of performance, reliability, and cost. Understanding those advancements, and why they came about, is key to leveraging SSDs effectively in your environment.
The challenges of SSD
Modern NAND solid-state drives face two primary challenges, around which most other design decisions revolve.
The first is that SSDs have limited write endurance: Each cell can be rewritten only so many times before it becomes unreliable. Although traditional magnetic disk technology also has reliability concerns, an SSD's wear-out point is far more predictable. For example, memory cells in standard consumer-grade SSDs typically fail after only 3,000 to 10,000 write cycles. This might be acceptable for a laptop that will typically see a low duty cycle, but it presents clear problems when faced with the heavy write-based duty cycles typical of enterprise storage workloads.
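To see why a few thousand write cycles matters so much more for a busy array than for a laptop, it helps to run the arithmetic. The sketch below is a rough back-of-the-envelope estimate, not a vendor endurance formula; the capacities, cycle counts, daily write volumes, and write-amplification figures are all illustrative assumptions.

```python
# Back-of-the-envelope SSD lifetime estimate.
# All figures are illustrative assumptions, not vendor specifications.

def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                         write_amplification=1.0):
    """Estimate years until the drive's program/erase budget is exhausted,
    assuming the controller spreads wear evenly across all cells.

    capacity_gb         -- usable drive capacity in GB
    pe_cycles           -- rated program/erase cycles per cell
    daily_writes_gb     -- host writes per day in GB
    write_amplification -- ratio of NAND writes to host writes (>= 1)
    """
    total_write_budget_gb = capacity_gb * pe_cycles
    nand_writes_per_day = daily_writes_gb * write_amplification
    return total_write_budget_gb / nand_writes_per_day / 365

# Laptop-style duty cycle: 256 GB drive, 3,000 cycles, ~50 GB written/day
print(round(drive_lifetime_years(256, 3000, 50), 1))      # ~42 years

# Enterprise-style duty cycle: same drive, 1 TB written/day, higher
# write amplification from small random writes
print(round(drive_lifetime_years(256, 3000, 1000, 3.0), 1))  # under a year
```

The same 3,000-cycle cell budget that would comfortably outlast a laptop is consumed in well under a year by a write-heavy enterprise workload, which is why endurance dominates enterprise SSD design.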