What you need to know about today's SSDs

Not all solid-state disks are created equal; here's how they differ and what to keep an eye out for

Over the past few years, solid-state disks have made enormous inroads into the enterprise storage market. All the major storage vendors have begun integrating some form of SSD into their storage lines, and a promising new crop of startups has fielded an impressive array of all-flash offerings.

The excitement around SSD isn't just hype. Over the past couple of decades, compute performance has increased at rates that storage technology has been utterly unable to match. For example, the highest-performance magnetic disk you'll commonly find in enterprise storage today is the 15,000rpm SAS disk. The first 15,000rpm disk was released more than 13 years ago. Granted, today's versions are certainly faster than those of the early 2000s, but they're the same basic design. There are few areas of modern IT where that kind of ponderous technological advancement has been seen.

What makes SSD so exciting is that it largely closes that gap and brings nonvolatile storage out of the dark ages. However, SSDs aren't without drawbacks and complications. The last few years have brought innovative solutions to SSD's performance, reliability, and cost challenges. Understanding these advancements and why they came about is key to understanding how to leverage SSDs in your environment.

The challenges of SSD
Modern NAND solid-state drives are subject to two primary challenges around which other design decisions revolve.

The first is that SSDs have limited write endurance: Each cell can be rewritten only so many times before it becomes unreliable. Although traditional magnetic disk technology also has reliability concerns, those for SSD are more predictable. For example, memory cells in standard consumer-grade SSDs typically fail after only 3,000 to 10,000 write cycles. This might be acceptable for a laptop that will typically see a low duty cycle, but it presents clear problems when faced with the heavy write-based duty cycles typical of enterprise storage workloads.
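To see why the same endurance rating can be fine for a laptop but ruinous in the enterprise, consider a back-of-envelope lifetime estimate. All the numbers below are hypothetical, and the calculation assumes perfect wear-leveling spreads writes evenly across every cell:

```python
# Rough SSD lifetime estimate from write endurance (hypothetical numbers).
# Inputs: drive capacity, rated program/erase cycles per cell, a write
# amplification factor, and a sustained host write rate.

def drive_lifetime_years(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    """Years until rated endurance is exhausted, assuming perfect
    wear-leveling spreads writes evenly across all cells."""
    total_writable_gb = capacity_gb * pe_cycles / write_amp
    return total_writable_gb / host_gb_per_day / 365

# A 256GB consumer drive rated at 3,000 P/E cycles with a write
# amplification of 2, under a light 20GB/day laptop workload:
laptop = drive_lifetime_years(256, 3000, 2.0, 20)       # decades of life

# The same cells under a heavy 2TB/day enterprise workload:
enterprise = drive_lifetime_years(256, 3000, 2.0, 2000)  # under a year
```

The same 3,000-cycle cells that would outlast any laptop wear out in months under a sustained enterprise write load, which is exactly why endurance dominates SSD design decisions.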

The second challenge is that these memory cells must be erased before they can be written. Given that the erase operation takes a substantial amount of time, it's important to performance that the SSD have enough pre-erased cells available to absorb the writes. If the incoming write load outstrips the SSD's ability to keep blocks erased, performance will plummet -- a phenomenon known as the write cliff.
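The write-cliff dynamic can be illustrated with a toy model (illustrative numbers only, not a real controller): the drive holds a pool of pre-erased blocks, each incoming write consumes one, and background garbage collection erases a fixed number per time slice. Once writes outpace erases and the pool empties, every write must wait on a slow erase first:

```python
# Toy model of the write cliff. Latency numbers (in ms) are hypothetical.

def simulate(writes_per_tick, erases_per_tick, pool_size, ticks):
    pool = pool_size                  # pre-erased blocks available
    latencies = []
    for _ in range(ticks):
        # Background garbage collection replenishes the pool, up to its cap.
        pool = min(pool_size, pool + erases_per_tick)
        for _ in range(writes_per_tick):
            if pool > 0:
                pool -= 1
                latencies.append(0.1)  # fast: program a pre-erased block
            else:
                latencies.append(2.0)  # slow: must erase before programming
    return latencies

# Light load: erases keep pace, so every write stays fast.
light = simulate(writes_per_tick=5, erases_per_tick=8, pool_size=100, ticks=50)

# Heavy load: writes outstrip erases; once the pool drains, latency spikes.
heavy = simulate(writes_per_tick=20, erases_per_tick=8, pool_size=100, ticks=50)
```

Under the light load the pool never drains and latency stays flat; under the heavy load the drive performs well only until the pre-erased pool runs dry, then falls off the cliff and stays there.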

Depending on the type of SSD in use (more on that later), there are two ways to combat these problems:

  1. Overprovision the SSD with extra, unadvertised memory cells that can serve as replacements as more heavily trafficked cells burn out.
  2. Try to spread the writes among the cells so that all cells are written a similar number of times -- a technique called wear-leveling.

When combined, these two approaches can act as a bulwark against both the write-endurance and write-cliff challenges. Spreading writes evenly across an amount of memory that might be 25 to 100 percent larger than the advertised size of the disk allows the entire device to last longer and gives it more time to erase unused cells so that there are always enough to absorb writes.
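The core idea behind wear-leveling can be sketched in a few lines. This is a deliberately minimal model, not a real flash translation layer: each write is simply steered to the least-worn free block, so wear spreads evenly across the whole pool, including the unadvertised overprovisioned blocks:

```python
# Minimal wear-leveling sketch (hypothetical model, not a real FTL).
# 8 physical blocks back a drive that advertises only 6 -- the extra
# 2 are the overprovisioned reserve.
erase_counts = {blk: 0 for blk in range(8)}

def pick_block():
    # Steer each write to the block with the fewest erase cycles,
    # so no single block burns out ahead of the others.
    return min(erase_counts, key=erase_counts.get)

for _ in range(80):            # 80 block writes arrive over time
    blk = pick_block()
    erase_counts[blk] += 1     # each erase-then-program costs one cycle

# With perfect leveling, all 8 blocks -- spares included -- absorb an
# equal share of the 80 cycles rather than a few hot blocks taking all 80.
```

Real controllers are far more elaborate (they must also relocate cold data and track mappings persistently), but the principle is the same: more blocks sharing the wear means both longer device life and a deeper pool of pre-erased blocks to absorb bursts.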

To address these two primary challenges, three primary flavors of SSDs have emerged, each with its own pros and cons. Critically, however, not all storage vendors are clear about which kind of SSD they're using. That's one reason you'll often see enormous pricing gulfs between two vendors selling similarly sized "all flash" arrays. It's that much more important for you to understand the underlying technology if you plan to get into SSDs in a big way.

Option 1: Write-intensive SLC
The first kind of SSD is the single-level cell, aka the SLC. These SSDs store a single bit of data in each memory cell, so an SLC drive needs far more cells to reach a given capacity than other designs, making it the most expensive option. However, because each cell need only distinguish two charge states, SLC SSDs are substantially more durable than other SSDs and their program-erase cycle is notably shorter. These factors combine to make SLC the lowest-capacity and highest-performing -- as well as most expensive and most durable -- SSD you can buy. Because of their high cost, SLC SSDs are almost exclusively used in enterprise storage environments.

Option 2: Read-intensive MLC
Next up is the multi-level cell (MLC) SSD. As the name suggests, MLC SSDs store more than one bit of data (typically two, but more recently three and four) in the same cell. This allows more data to be stored with fewer cells; as a result, MLC SSDs are the least expensive form of SSD from a capacity standpoint. However, packing multiple bits into a cell means every write must place the cell's charge at one of several closely spaced levels -- an operation that takes considerably more time than it does on an SLC SSD and that stresses the cell more in the process.
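A quick back-of-envelope calculation shows why each extra bit per cell gets progressively harder. A cell storing n bits must distinguish 2^n charge levels, so the voltage margin between adjacent levels shrinks rapidly as per-cell capacity grows -- which is why programming gets slower and errors more likely:

```python
# Back-of-envelope: bits per cell vs. distinguishable charge levels.
# The "relative margin" here is a simplified illustration (normalized
# spacing between adjacent levels), not a real device parameter.

margins = {}
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    levels = 2 ** bits              # charge levels the cell must resolve
    margins[name] = 1 / (levels - 1)  # spacing shrinks as levels multiply

# SLC resolves 2 levels; QLC must cram 16 into the same voltage window,
# leaving a fraction of the margin for noise, drift, and wear.
```

Doubling capacity per cell from SLC to MLC cuts the level spacing to a third; by QLC's 16 levels it is down to a fifteenth, leaving little room for the charge drift that accumulates as cells wear.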

Bottom line: MLC SSDs are the least expensive, highest-capacity, lowest-performing SSDs from a write perspective (they're roughly comparable to SLC in read performance), as well as the least durable SSDs. Because of their low cost, they're a great choice for consumer applications, but their limited write performance and write endurance typically make them a bad choice for enterprise storage arrays.

Option 3: Multi-use eMLC
The last and newest flavor of SSD is the enterprise multi-level cell, or eMLC. The eMLC represents the industry's attempt to blend the cost benefits of MLC with the performance and endurance characteristics of SLC. eMLC SSDs typically include two bits per cell, but are designed to yield a lower error rate than typical MLC SSDs. They're also usually equipped with very large unadvertised storage reserves to assist in wear-leveling and to help avoid the write cliff during high write loads. What results is an SSD that can shoulder a relatively demanding enterprise storage workload but doesn't come at the same premium as an SLC SSD.

Putting it all together
The comparison between SLC, MLC, and eMLC is not unlike the comparison between a 15,000rpm SAS disk, a 7,200rpm NL-SAS disk, and a 10,000rpm SAS disk, respectively. The first is small, expensive, durable, and fast; the second is large, cheap, less durable, and slow; and the third attempts to cut a neat balance between the two. Given that comparison, it might seem like the middle-of-the-road option, the eMLC, is the right answer.

However, just as the 10,000rpm SAS disk hasn't always been the right answer, the eMLC won't always be, either. At the end of the day, it truly depends on what kinds of workload you have. If you have a workload that demands a high degree of write performance, you'll probably be happiest with SLC, despite its cost. If you have an application that doesn't require very much write performance but needs blazing read performance, MLC may be a great option despite its lower write performance and durability. If you have a jumble of different types of data with differing requirements, eMLC could be the right option.

It's also worth noting that the industry isn't done innovating yet. For example, Dell recently announced upcoming firmware enhancements to its Compellent storage line that will see it leverage an adapted version of its Data Progression tiering software to use both SLC and MLC SSDs in the same array -- ideally allowing you to have the best of both worlds.

This article, "What you need to know about today's SSDs," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.

Copyright © 2013 IDG Communications, Inc.