Flash is truly disruptive. We've been using hard disks for almost 60 years, and both storage system design and storage management have revolved around them. Fundamentally, with hard disks we can assume capacity is cheap and performance is expensive, but in a flash world, the equation is inverted: Capacity is expensive and performance is cheap. This change is simple to articulate, but it has non-obvious implications for the way we design storage.
Cost per workload
When designing for cost, a storage architect is fundamentally concerned about the cost per workload. If my requirement is to provide storage to support an Exchange server, what matters is the total cost of the storage to support it.
With all current storage media (flash included), you're limited in the amount of performance you can get for a given amount of storage. When you purchase a solid-state drive or disk drive, you get a fixed amount of capacity and performance. While there are faster and slower disk drives and flash drives, they sit within a fairly small range. For enterprise disk storage, performance is measured in IOPS (I/O operations per second). At a very high level:
Workload cost = max(cost to buy necessary IO, cost to buy necessary capacity)
In other words, the cost of storage for a given workload is either the cost to buy the necessary IO or the cost to buy the necessary capacity, whichever is greater. For a given combination of storage and workload, your cost will be determined by only one of these factors. If the cost for the workload's performance is low, then the capacity cost will be most important; if the cost for the workload's capacity is low, then the performance cost will be most important.
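This cost model is easy to sketch in code. The function below is purely illustrative: the drive specs and price are hypothetical placeholders, and the model ignores real-world factors like RAID overhead, but it shows how whichever requirement needs more drives becomes the binding constraint.

```python
import math

def workload_cost(iops_needed, gb_needed, drive_iops, drive_gb, drive_price):
    """Cost of a workload on a given drive type: you must buy enough
    drives to satisfy BOTH the IOPS and the capacity requirement, so
    cost is set by whichever constraint needs more drives."""
    drives_for_io = math.ceil(iops_needed / drive_iops)
    drives_for_capacity = math.ceil(gb_needed / drive_gb)
    return max(drives_for_io, drives_for_capacity) * drive_price

# Hypothetical disk: 2 TB, 100 IOPS, $100 per drive.
# An IO-heavy workload: 50 drives for IO vs. 2 for capacity,
# so performance drives the cost.
print(workload_cost(iops_needed=5000, gb_needed=4000,
                    drive_iops=100, drive_gb=2000, drive_price=100))  # 5000
```

With a capacity-heavy workload on the same drives (say, 100 IOPS and 4 TB), the capacity term wins instead, which is exactly the "only one factor matters" behavior described above.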
Disks are limited by IOPS
In disk systems, IOPS are very important because while we think of disk as cheap, disk IOPS are actually very expensive. Disk space is extremely cheap, and it's getting cheaper every year. But the number of IOPS that a single disk can provide is dependent on mechanical properties and has improved at a very slow rate. A modern disk drive may support 2TB of data, but only 100 IOPS.
Because disk IOPS are extremely expensive, many enterprise workloads are actually sized not for capacity, but for performance. While a workload may need only a few drives' worth of capacity, it may need a huge number of drives for performance. Put another way: In our workload cost equation, capacity becomes irrelevant and the cost is driven by performance.
Flash is limited by capacity
Flash inverts this equation. While flash capacity is much more expensive than disk, flash IOPS are actually very cheap. This is because flash no longer has the mechanical systems that have limited disk performance. A modern SSD may support 256GB but provide 20,000 IOPS (although flash performance is actually best measured in MBps rather than IOPS).
That means for a similar price, the SSD delivers one-eighth of the data, but 200 times the IO of the disk drive. In other words, the IOPS/GB of this flash drive is 1,600 times larger than the IOPS/GB of the disk. To put that into context, the fastest disks today are only three to four times faster than the slowest disks today.
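The ratios quoted above follow directly from the ballpark figures already given (2 TB and 100 IOPS for the disk, 256 GB and 20,000 IOPS for the SSD):

```python
# Ballpark specs from the text (not any specific product)
disk_gb, disk_iops = 2000, 100
ssd_gb, ssd_iops = 256, 20_000

print(disk_gb / ssd_gb)      # ~7.8: the SSD holds roughly one-eighth the data
print(ssd_iops / disk_iops)  # 200.0: the SSD delivers 200x the IO

# IOPS per GB, the density metric that matters for sizing
iops_density_ratio = (ssd_iops / ssd_gb) / (disk_iops / disk_gb)
print(iops_density_ratio)    # 1562.5, i.e. roughly 1,600x
```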
This means that in flash-based storage, even for many intensive enterprise workloads, the number of IOPS is no longer the dominant factor in system cost; it is now dominated by capacity. Because of this shift, in many cases flash-based systems (especially systems mixing flash and disk) can actually be cheaper than disk-based systems.
Toward a capacity-oriented world
As storage designers and architects, we are always looking for ways to reduce the cost of doing business. In systems built around disk, we have concentrated on improving performance, since this was most likely to impact cost. But as we transition to a flash-based world, we are changing our focus from performance-oriented optimization to capacity-oriented optimization.
Some changes are obvious and immediate, but many are less obvious. For example, compression and deduplication both reduce the effective size of data by removing redundant portions of the data. These two technologies are "table stakes" for modern flash-based storage; they are clear wins in density and cost.
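The core idea behind deduplication can be sketched in a few lines: split data into chunks, hash each chunk, and store each unique chunk only once. This is a minimal fixed-size-chunk illustration, not how any particular array implements it (real systems typically use variable-size chunking and far more sophisticated indexing):

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Fixed-size-chunk deduplication: keep one copy of each unique
    chunk, plus an ordered list of hashes to rebuild the stream."""
    store = {}   # chunk hash -> chunk bytes (stored once)
    recipe = []  # ordered hashes needed to reconstruct the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # only stored if not seen before
        recipe.append(h)
    return store, recipe

data = b"A" * 4096 * 10 + b"B" * 4096  # highly redundant data
store, recipe = dedupe(data)
print(len(recipe), len(store))  # 11 logical chunks, only 2 unique chunks stored
```

Here 11 logical chunks compact down to 2 physical ones, which is why redundant workloads make such good flash candidates.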
Other changes will happen gradually as we fully understand the impact on the modern data center. This shift will remove some optimizations that are no longer necessary and are painful to manage, and it will introduce optimizations that are much more relevant today than in the past.
Hybrid storage: Going for the best of both worlds
One obvious way to improve the cost of flash is to combine flash and disk, as we combine RAM and disk today. By keeping hot data in flash and putting cold data on disk, a hybrid array can potentially combine the per-IOPS cost of flash and the per-gigabyte cost of disk, leading to a reduced overall cost.
As with RAM and disk, this works only if some of your data is accessed frequently, and a lot of it is not. But this pattern continues to be true for most workloads, as it has in the past. Because well-designed hybrid arrays have similar properties to all-flash systems, I'll refer to both of them as "flash-based storage."
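The hot/cold placement idea can be illustrated with a toy model. This is a deliberately simplified sketch using recency (LRU) as the sole hotness signal; the class name and policy are illustrative assumptions, not a description of any real array's placement algorithm:

```python
from collections import OrderedDict

class HybridTier:
    """Toy hot/cold placement: the most recently accessed blocks live
    in a fixed-size flash tier (LRU eviction); the rest stay on disk."""
    def __init__(self, flash_blocks):
        self.flash = OrderedDict()  # block id -> True, in LRU order
        self.capacity = flash_blocks

    def access(self, block):
        hit = block in self.flash
        if hit:
            self.flash.move_to_end(block)  # refresh recency
        else:
            if len(self.flash) >= self.capacity:
                self.flash.popitem(last=False)  # demote coldest to disk
            self.flash[block] = True  # promote into flash
        return hit  # True = served from flash, False = served from disk

tier = HybridTier(flash_blocks=2)
hits = [tier.access(b) for b in [1, 2, 1, 1, 3, 1]]
print(hits)  # [False, False, True, True, False, True]
```

Block 1 is hot, so after its first (cold) access it keeps hitting flash even as colder blocks are demoted, which is exactly the skewed access pattern hybrid arrays rely on.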
There will always be workloads that work best in all-flash arrays, exactly as some workloads are best done entirely in RAM today. Workloads with large working sets (they frequently access a large set of data) are likely to be a good fit for all-flash platforms because they do not lend themselves to a hybrid model. Workloads that have extremely good compression or deduplication are also likely to be good all-flash candidates, since high compaction mitigates the high capacity cost of flash.
The death of custom striping
Disk drives, because of their mechanical parts, have performance that is difficult to predict and can vary dramatically. This unpredictability extends to the choice of striping (RAID-5, RAID-6, and so on). Choosing the wrong kind of striping for a given workload can have a huge impact on performance, and the correct choice is not the same for all data.
For this reason, administrators of disk-based systems are forced to make tough choices about what striping to use for new data sets. They can change these decisions only by physically moving all of the data to a different kind of striping, making these decisions very difficult to change.
In a flash world, performance is plentiful and much more predictable, so striping has far less of an impact on workload performance. This means that modern storage designers have the flexibility to mix data together and store it with a common striping, thus removing difficult work from administrators.
From physical isolation to quality of service
The unpredictability of disk drive performance extends to workload interactions. It is difficult to completely understand how one workload will impact another on a drive. For this reason, disk systems are designed assuming the administrator will physically separate workloads from one another. By ensuring that two workloads don't share physical drives, an administrator can ensure they do not impact one another.
In contrast, flash drives have much more predictable performance when used correctly. The impact of one workload on another is much easier to understand and control from the software layer. This opens the possibility of true quality of service where the system keeps workloads from interfering with one another without requiring them to live on different drives.
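One common building block for software-enforced quality of service is a token bucket that caps each workload's IO rate. This is a generic sketch of the technique, not any vendor's implementation; the rate and burst numbers are arbitrary:

```python
class TokenBucket:
    """Simple per-workload IOPS cap: a workload earns `rate` tokens per
    second up to a `burst` ceiling, and each IO consumes one token, so
    no workload can exceed its share regardless of where its data lives."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        # Accrue tokens for the time elapsed since the last check
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # IO proceeds
        return False      # IO is throttled

bucket = TokenBucket(rate=100, burst=10)  # cap a workload at ~100 IOPS
# A burst of 12 requests at t=0: only the 10 burst tokens succeed
print(sum(bucket.allow(0.0) for _ in range(12)))  # 10
```

Giving each workload its own bucket isolates a noisy neighbor in software, replacing the physical drive separation that disk systems required.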
By using a single striping scheme across entire arrays and decoupling policies from physical placement, modern storage systems can allow users to change the policies on data without having to physically move the data to a different location. This changes the essential question from "Where should this data live?" to "What properties should apply to this data?"
This shift can remove work from administrators, who today are required to make a number of tough manual mapping decisions up front for a new workload. Instead, modern systems can provide flexible policies rather than requiring up-front placement. This transition is especially important in cloud environments, where workloads with various requirements must be dynamically allocated.
Density: The incredible shrinking data center
Because flash dramatically improves IO density (more performance in the same drive form factor) and removes the need to physically separate workloads, data center designers can create storage that is extremely dense by carefully combining high-performance flash and high-capacity disk.
In this new world, designers can replace entire racks' worth of equipment with a few rack units. Combined with virtualization, flash-based storage is leading to a new wave of data center consolidation, where multiple data centers collapse down into one.
This consolidation will certainly have a number of interesting impacts on the industry. It is hard to know exactly what it will bring, but it seems reasonable to assume that we will see a rise in pod-based architectures, where companies scale in discrete units that include storage, networking, and compute. It may very well lead to a rise in remote office technologies, perhaps even to alternatives to public cloud for small businesses. For now, one thing is clear: It is certainly going to save those who run data centers a lot of money.
Brandon Salmon, Office of the CTO, @Tintri, has been at Tintri since 2009. He is a systems guy who loves to think about user experience, which he picked up from his doctoral work at Carnegie Mellon on distributed file systems for the home. He designed and implemented Tintri's initial algorithms for moving data between flash and disk, and he has worked on a number of areas since, most recently cloud technologies.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to email@example.com.