How to choose an all-flash storage array that scales

Now that controller CPU and memory are the bottlenecks, efficient metadata management becomes the key to performance and scalability


As the cost of flash continues to plummet, it is increasingly clear that legacy storage arrays designed for disk drives are becoming outdated, and that all-flash arrays offer a better mix of performance, cost, and flexibility for active data. Exceptions should still be made for workloads gated on sequential bandwidth, such as video, but as a general rule, flash has reached the point where it is cheaper than disk for frequently accessed data.

The decline in the cost of flash is currently driven primarily by 3D NAND and triple-level cell (TLC) NAND. These technologies introduce new ways of increasing storage density and reducing cost without further shrinking the cell size, all but ensuring that SSD density will continue to grow rapidly, and the cost of flash media will continue to drop, for the next five to seven years. Today, 2TB SSDs are becoming the norm, and 4TB and larger SSDs are right around the corner.

In the design of any all-flash array, the performance and scalability bottleneck is no longer the storage media; it is the controller's processing power and memory. The proper way to build an all-flash array -- one that is more price competitive than an array built from hard drives -- is to add the key software features of smart provisioning and inline data reduction. However, advanced data reduction capabilities such as deduplication and compression also mean more metadata to manage, and hence more processing overhead. Thus the efficient use of controller processing power, and the ability to scale out processing resources, dictate the performance and scalability of the storage system.
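
To make that metadata cost concrete, here is a minimal, hypothetical sketch of an inline data reduction write path (deduplication plus compression). The structures and names are illustrative only, not a description of any vendor's implementation; the point to notice is that every write costs CPU cycles for hashing and at least one metadata update, and the mapping tables grow with both logical and unique data.

```python
import hashlib
import zlib

# Hypothetical in-memory structures standing in for an array's metadata:
# one fingerprint entry per unique block, one pointer per logical block.
fingerprint_index = {}   # fingerprint -> physical location
logical_map = {}         # (volume, LBA) -> fingerprint
physical_store = {}      # physical location -> compressed bytes
next_location = 0

def write_block(volume, lba, data):
    """Inline data reduction on the write path: hash, dedupe, compress."""
    global next_location
    fp = hashlib.sha256(data).hexdigest()      # CPU cost paid on every write
    if fp not in fingerprint_index:            # new unique data
        physical_store[next_location] = zlib.compress(data)  # more CPU cost
        fingerprint_index[fp] = next_location  # metadata grows with unique data
        next_location += 1
    logical_map[(volume, lba)] = fp            # metadata grows with logical data

write_block("vol1", 0, b"A" * 32768)
write_block("vol1", 1, b"A" * 32768)   # duplicate: no new physical write, but
                                       # hashing and a map update still happen
print(len(fingerprint_index), "unique block(s),", len(logical_map), "pointers")
```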

A software-defined, scale-out architecture allows you to scale out capacity and performance linearly by adding more nodes. Such an architecture typically includes a virtualization layer that uncouples the hardware from the software that manages system provisioning as well as data storage. The virtualization layer also enables the system to quickly deploy new technologies.

This means that a product using a software-defined architecture can take advantage of the best-optimized hardware elements as they become available, and your existing scale-out systems can expand in the most cost-effective way while preserving the original investment. For all-flash storage arrays, a good software-defined, scale-out architecture enables quick deployment of new, cost-efficient flash technologies into an existing array. Flash technology enhancements driven by 3D and TLC NAND are bringing down the cost of SSD storage; the key is the array's ability to deploy these high-density SSDs within the same architecture, allowing the business to scale its storage systems cost-effectively.

Efficient metadata management and an architecture optimized for low write amplification are the key architectural properties that enable an all-flash array to scale capacity and increase the density of addressable data per controller. Even with optimized metadata management, the amount of metadata grows linearly with the density of the system and the data reduction ratio. An architecture built for the rapid growth of flash density -- against the much slower growth of DRAM density -- should therefore not assume that all the metadata can fit into DRAM, as that assumption limits both how far capacity can scale and the maximum data reduction the system can achieve.
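
A back-of-envelope calculation shows why. The figures below (block size, bytes per pointer, reduction ratio) are illustrative assumptions, not vendor specifications, but they show how quickly mapping metadata can outgrow a typical controller's DRAM:

```python
# Metadata grows with effective capacity = physical capacity x data reduction.
physical_flash_tb = 100     # assumed raw SSD capacity behind one controller
data_reduction = 4          # assumed dedupe + compression ratio
block_size_kb = 32          # assumed mapping granularity
bytes_per_pointer = 48      # assumed metadata per logical block pointer

effective_tb = physical_flash_tb * data_reduction
logical_blocks = effective_tb * 1024**3 // block_size_kb   # TB -> KB is 1024**3
metadata_gb = logical_blocks * bytes_per_pointer / 1024**3

print(f"{effective_tb} TB effective -> ~{metadata_gb:.0f} GB of mapping metadata")
# Halving the block size (or doubling the reduction ratio) doubles the metadata,
# which is why assuming it all fits in controller DRAM caps scalability.
```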

There are two unique elements at the core of Kaminario’s metadata efficiency:

  • Adaptive block size architecture versus content address mapping. An adaptive block size architecture uses far fewer pointers than a content address mapping architecture generates. With an adaptive block size architecture, a typical pointer in the Kaminario array references 32KB of data, versus the 4KB or 8KB typical of the content address mapping architectures used by other all-flash, scale-out products. An adaptive block size architecture also delivers higher performance per controller, as less processing is done per typical I/O operation.
  • Weak hash plus data compare for deduplication. Kaminario computes a weak hash and then compares the actual data to confirm a match -- a different method from relying on a cryptographic hash, as some other vendors commonly do (see the sketch after this list). A typical cryptographic hash signature is four times larger than a weak hash signature, so using a weak hash means four times less metadata per unit of unique data in the system. Some vendors claim that using a cryptographic hash without comparing the data puts data integrity at risk. I prefer not to enter that debate, except to say that the real value of a weak hash is a four-fold reduction of the metadata footprint.
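
The following is a minimal sketch of the "weak hash, then compare" approach, using illustrative structures only; it is not Kaminario's code. It shows the trade-off in plain terms: index entries keyed by a 4-byte checksum instead of a 32-byte cryptographic digest, at the cost of a byte-for-byte compare whenever a fingerprint matches.

```python
import zlib

weak_index = {}   # crc32 fingerprint -> list of candidate physical locations
store = []        # stand-in for physical block storage

def dedupe_write(data):
    """Return the physical location for `data`, writing it only if unique."""
    fp = zlib.crc32(data)                  # weak 4-byte fingerprint
    for location in weak_index.get(fp, []):
        if store[location] == data:        # byte compare resolves hash collisions
            return location                # true duplicate: no new write
    store.append(data)                     # new (or merely colliding) data
    location = len(store) - 1
    weak_index.setdefault(fp, []).append(location)
    return location

a = dedupe_write(b"x" * 32768)
b = dedupe_write(b"x" * 32768)
assert a == b   # duplicate detected despite the weak hash, thanks to the compare
```

The compare adds a read on fingerprint hits, but it removes any doubt about hash collisions while keeping each index entry a fraction of the size.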

An adaptive block size architecture combined with a weak hash for deduplication results in a highly efficient metadata footprint. But even with these critical optimizations, the amount of metadata remains linear in the density of the system and the data reduction ratio, so an architecture that assumes all metadata will fit into DRAM is limited in how far it can scale capacity and in the maximum data reduction it can achieve. By contrast, Kaminario allows the metadata size to exceed the DRAM size, so the architecture can scale up far beyond architectures that keep all metadata in memory. This makes scaling the array more cost-efficient and makes the Kaminario architecture an excellent fit for new flash technologies such as 3D NAND and TLC.
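
The general pattern for letting metadata exceed DRAM is to keep hot mappings in memory while the authoritative index lives on flash. The class below is a minimal sketch of that pattern under those assumptions; it is not a description of Kaminario's implementation.

```python
from collections import OrderedDict

class TieredMetadataIndex:
    """Hot metadata cached in DRAM, full index kept on flash (illustrative)."""

    def __init__(self, dram_entries):
        self.dram_entries = dram_entries
        self.dram = OrderedDict()   # LRU cache of hot mappings
        self.flash = {}             # stand-in for the SSD-resident index

    def put(self, key, value):
        self.flash[key] = value     # authoritative copy lives on flash
        self._cache(key, value)

    def get(self, key):
        if key in self.dram:        # DRAM hit: no extra flash I/O
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.flash[key]     # DRAM miss: one extra flash read
        self._cache(key, value)
        return value

    def _cache(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_entries:
            self.dram.popitem(last=False)   # evict the least recently used entry

index = TieredMetadataIndex(dram_entries=1_000_000)
index.put(("vol1", 42), "fingerprint-or-physical-location")
print(index.get(("vol1", 42)))
```

In this arrangement, DRAM size bounds only how much metadata is hot at once, not how much addressable capacity the controller can manage.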

All-flash arrays can provide significantly better performance than legacy storage arrays; indeed, some all-flash systems will run out of capacity before performance degrades. When planning for growth, it's important to retain flexibility and be able to scale performance as more applications are serviced by the all-flash array. A storage architecture should offer the flexibility to start with a system that meets current needs, while providing the ability to scale up or scale out as necessary. An architecture that meets the needs of both today and tomorrow ensures that each controller can manage data up to very high capacities, and that the metadata footprint stays low, with no assumption that all metadata always resides in memory.

Some all-flash array vendors rely on Moore's Law for their scalability, counting on frequent controller upgrades to improve performance. In practice, however, Moore's Law is becoming a dead end: computing power simply cannot maintain its rapid exponential increase using standard silicon technology, as Intel has acknowledged. Controller upgrades also create inconvenience and extra cost, forcing customers to refresh controllers too frequently without gaining sufficient performance improvement.

A true scale-out storage architecture with efficient metadata management offers a much better hedge against future business needs, certain data growth, and inevitable technology improvements. The density of flash is increasing even faster than Moore's Law predicts, while DRAM density and affordability are improving at a much slower rate. Any modern all-flash storage solution should therefore be built on the assumption of less controller DRAM per unit of attached SSD capacity.

Reducing the time for deploying new cost-efficient technologies and for meeting new business needs is a significant challenge for any IT shop. But with a flexible and agile architecture, it’s an achievable one. Is your storage infrastructure ready for tomorrow’s challenges? 

Shachar Fienblit is the chief technology officer at Kaminario. 

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to
