How to stress-test primary storage



You've unpacked your eval unit -- now here's how to put together a test plan and kick it around the block before you buy

The performance of primary storage is more likely to affect the performance of your applications than the network, server CPUs, or the size and speed of server memory. That's because storage tends to be the biggest performance bottleneck, especially for applications that rely heavily on large databases.

That makes testing crucial. Before you buy, you need to know how well your applications perform on the specific storage hardware you're eyeing. As I noted last week, most vendors will provide a loaner for you to test-drive.


Unfortunately, testing storage is not always a straightforward process. It requires a solid understanding of how your applications use storage and how the storage device you're evaluating functions under the hood. Each situation is different; no single test or benchmark can give everyone the answer they're looking for, but you can take some basic evaluative steps to ensure your storage is up to the task.

Knowing what to test

The mechanical nature of spinning disks, where heads must physically seek to each location before data can be transferred, is one of the main reasons SSD (solid-state disk) arrays are becoming more popular for transaction-intensive applications. However, SSDs cost so much more than conventional disks that they can generally be justified only for high-end applications.

For the most part, we're stuck with disk heads zipping around various physical points on disk platters, working with very small chunks of data over and over. Worse, writes tend to take longer than reads, so write-heavy loads can really whack performance. To give you an idea of how big of a deal this is, I ran a brief, relatively unscientific test on my poor three-year-old laptop.

First, I configured a test that would sequentially read 4KB chunks from a 1GB file. It was able to do this about 3,560 times a second (3,560 IOPS) with an average latency of about 0.25 millisecond per transaction. Pretty good, right? Not so fast.

I reconfigured it to perform that same test, but 30 percent of the time, it would write to the disk instead of reading from it. Now I got only about 750 disk transactions per second with an average latency of about 1.3 milliseconds per transaction. Worse, but not as bad as it's going to get: I reconfigured it to perform that same 70/30 percent read/write split entirely randomly within that 1GB file, so the disk head would seek all over the place. The result: 80 IOPS and an average latency of about 12 milliseconds. That's nearly 45 times fewer transactions and 48 times more latency per transaction than the sequential read-only test.
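If you want to reproduce this kind of quick-and-dirty comparison yourself, here's a rough Python sketch of the three access patterns. The file name and sizes are placeholders (a small file and short runs for demonstration; for a real test, use a file much larger than any cache in the path), and buffered I/O means the OS page cache will inflate the numbers, so treat the output as a relative comparison only:

```python
import os
import random
import time

BLOCK = 4096                      # 4KB transfer size, as in the tests above
FILE_SIZE = 16 * 1024 * 1024      # small demo file; use >1GB for a real test
PATH = "iops_test.bin"            # hypothetical scratch file on the disk under test

def make_test_file(path, size):
    """Fill a file of `size` bytes with incompressible data."""
    written = 0
    with open(path, "wb") as f:
        while written < size:
            n = min(1024 * 1024, size - written)
            f.write(os.urandom(n))
            written += n

def run(path, size, seconds=1.0, write_pct=0, sequential=True):
    """Return (iops, avg_latency_ms) for the given access pattern."""
    blocks = size // BLOCK
    buf = os.urandom(BLOCK)
    ops = 0
    pos = 0
    start = time.perf_counter()
    with open(path, "r+b") as f:
        while time.perf_counter() - start < seconds:
            if sequential:
                offset = (pos % blocks) * BLOCK
                pos += 1
            else:
                offset = random.randrange(blocks) * BLOCK
            f.seek(offset)
            if random.randrange(100) < write_pct:
                f.write(buf)     # write path: ~write_pct% of operations
            else:
                f.read(BLOCK)    # read path: the rest
            ops += 1
    elapsed = time.perf_counter() - start
    return ops / elapsed, elapsed / ops * 1000

if __name__ == "__main__":
    make_test_file(PATH, FILE_SIZE)
    for label, wp, seq in [("sequential read-only", 0, True),
                           ("sequential 70/30 r/w", 30, True),
                           ("random 70/30 r/w", 30, False)]:
        iops, lat = run(PATH, FILE_SIZE, write_pct=wp, sequential=seq)
        print(f"{label}: {iops:,.0f} IOPS, {lat:.3f} ms avg latency")
    os.remove(PATH)
```

On a spinning disk with caching defeated, you should see the same pattern as above: sequential reads fastest, mixed writes slower, and random mixed I/O dramatically slower still.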

There's the rub: Most database platforms create storage workloads that resemble the most challenging of those three tests. To improve the performance of disk subsystems under these types of loads, the most common approach is to spread the work across many physical disks in an array. Generally speaking, the more spindles dedicated to a workload, the faster it will perform.

Iometer. Iometer, a free, open source I/O load generator originally developed by Intel, allows you to construct fairly complicated tests involving multiple worker processes all simulating different kinds of I/O workloads at the same time. You can even link worker processes on different servers on a network to a single management console to thoroughly test shared SAN storage.

For example, you can configure a test profile with eight worker threads, one of which performs sequential 64KB writes to one LUN while the other seven perform randomized 4KB reads and writes on a different LUN (essentially mimicking the transaction log and database workloads of Microsoft Exchange). Further, you can deploy that same configuration on several servers at once and sum the results from all of them in the same management console -- very useful.

Bonnie++. Bonnie++ is an incredibly simple Unix-based tool originally developed as simply "Bonnie" by Tim Bray and then rewritten in C++ and heavily extended by Russell Coker. Bonnie++ can do many of the same tests that Iometer can, but Bonnie++'s differentiating factor is that it can simulate a file system load as well as a simple block-level disk load.

On Linux/BSD platforms, you have lots of choices about what kinds of file systems to use, ranging from the Linux EXT3 file system through ReiserFS and various ZFS implementations. Each has its own pluses and minuses, and which you use is determined by what kinds of files you're dealing with (creating lots of tiny files, fewer huge files, and so on). 

Microsoft Jetstress 2010. Microsoft's Jetstress 2010 is a great example of a purpose-specific disk testing tool. Instead of generating random disk I/O as Iometer does, Jetstress actually implements a realistic approximation of the database back end of a Microsoft Exchange 2010 mailbox server and applies a realistic load against it. Instead of telling you how many IOPS you did and leaving it up to you to extrapolate how that will impact your application performance, Jetstress will tell you how many Exchange users of a given usage profile you can expect to support with the storage you have. Even if you're not running Microsoft Exchange, this can be an exceptionally good way of kicking your storage around with a realistic application usage pattern.

Take snapshots. If your storage device supports snapshots, take some. In fact, take a bunch -- as many as you can ever see yourself using. See if that affects the overall performance (specifically, the small-block transactional write performance). Many storage arrays that support snapshots use what's called a copy-on-write snapshot algorithm. This means that a separate area of the available disk resources is set aside to store changed data. Anytime you write to a volume while a snapshot is in place, the array must first read the data that will be replaced, write it into the snapshot area, then overwrite the original data with the new data. This can substantially degrade write performance in some circumstances.
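The write amplification described above is easy to quantify. This toy model (my own illustration, not any vendor's actual algorithm) counts the physical I/Os behind each first-time logical write to a block while a copy-on-write snapshot is active:

```python
def cow_physical_ops(logical_writes, snapshot_active):
    """Physical I/Os for first-time writes to distinct blocks.

    With a copy-on-write snapshot in place, each logical write costs
    three physical operations: read the original block, write it into
    the snapshot area, then overwrite the original with the new data.
    """
    if not snapshot_active:
        return logical_writes          # 1 write each, no amplification
    return logical_writes * 3          # read + snapshot write + new write

print(cow_physical_ops(1000, snapshot_active=False))  # 1000
print(cow_physical_ops(1000, snapshot_active=True))   # 3000
```

Real arrays only pay this penalty the first time each block changes after a snapshot, but on a write-heavy transactional workload that first-touch cost alone can be substantial.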

Use a very large test file. Unless you're working with very small databases and/or files, it's best to use a test file size that is significantly larger than whatever cache might be present on your storage array or array controller. This allows your testing to circumvent most of the benefit the cache would otherwise grant and gives you a good worst-case idea of how well the disks themselves will perform.
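A simple helper can size the file for you. The cache size, multiplier, and path below are assumptions to replace with your controller's actual specs; the random data is there because some arrays compress or deduplicate, and a file of zeros would let them cheat:

```python
import os

def make_cache_busting_file(path, cache_bytes, multiple=4,
                            chunk_size=8 * 1024 * 1024):
    """Create an incompressible test file `multiple` times the cache size.

    Random data defeats any compression or dedupe the array might apply,
    and the sheer size ensures most test I/O misses the cache.
    """
    target = cache_bytes * multiple
    written = 0
    with open(path, "wb") as f:
        while written < target:
            n = min(chunk_size, target - written)
            f.write(os.urandom(n))
            written += n
    return written

# Example: an assumed 4GB controller cache calls for a 16GB test file:
# make_cache_busting_file("/mnt/lun0/big_test.bin", 4 * 1024**3)
```

A multiple of 4x the cache is a conservative rule of thumb; the larger the ratio, the closer your results get to raw disk performance.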

Degrade the array. If you're testing a disk array that uses some form of RAID (I hope you are), try yanking out a disk or two to degrade the array and trigger an array reconstruction onto a hot spare. Array leveling and reconstruction events can substantially degrade overall I/O performance. This is a very good thing to know if your applications are sensitive enough to disk performance bottlenecks that they will bog down when performance drops temporarily. The whole idea of RAID is to allow you to continue unscathed when you lose a disk. It's important to know what the performance cost of losing a disk might be, so you can provision extra storage performance headroom to accommodate it if need be.

Allow the array to fill. Some virtualized SAN platforms use internal disk allocation mechanisms that depend on a certain amount of free array capacity to maintain high levels of performance, often requiring that you keep at least 10 percent of the array empty. To see the repercussions of failing to do this, allow your disk array to fill to the point where it is almost entirely allocated and observe the effects. If your disk array suffers a performance penalty when it's full, then you know to avoid that situation at all costs.

Test, test, and test again
