Back in the days before 10Gbps Ethernet was available, the fact that IP-based storage protocols like iSCSI and NFS were stuck at 1Gbps Ethernet speeds was used to argue that 2Gbps and 4Gbps Fibre Channel still reigned supreme. Today, with the wide availability and plummeting cost of 10GbE networking hardware, many now argue -- ironically enough -- that the reverse is true.
Yes, 10GbE can play a tremendously beneficial role in storage networks for a variety of reasons. But focusing solely on wire rates can be extremely misleading, especially when you compare Ethernet to Fibre Channel, which at a fundamental level are two vastly different protocols. That raises a simple question: When do you really need 10GbE rather than 1GbE?
When you take a good, hard look at it, how much do these raw throughput stats actually matter? What does operating at 10Gbps offer you that multiple 1Gbps links do not? The answer might seem obvious, but it's not quite as straightforward as it appears.
Pushing bits
Yes, the laws of physics still hold: You can push roughly 10 times as many bits through a 10GbE pipe as you can through a 1GbE pipe. But let's take a step back and actually look at those numbers.
In real life, you'll get about 110MBps of iSCSI throughput through a single 1Gbps link -- maybe double that with a correctly configured, load-balanced pair. Some 10GbE-based test beds I've seen can push about 1,150MBps over a single link.
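To see where that 110MBps figure comes from, here's a rough back-of-the-envelope sketch in Python; the 10 percent protocol overhead is a ballpark assumption, not a measured value from any particular array.

# Rough usable-throughput math for a single GbE link (the ~10 percent
# protocol overhead is an assumed ballpark, not a measured value).
line_rate_gbps = 1.0                            # raw signaling rate
raw_mb_per_sec = line_rate_gbps * 1000 / 8      # 125 MBps on the wire

overhead_fraction = 0.10                        # Ethernet/IP/TCP/iSCSI headers, ACKs
usable_mb_per_sec = raw_mb_per_sec * (1 - overhead_fraction)
print(f"~{usable_mb_per_sec:.0f} MBps usable")  # ~112 MBps, close to the observed 110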
That's more than 1GBps. I'm sure there are many excellent examples out there, but how many of us can really say we have a legitimate application that needs to move (or is even capable of moving) close to that much data that quickly? Outside of the rarefied air surrounding large enterprises, government, and academia, I'd wager such applications are few and far between.
Generally speaking, the most common storage challenge you face isn't raw data throughput, but rather servicing a multitude of extremely small, entirely random I/O operations. These workloads are common to database applications across the board and are the most difficult for a traditional spinning disk to handle.
That's due to the rotational speed limitations of spinning platters and the speed at which disk heads can seek to a specific point. In the end, a relatively heavy disk load of perhaps 20,000 4KB IOPS (I/Os per second) might add up to only around 80MBps of raw throughput -- well within the capabilities of a single 1GbE link. However, that same load would require more than 110 15K RPM disks due to disk latency limitations. The storage bottleneck for these kinds of workloads is most often found in the disk subsystem itself, not the storage interconnect.
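To put numbers behind that, here's a quick sketch; the figure of roughly 180 IOPS per 15K RPM spindle is a commonly cited rule of thumb, not a number from any specific vendor.

# Random-I/O sizing sketch (rule-of-thumb numbers, not vendor specs).
iops = 20_000                     # small, random I/Os per second
io_size_kb = 4                    # 4KB per operation

throughput_mb_per_sec = iops * io_size_kb / 1024
print(f"Raw throughput: {throughput_mb_per_sec:.0f} MBps")    # ~78 MBps -- fits in one 1GbE link

iops_per_15k_disk = 180           # commonly cited rule of thumb per 15K RPM spindle
disks_needed = iops / iops_per_15k_disk
print(f"15K RPM spindles needed: {disks_needed:.0f}")         # ~111 disks just to keep up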
To be sure, superfast SSDs (solid-state disks) have started to shift that bottleneck back toward the storage interconnect and even to the server itself. But at the moment, the high price point and relatively low capacity of SSDs make them unattractive for all but the most highly transactional applications and well-padded budgets. If you're facing a significant SSD implementation, a 10GbE interconnect may be necessary to fully utilize its potential.
Otherwise, outside of niche applications like high-rate video and imaging, you'll rarely see production disk workloads that eat bandwidth to the point where 10GbE is required. But one key area deserves mention: the ongoing struggle to protect our ever-growing mountains of data with backups.
Unlike database applications, backups generally move very large amounts of data sequentially -- often to high-performance tape drives, which can easily accept more than 1Gbps of sustained throughput. If you have a requirement to back up tens of terabytes of data within a nightly backup window, running a number of these tape drives in parallel may be the only way you can accomplish that goal. If you're in this boat, a 10GbE interconnect might be necessary if for no other reason than to ensure that your SAN can adequately feed your backup infrastructure.
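As a purely hypothetical illustration (the 20TB data set and eight-hour window below are assumed numbers, not figures from any real environment), the arithmetic looks like this:

# Nightly backup window sketch (hypothetical 20TB / 8-hour example).
data_tb = 20
window_hours = 8

required_mb_per_sec = data_tb * 1024 * 1024 / (window_hours * 3600)
print(f"Sustained rate needed: {required_mb_per_sec:.0f} MBps")   # ~728 MBps

# A single 1GbE link tops out around 110-120 MBps, so a load like this needs
# either many parallel links or a 10GbE path between the SAN and the tape drives.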
Shrinking latency
Another area where a 10GbE interconnect can improve performance over 1GbE is latency. But the differential may not be as consequential as you might imagine. Disk issues aside, there are two components of link latency to consider: propagation and serialization.
Propagation is the speed at which data crosses a given medium (fiber, copper, and so on). Moving from 1GbE to 10GbE over the same medium does absolutely nothing to affect this -- electrical or optical signals get from one end of the cable to the other just as fast no matter how much data you're jamming down the pipe. Serialization, on the other hand, is the speed at which you can get a given amount of data into that pipe (or how "wide" the pipe is). In that respect, 10GbE is 10 times as fast.
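For a sense of scale, here's the raw serialization math for a standard 1,500-byte frame and a 9,000-byte jumbo frame at each speed, ignoring preamble and inter-frame gaps:

# Serialization delay: the time needed to clock a frame onto the wire.
def serialization_us(frame_bytes, link_gbps):
    return frame_bytes * 8 / (link_gbps * 1000)       # Gbps == 1,000 bits per microsecond

for frame_bytes in (1500, 9000):                      # standard and jumbo frames
    for link_gbps in (1, 10):
        print(f"{frame_bytes}B frame at {link_gbps}GbE: "
              f"{serialization_us(frame_bytes, link_gbps):.1f} us")
# 1500B: 12.0 us at 1GbE vs. 1.2 us at 10GbE
# 9000B: 72.0 us at 1GbE vs. 7.2 us at 10GbE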
That improvement in serialization time tells only a very small part of the latency story. A tremendous component of link-layer latency is introduced by the interfaces and systems on each end of the connection. At the end of the day, you may find that the 135µs round-trip time for a given packet over a well-tuned 1GbE link falls only to 75µs on a 10GbE link -- most of the remaining overhead is in the devices on either end, not the link itself.
While 10GbE certainly sports lower latency than 1GbE, the difference generally isn't large enough to make a noticeable impact on storage performance -- especially if you're addressing traditional spinning disks. Shaving off 60 microseconds of link latency when your storage may take more than 100 times as long to respond won't achieve much. Again, if you're making heavy use of SSDs, the ratio of link latency to storage latency becomes much larger, and 10GbE may be worth the extra investment.
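A quick ratio check makes the point; the spinning-disk and SSD service times below are ballpark assumptions for illustration only.

# How much does ~60us of link-latency savings matter? (Ballpark service times.)
link_savings_us = 135 - 75          # round-trip improvement from the example above
spinning_disk_us = 7000             # ~7ms for a random read on a 15K RPM disk (assumed)
ssd_us = 200                        # ~0.2ms for a fast SSD read (assumed)

print(f"vs. spinning disk: {link_savings_us / spinning_disk_us:.1%} of total latency")  # under 1 percent
print(f"vs. SSD:           {link_savings_us / ssd_us:.1%} of total latency")            # roughly 30 percent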
Manageability and convergence
The one area where 10GbE really shines is ease of management. While the hardware and software exist to implement MPIO (Multipath I/O) over a number of 1GbE links, it can be a pain to configure, monitor, and manage properly. Even the cabling can become unwieldy -- a 1GbE SAN might have eight or more 1GbE links between two redundant controllers, while a 10GbE SAN will generally have a maximum of four for an active/passive controller architecture.
Even if you don't choose to use 10GbE on the storage device itself, you should strongly consider using it on the server side -- especially if you're virtualizing. An iSCSI-attached virtual host server using 1GbE networking will typically burn at least six 1GbE ports -- perhaps two for host management, two for VM communication, and another two for iSCSI access. You can easily replace all six of those interfaces with a pair of redundant 10GbE interfaces over which you run everything, significantly decreasing your port consumption and cable count while increasing your overall available bandwidth.
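The savings compound across a cluster. Here's the port and cable math for a hypothetical 16-host virtualization farm (the host count is an assumption; the per-host port layout is the one described above):

# Port and cable math for a hypothetical 16-host virtualization cluster.
hosts = 16
ports_1gbe_per_host = 6        # 2 management + 2 VM traffic + 2 iSCSI
ports_10gbe_per_host = 2       # everything converged onto a redundant pair

print(f"1GbE design:  {hosts * ports_1gbe_per_host} cables, "
      f"{ports_1gbe_per_host} Gbps per host")
print(f"10GbE design: {hosts * ports_10gbe_per_host} cables, "
      f"{ports_10gbe_per_host * 10} Gbps per host")
# 96 cables at 6Gbps per host vs. 32 cables at 20Gbps per host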
Moreover, if you're using a blade architecture, you can take that model even further through intelligently converged networking. HP's Virtual Connect module for its c-Class blade offerings is a good example (though certainly not the only one). Using the VC modules, you can create multiple "Flex NICs" on your blades' built-in 10GbE interfaces, each with its own bandwidth limits and network settings. An entire blade chassis might require only two or four external 10GbE links into the rest of your physically switched network -- an incredible management and cost benefit.
Boiling it down
Using the broadest of one-size-fits-all generalizations, 10GbE-attached IP storage can be faster and easier to manage than 1GbE-attached IP storage, but it's still significantly more expensive -- and chances are you don't actually need it (though you'll like it if you get it).
Of course, a few years from now, that statement will be as quaint as what I might have said a few years ago before my laptop had a gig interface: "Nobody needs 1GbE outside of the core network backbone." Or a few years before that: "Nobody needs 100Mbps outside of the core network backbone." Or many, many years before that: "LocalTalk is just fine -- Ethernet is way too expensive." Anyone with more crotchety and self-dating quotes than those is welcome to include them in the comments below.
This article, "The myth of 10GbE IP storage," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com.