Not long ago, most primary storage platforms supported only a single storage protocol, generally either iSCSI or Fibre Channel. But the increasing popularity of deduplication, the wide availability of 10Gbps Ethernet, and the lure of low-latency network convergence made possible by Fibre Channel over Ethernet have since given the industry potent motivation to offer as much choice as it can.
As a result, most major enterprise storage platforms today can support all of these block-level storage protocols, as well as file-based NAS functionality. Generally speaking, the more flexibility a storage platform can offer, the more likely it will be to survive the dramatic changes that are sweeping through the data center today -- from the explosion in corporate data to the drive toward high-density virtualization and private cloud infrastructures that depend upon converged networking.
However good this flexibility may be, the ability to mix and match storage protocols comes with potential liabilities, including added complexity, compatibility problems, and training challenges.
The joy of multiprotocol platforms
Most modern storage platforms ship with built-in connectivity -- either Ethernet or Fibre Channel -- and then allow you to add interface cards to support FCoE, FC, or iSCSI. This gives you the flexibility to upgrade an existing iSCSI or FC-only solution and maintain the same infrastructure, while also putting you in a good position to move toward a different protocol as your needs (and technology) evolve.
As any storage admin will tell you, it's not always easy to get one type of storage fabric to perform to its full potential. The tweaking process almost always involves incredibly specific adjustments to storage hardware, storage software, and operating systems. For example, the guide for integrating VMware vSphere with a legacy FC-only SAN is a good 12 pages long, covering recommended MPIO tweaks, timeouts, firmware combinations -- you name it. Throw in multiple protocols, and things can rapidly become an order of magnitude more complex.
Compatibility is also an increasingly common problem. A simple matrix showing operating system support for a given storage platform won't cut it anymore. New axes must capture the protocols supported, the kinds of multipathing allowed on each OS, which DCBx-capable switches will work, and so on. As a result, it becomes all too easy to assume that a given configuration will work (or perform) well when, in fact, it won't. Carefully studying these increasingly complex support matrices is more important than it has ever been.
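To see why a multi-axis matrix matters, consider a minimal sketch of such a lookup. Every platform, protocol, and multipathing name below is a hypothetical placeholder for illustration, not any vendor's actual support data:

```python
# Hypothetical support matrix: (OS, protocol) -> set of supported
# multipathing options. All entries are illustrative placeholders,
# not real vendor support data.
SUPPORT_MATRIX = {
    ("Windows Server", "iSCSI"): {"MPIO/MSDSM", "vendor DSM"},
    ("Windows Server", "FC"):    {"MPIO/MSDSM"},
    ("ESXi", "FC"):              {"NMP round-robin", "NMP fixed"},
    ("ESXi", "FCoE"):            {"NMP round-robin"},
    ("Linux", "iSCSI"):          {"dm-multipath"},
}

def is_supported(os_name, protocol, multipath):
    """Return True only if this exact OS/protocol/multipathing
    combination appears in the matrix. Anything not listed is
    treated as unsupported -- assuming a combination works is
    exactly how configurations end up in unsupported territory."""
    return multipath in SUPPORT_MATRIX.get((os_name, protocol), set())

print(is_supported("ESXi", "FCoE", "NMP round-robin"))  # supported
print(is_supported("ESXi", "FCoE", "NMP fixed"))        # not listed
```

The point of the sketch is the default: an unlisted combination returns False. Real support matrices work the same way -- absence from the table means unsupported, not merely untested.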
Worse, whether you can use the backup and disaster recovery features built into today's storage platforms may depend on your choice of protocols. For example, many of the application-specific plug-ins (for Microsoft Exchange, SQL Server, or Oracle) that allow primary storage to take application-consistent snapshots have very specific protocol requirements. Supporting such add-ons becomes even more complicated when server virtualization is in the mix.
While most multiprotocol storage platforms can support a mix of block-level protocols without too much trouble, things can get a bit sticky when the vendor has stapled file-level protocols on top. Most traditional SAN vendors approached this issue in the past by developing stand-alone NAS gateways that would work well with their SAN platforms.
Today, the success of fully integrated solutions (for which NetApp is best known) has prompted these vendors to more tightly integrate their NAS products into the SAN -- sometimes natively, but usually through heavy management interface integration. This isn't always bad, but buyers expecting a fully integrated solution may be surprised to find that they actually bought a SAN with a few Windows Storage Server boxes stuffed into the rack above it.