FCoE catches fire at SNW

Storage Networking World came and went this week, and judging from the hype, FCoE (Fibre Channel over Ethernet) is coming out of the conference with a full head of steam.

Understandably so, I might add, because this technology, which only a year ago was little more than a jot on a napkin, is poised to make its mark on the storage market.

Of course, running FC transport over Ethernet has never been impossible from a technical perspective. Politics, as this rather animated discussion proves, have played a large role in ensuring the two technologies have grown up in isolation. Fast-forward 12 months to this recent StorageMojo post and you'll see that the debate over FCoE hasn't slowed down a bit. And it's not surprising, given how much is at stake for both customers and vendors.

Obviously, some iSCSI vendors don't like the idea of FC invading the Ethernet space with a protocol that promises to be much more effective than alternative technologies, such as iFCP. After all, this would nullify, or at least weaken, their traditional advantages -- lower-cost connectivity to servers, for example.

By using a unified transport protocol, customers can access any storage volume from any server, assuming they have the proper gear. Vendors such as Emulex and QLogic are already shipping CNAs (converged network adapters) -- cards that, in essence, connect to an Ethernet wire but speak FC, allowing them to communicate with FC storage arrays.
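To make the "connects to an Ethernet wire but speaks FC" idea concrete, here is a rough Python sketch of how an FCoE frame is laid out: a standard Ethernet header carrying the EtherType assigned to FCoE (0x8906) wraps an encapsulated Fibre Channel frame. The MAC addresses, FC payload, and SOF/EOF code points below are illustrative placeholders; the exact field layout is defined in the FC-BB-5 specification.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame (sketch)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE encapsulation header: 4-bit version plus reserved bits (13 bytes),
    # followed by a 1-byte start-of-frame code (example value; see FC-BB-5)
    fcoe_header = bytes(13) + b"\x2e"
    # Trailer: 1-byte end-of-frame code plus reserved padding (example value)
    fcoe_trailer = b"\x41" + bytes(3)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full FC frame can carry a 2112-byte payload on top of its 24-byte header,
# which is why FCoE needs "baby jumbo" Ethernet frames of roughly 2.5KB.
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",  # placeholder fabric-assigned MAC
                   b"\x00\x1b\x21\x00\x00\x02",  # placeholder CNA MAC
                   bytes(24))                     # placeholder 24-byte FC header only
print(len(frame))  # 14 (Ethernet) + 14 (FCoE) + 24 (FC) + 4 (trailer) = 56
```

The point of the exercise: unlike iSCSI, nothing here is TCP/IP. The FC frame rides directly on Ethernet, which is why FCoE depends on lossless, jumbo-capable 10-Gig Ethernet rather than routable IP networks.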

There is, however, a gap in this scheme. At some point, the FC protocol must jump from the Ethernet wire to the storage fabric. And here's where switch vendors come into the picture. On Tuesday, Cisco announced the availability of the Nexus 5000, a new switch designed specifically for FCoE. We'll see how the switch measures up when I get a chance to review it, but to give an indication of the stakes, Cisco anticipates $14 billion in FCoE-related revenue over the next five years. Cisco partners will secure $4 billion of that succulent pie, the estimate goes.

Whether those numbers are achievable depends entirely on how deeply customers buy into the promise of FCoE. The two areas where FCoE will make the most sense are server and datacenter consolidation. In those instances, the savings of running FC over Ethernet will likely far outweigh the costs. Other reasons for buying into FCoE -- to simplify wiring at the server, for example, or to extend the reach of the storage fabric across the WAN -- would require case-by-case analysis before committing to the technology.

Emulex, QLogic, VMware, and Dell are among the many vendors partnering with Cisco on FCoE. Intel is also jumping into this promising new market and will soon deliver FCoE adapters and chips integrated onto server motherboards.

Of course, despite all this buzz, there is no reason to dismiss iSCSI. For customers who do not have a large investment in FC storage, iSCSI is still a very viable and less expensive proposition. More importantly, some so-called iSCSI solutions offer benefits that go beyond cheap and easy-to-use transport between servers and storage. Think, for example, of Intransa or Isilon: Their multinode, resilient architectures are unrivaled in the FC space. I would argue that customers buy those solutions because of their architectures, with iSCSI being just the icing on the cake.

Nevertheless, it would be naive to ignore that FCoE changes the rules of the storage game. Once the initial dust settles and we get used to the fact that both protocols can run over 10-Gig Ethernet, customers will be less distracted by debates over which transport is better and will instead focus on the storage solution that better serves their business. This alone should make the new protocol a welcome addition to the storage discussion.

Technorati Tags: Cisco,Brocade,Intransa,Isilon,Emulex,QLogic,Dell,Intel,VMware,fibre channel,Ethernet,iSCSI,transport protocol,server virtualization,data center consolidation,storage,switch,fabric,10G
