A few years ago, if I had said I was running a "converged network," you might have assumed I had just installed a shiny new network-attached VoIP phone system. Today, convergence has a completely different meaning.
In conventional enterprise data centers, there are at least two networks: one built on Ethernet, which allows users to access their applications on servers, and a second, often built on Fibre Channel, which enables those servers to access mountains of data in storage arrays. Both of these networks are huge capital investments with their own specialized hardware. They have vastly different management tools and require completely different skill sets to build and maintain.
Wouldn't it be more cost-efficient to have just one network? That's the promise of converged networking: one highly scalable, high-performance network with consistent management tools that can handle both Ethernet and storage traffic.
This kind of convergence has been possible for quite some time with IP-based storage protocols such as iSCSI, but until recently it was never a particularly viable option for large enterprises. At first, that was because 1Gbps Ethernet couldn't handle the loads enterprises throw at their 4Gbps and 8Gbps Fibre Channel storage networks. Now that the majority of large enterprises have upgraded to 10Gbps Ethernet, you'd think the problem would have solved itself -- except the needs of convergence go beyond having a really fast pipe.
One of Fibre Channel's strongest features is that it is an assured-delivery protocol: its buffer-to-buffer credit scheme keeps a sender from transmitting until the receiver has room, so in a healthy network no Fibre Channel frame is ever dropped in transit. Ethernet was never designed to work this way. Instead, Ethernet networks have typically depended on higher-layer protocols at Layers 3 and 4, such as TCP/IP, to recognize and adapt to network congestion and packet loss. Using these higher-layer protocols to implement flow control and loss recovery is both complex and expensive from a latency perspective.
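To make that latency cost concrete, here's a back-of-the-envelope sketch. The round-trip time is an illustrative assumption (not a measurement), though the 200ms floor reflects Linux's default minimum TCP retransmission timeout: recovering a single lost packet via TCP fast retransmit costs roughly one extra round trip, while timeout-based recovery stalls the sender for the full RTO.

```python
# Illustrative comparison: the cost of recovering one lost packet at
# Layer 4 (TCP) versus never dropping it at Layer 2 (a lossless fabric).
# The RTT figure is an assumption for illustration, not a measurement.

rtt_us = 100          # assumed intra-data-center round-trip time, microseconds
min_rto_us = 200_000  # Linux's default minimum retransmission timeout (~200 ms)

# Fast retransmit (triggered by triple duplicate ACKs): roughly one extra RTT.
fast_retransmit_penalty_us = rtt_us

# Timeout-based recovery: the sender stalls for the entire RTO before resending.
timeout_penalty_us = min_rto_us

print(f"fast retransmit adds ~{fast_retransmit_penalty_us} us")
print(f"RTO-based recovery adds ~{timeout_penalty_us} us "
      f"({timeout_penalty_us // rtt_us}x the round-trip time)")
```

Even in the gentler fast-retransmit case, the recovery happens end to end rather than hop by hop, which is exactly the overhead a lossless fabric avoids.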