In a recent review, I wrote about consolidating FC and Ethernet networks using FCoE (Fibre Channel over Ethernet) and Cisco's new Nexus 5000 switch. As the review showed, the solution merges the two transport protocols easily, allowing FC frames to navigate through a 10 Gig Ethernet connection without giving up features or performance.
Add a CNA (converged network adapter) to a server, and you can give your applications easy access to FC storage attached to a converged network. Essentially a 10 Gig Ethernet NIC with a split FC personality, the CNA plays both roles at once, which brings several financial and practical benefits.
But what wasn't explicitly addressed in the review is FCoE's effect on server virtualization. For example, what happens when you bring together VMware ESX, VMotion, and FCoE? Is there any advantage over the traditional FC approach?
Emulex made its demo lab available to me to help answer those questions. The lab included three VMware ESX servers connected to a fabric network using either old-fashioned FC cards or the new CNA.
This annotated topology map (please click here for a larger image), taken with Cisco Fabric Manager, shows my test bed. On the left are the FCoE players: an ESX server with an Emulex LP21000 CNA and, connected to it, the Cisco Nexus 5000 switch.
On the right side are the FC devices: two ESX servers, each fitted with Emulex LPe11000 adapters and connected to the Cisco MDS switch.
The yellow line shows the path from the test VM (middle) to the MDS switch and then to its storage devices.
One side note: The Emulex adapters implement NPIV (N_Port ID virtualization), a useful feature that provides each VM with a virtual WWN (worldwide name). NPIV is a much more flexible alternative to having only one WWN assigned to the ESX server and shared by all its VMs.
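To make the NPIV idea concrete, here is a toy model of what the fabric sees. None of these class or method names come from Emulex or VMware; they are illustrative only. The point is that the physical port logs in once, then requests additional N_Port IDs, one per VM, each with its own virtual WWPN:

```python
# Toy model of NPIV (N_Port ID Virtualization). All names here are
# illustrative -- this is not an Emulex or VMware API.

class Fabric:
    """Hands out N_Port IDs (FC addresses) to ports that log in."""
    def __init__(self):
        self._next_id = 0x010001
        self.logins = {}  # WWPN -> assigned N_Port ID

    def login(self, wwpn):
        nport_id = self._next_id
        self._next_id += 1
        self.logins[wwpn] = nport_id
        return nport_id


class NPIVPort:
    """A physical FC port that can carry one virtual WWPN per VM."""
    def __init__(self, fabric, physical_wwpn):
        self.fabric = fabric
        # Initial fabric login (FLOGI) for the physical port.
        self.physical_id = fabric.login(physical_wwpn)
        self.vm_ids = {}  # VM name -> its own N_Port ID

    def add_vm(self, vm_name, virtual_wwpn):
        # Additional login (FDISC) over the same physical link,
        # giving the VM its own fabric identity.
        self.vm_ids[vm_name] = self.fabric.login(virtual_wwpn)
        return self.vm_ids[vm_name]


fabric = Fabric()
port = NPIVPort(fabric, "20:00:00:00:c9:aa:bb:01")  # hypothetical WWPN
port.add_vm("test-vm", "28:00:00:00:c9:aa:bb:02")   # hypothetical WWPN

# The fabric now holds two logins over one cable: the physical port
# and the VM's virtual port.
print(len(fabric.logins))
```

Because each VM presents its own WWN, zoning and LUN masking can target an individual VM rather than the whole ESX host, which is exactly what makes NPIV attractive in a virtualized shop.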
Using VMotion, it was a snap to move the test VM from the original server to the CNA-equipped ESX server, while maintaining access to its LUNs -- handy, should you want to perform maintenance on the original ESX server or balance the load across the two. Once again, the server on the left doesn't have native FC adapters, only a CNA.
I had intended to post a movie clip of the test VM moving to the new ESX server using VMotion, but the action lasted only a few seconds. Not much of a movie, but that's how long it took to drag the test VM from one ESX server to another, working on the VMware Infrastructure Client (the green arrow, top left on the image link, indicates the direction of the move).
Instead of showing such a short clip, here's what the test bed layout looked like after the move, again captured with Fabric Manager. As expected, the test VM is now attached to the first server on the left, and reaches its storage target by going first through the Nexus 5000. Throughout the move, the test VM and its applications (I had Iometer and a movie clip running) remained unaffected by the change. The beauty of the FCoE-plus-VMware approach is that nothing has to be changed on the storage side or on the application server for VMotion to work.
If you are wondering how difficult it is to manage the CNA, the answer is: not very. Since we are on Emulex's turf, the powerful features of its flagship management application, HBAnywhere, still apply, including remote management.
How much will deploying this little marvel cost? Well, for Nexus 5000 pricing, please refer to the review. As for the CNA, the OEMs ultimately set the price, so I did not get a straight figure from Emulex. However, they did describe the ballpark as "less than the total cost of a Fibre Channel HBA and a 10G Ethernet NIC combined."
Indeed, you should be able to save money on adapters, given that a single CNA (two if you need high availability or multipathing) can take on both loads; therefore, you don't need FC adapters on every server. This also means fewer wires and fewer connections to the storage fabric, hence a less expensive and easier-to-support layout.
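The adapter math above is easy to sketch. The server counts and the redundancy assumption below are hypothetical, not figures from the test bed; the point is simply that one converged adapter replaces an FC HBA plus a 10G NIC per server:

```python
# Back-of-the-envelope adapter count for a converged vs. separate layout.
# Assumptions (mine, not from the review): every server gets redundant
# paths, so each adapter type is doubled for HA/multipathing.

def adapters_needed(servers, redundant=True):
    per_type = 2 if redundant else 1
    # Separate networks: one FC HBA + one 10G NIC per server, each doubled.
    separate = servers * per_type * 2
    # Converged network: a single CNA per server, doubled for HA.
    converged = servers * per_type
    return separate, converged


separate, converged = adapters_needed(servers=20)
print(separate, converged)  # prints "80 40"
```

Half the adapters also means half the cables and half the switch ports on the server side, which is where the "less expensive and easier-to-support layout" comes from.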
Whatever you save on hardware, those savings pale next to the exceptional flexibility that the FCoE/VMotion combo brings to the datacenter. VMotion made moving a VM from one server to another as easy as dragging and dropping, provided that all other conditions were met. FCoE devices such as the Emulex CNA and the Nexus 5000 provide the level of network virtualization that removes most of the obstacles to a smooth VMotion. It's a match made in admin heaven.