Instead of showing such a short clip, here's what the test bed layout looked like after the move, again captured with Fabric Manager. As expected, the test VM is now attached to the first server on the left and reaches its storage target through the Nexus 5000. Throughout the move, the test VM and its applications (I had Iometer and a movie clip running) remained unaffected. The beauty of the FCoE-plus-VMware approach is that nothing has to change on the storage side or on the application server for VMotion to work.
If you are wondering how difficult it is to manage the CNA, the answer is not very. As we are on Emulex's turf, the powerful features of its flagship management application, HBAnywhere, still apply, including remote management.
How much will deploying this little marvel cost? Well, for Nexus 5000 pricing, please refer to the review. As for the CNA, the OEMs ultimately set the price, so I did not get a straight figure from Emulex. However, they did describe the ballpark as "less than the total cost of a Fibre Channel HBA and a 10G Ethernet NIC combined."
Indeed, you should be able to save money on adapters, given that a single CNA (two if you need high availability or multipathing) can take on both loads; therefore, you don't need FC adapters on every server. This also means fewer wires and fewer connections to the storage fabric, hence a less expensive and easier-to-support layout.
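The savings argument above is easy to put in rough numbers. The sketch below compares per-server adapter and cabling costs for a traditional build (FC HBA plus 10G NIC, with separate Fibre Channel and Ethernet cabling) against a converged build (a single CNA per path). Every price in it is a made-up placeholder, not a vendor quote, and the function name is purely illustrative:

```python
# Back-of-envelope adapter cost comparison: traditional (FC HBA + 10G NIC,
# two cable plants) vs. converged (one CNA, one unified-fabric cable).
# All dollar figures are hypothetical placeholders, not real pricing.

def adapter_cost(servers, redundant=True):
    per_server = 2 if redundant else 1        # dual adapters for HA/multipathing
    FC_HBA, NIC_10G, CNA = 800, 500, 1100     # hypothetical list prices (USD)
    CABLE_FC, CABLE_ETH, CABLE_CONVERGED = 50, 40, 60

    traditional = servers * per_server * (FC_HBA + NIC_10G + CABLE_FC + CABLE_ETH)
    converged = servers * per_server * (CNA + CABLE_CONVERGED)
    return traditional, converged

trad, conv = adapter_cost(servers=20)
print(f"traditional: ${trad:,}  converged: ${conv:,}  saved: ${trad - conv:,}")
```

Plug in your own adapter and cabling quotes; the point is simply that as long as one CNA costs less than an FC HBA and a 10G NIC combined, as Emulex suggests, the converged build wins on hardware alone, before counting the reduced cabling and support burden.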
Whatever you save on hardware, those savings pale next to the exceptional flexibility that the FCoE/VMotion combo brings to the datacenter. VMotion makes moving a VM from one server to another as easy as dragging and dropping, provided all other conditions are met. FCoE devices such as the Emulex CNA and the Nexus 5000 provide the level of network virtualization that removes most of the obstacles to a smooth VMotion. It's a match made in admin heaven.