What the converged data center really means

A big step in the relentless march toward data center efficiency, network convergence will shake up traditional IT roles

In the good old days, it was pretty easy to figure out who was who in most IT departments. You had some folks who did servers, some who ran the network, some who focused on storage, and generally a much larger group dedicated to applications and development.

This siloed arrangement has functioned pretty well. Employees assigned to their respective pieces of the IT infrastructure puzzle could become deeply skilled and experienced in their own areas of responsibility. If a new piece of storage or server tech came out, the network guy didn't need to know much about it, as long as he was aware of how many and what kind of ports the server folks were going to request.

The downside of such compartmentalization is that it tends to breed inefficiencies and the occasional battle of wills. I have worked in environments where a server admin tasked with rolling out a new machine had to file a work order and wait two days to get a pair of network ports configured -- only to file a separate work order to provision a few SAN volumes.

For better or worse, the era of the silo is coming to an end. Convergence is upon us, thanks to the proliferation of server virtualization and the rise of IP storage. Many servers now ship standard with built-in 10GbE CNAs (converged network adapters). Sticking with the old server, storage, and network model simply doesn't fit anymore.

Triumph of the converged network

As the converged data center network evolves, it's becoming harder and harder to determine where the network begins and ends. A huge chunk of the configuration of server virtualization or SAN architecture could easily be considered network configuration these days.

To take a real-life example, imagine a data center composed of a combination of Cisco network gear, EMC CLARiiON storage, and (just to mix it up) HP ProLiant c-Class blade server hardware. In the old model, the Cisco gear would form the network fabric, the EMC storage gear together with dedicated FC switches would form the storage fabric, and the HP blade chassis would be equipped with separate network and FC interconnects and adapters to link to both fabrics individually.

This design is great from an administrative control perspective because it allows for clear demarcation of the three separate management groups. It's also staggeringly inefficient. Investments have to be made in costly 10Gb Ethernet network hardware and management tools at the same time that almost identical investments are being made in 8Gb Fibre Channel storage networking hardware and tools -- all of which come with their own management, training, and support costs.

Today, that same architecture could involve Cisco Nexus switching gear, to which both the HP blade and EMC CLARiiON hardware could attach. The blades can now simply use their built-in dual 10GbE CNAs to reach both storage and the network. Likewise, the CLARiiON, even if it doesn't support FCoE (though many models now do), can jack straight into native FC ports on the Nexus switches. The Nexus is given the job of running both logical fabrics simultaneously.
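To make that concrete, here is a rough sketch of what the unified side of such a setup can look like on a Nexus 5000-class switch. All interface numbers, VLAN/VSAN IDs, and port assignments below are made up for illustration; the exact commands vary by platform and NX-OS release:

```
feature fcoe

! Map an Ethernet VLAN to a Fibre Channel VSAN so FCoE frames
! can ride the converged Ethernet fabric
vlan 100
  fcoe vsan 10

! Virtual Fibre Channel interface bound to the blade-facing 10GbE port
interface vfc10
  bind interface Ethernet1/10
  no shutdown

vsan database
  vsan 10
  vsan 10 interface vfc10

! The same physical port trunks both ordinary Ethernet traffic
! and the FCoE VLAN -- one wire, two logical fabrics
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,100
```

The point of the sketch is the last stanza: a single physical port carries both fabrics, with the network team seeing Ethernet1/10 and the storage team seeing vfc10 as "their" interface on the very same piece of hardware.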

When IT jobs collide

What you're left with is a situation where the people managing the network need to have a much higher degree of understanding about storage and server tech than ever before. Meanwhile, the storage and server administrators find themselves more and more dependent on the network folks to get much of anything done. That's not limited to initial provisioning, either. It runs the gamut from initial setup through performance monitoring and troubleshooting.

You can see this struggle borne out in the tools provided by converged network vendors. Often, there will be management tools for the exact same piece of unified network hardware that are specifically tuned to the requirements of storage admins versus network admins (this is true in the Nexus world, with Data Center Network Manager handling network tasks and Fabric Manager handling storage tasks).

This separation doesn't reflect some inherent difference between provisioning FCoE/FC and provisioning straight-up Ethernet services -- both essentially become network traffic that shares the same resources and hardware. The choice among Ethernet, FCoE, and native FC is really one of configuration and/or what kind of fiber module to stuff into a slot. Instead, it reflects an effort to allow the two administrative roles to remain as separate as possible, no matter how inexorably intertwined they've become.

There are some cases, particularly on the network-to-server handoff, where this administrative demarcation can be treated much more gracefully. Sticking with the Cisco world, the Cisco Nexus 1000V virtual switch is a great example. The 1000V is composed entirely of software integrated into the VMware vSphere hypervisor. It provides the network admin with a switchlike interface through which he can configure individual VMs as if they were attached to a real Cisco switch -- it can even be managed by Cisco's DCNM package as if it were a piece of hardware. At the same time, the virtualization/server administrator maintains visibility of the network admin's work and can still manage the attachment of virtual machines through VMware vCenter without having to delve into the Nexus layer to make changes.
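The mechanism that makes this handoff work on the 1000V is the port profile. As a hedged sketch (the profile name and VLAN number here are hypothetical), the network admin defines something like this in familiar Cisco CLI syntax on the 1000V supervisor module:

```
! Defined by the network admin on the Nexus 1000V
port-profile type vethernet WebTier
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once enabled, "WebTier" simply appears in vCenter as an ordinary port group. The server admin attaches VMs to it from the vSphere side without ever touching the switch CLI, while the network admin retains control over what that port group actually does on the wire.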

This isn't really the case on the storage side, however. There, you increasingly see great VMware vCenter plug-ins that allow you to almost completely manage your SAN from within the virtualization layer. Dell recently released a fairly decent example of this for its EqualLogic iSCSI storage platform (the Host Integration Toolkit for VMware).

In this instance, a VMware appliance integrates storage-related alarms and event logging (covering everything from snapshot reserve consumption to replication status) directly into the vCenter console. You can take SAN-side snapshots of virtual machines and even provision new storage. Being able to drive the virtualization and storage environment from a single pane of glass is undeniably great from an ease-of-management perspective, but it only serves to further blur the lines of administrative control over storage hardware.

Meet the new "data center administrator"

I have no doubt the converged network is here to stay. Traditionally siloed IT departments have a lot of changes in store for them. The time is not far off when staff will no longer be dedicated to storage, networking, or server admin tasks.

The generalized moniker of "data center administrator" is much more fitting, but it brings with it a much higher requirement for generalized knowledge of everything that happens in the data center. Likewise, making it all work without waking up one day to realize you're entirely dependent on Bob -- the solitary soul who actually knows how every piece of the puzzle works -- will be a real challenge for human resource management and training.

Here's hoping we're all up to the challenge.

This article, "What the converged data center really means," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.

Copyright © 2011 IDG Communications, Inc.