What the software-defined data center really means

As network virtualization matures, the software-defined data center will establish an open-ended environment for innovation

You know the story of how the Internet was created: The military wanted a redundant "network of networks" and figured out how to do it with a new protocol using existing networking equipment.

Something nearly as historic is happening now, again using existing infrastructure: the software-defined data center.

Just as the world changed when isolated networks became the Internet, computing is about to make a quantum leap to "data centers" abstracted from hardware that may reside in multiple physical locations. This pervasive abstraction will enable us to connect, aggregate, and configure computing resources in unprecedented ways.

A totally virtual world
The key enabler of the software-defined data center is virtualization. We can now virtualize and pool the three key components of computing: servers, storage, and networking. At the same time we are reaching a critical mass of sophistication in being able to slice, dice, and compose those pooled virtual resources.
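
To make that composability concrete, here is a minimal Python sketch. The pool and carve abstractions are invented for illustration and don't correspond to any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Aggregate capacity abstracted away from the underlying hardware."""
    vcpus: int        # pooled virtual CPUs across all servers
    storage_gb: int   # pooled block storage
    segments: int     # virtual network segments available

@dataclass
class VirtualDataCenter:
    """A slice of the pools, composed entirely in software."""
    name: str
    vcpus: int
    storage_gb: int
    segments: int

def carve(pool: ResourcePool, name: str, vcpus: int,
          storage_gb: int, segments: int) -> VirtualDataCenter:
    """Compose a virtual data center from pooled resources,
    rejecting requests that exceed remaining capacity."""
    if (vcpus > pool.vcpus or storage_gb > pool.storage_gb
            or segments > pool.segments):
        raise ValueError("insufficient pooled capacity")
    pool.vcpus -= vcpus
    pool.storage_gb -= storage_gb
    pool.segments -= segments
    return VirtualDataCenter(name, vcpus, storage_gb, segments)

pool = ResourcePool(vcpus=1024, storage_gb=500_000, segments=4096)
vdc = carve(pool, "web-tier", vcpus=64, storage_gb=10_000, segments=3)
print(vdc)
```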

The least mature technology to enable the software-defined data center has been network virtualization. But work is under way at Arista, Cisco, Microsoft, and VMware -- the last getting a boost from the acquisition of Nicira -- to allow virtual networks to be provisioned, extended, and even moved within and across physical networks as quickly and easily as we now create and migrate virtual servers.

What does it mean to be able to create software-defined data centers? Imagine if, based on the requirements of key applications, you could wave a mouse and provision a data center to match, configuring pooled resources to meet those requirements point by point. Multiple software-defined data centers could share the same physical infrastructure, each tenant getting its own virtual network with its own authentication and authorization scheme, free of the availability and scalability limitations of conventional VLANs.
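
A sketch of how that multitenancy can work: the overlay encapsulation standards discussed below tag each tenant's traffic with a 24-bit segment ID, so millions of virtual networks can coexist where conventional VLANs top out at 4,094. The allocator below is hypothetical, meant only to illustrate the principle:

```python
class OverlayAllocator:
    """Hypothetical allocator of 24-bit overlay segment IDs, the
    mechanism encapsulation schemes like VXLAN use to isolate tenants."""
    MAX_SEGMENTS = 2 ** 24  # ~16 million segments vs. 4,094 usable VLANs

    def __init__(self):
        self._next_id = 1
        self._tenants = {}  # tenant -> {subnet: segment_id}

    def create_network(self, tenant: str, subnet: str) -> int:
        if self._next_id >= self.MAX_SEGMENTS:
            raise RuntimeError("segment ID space exhausted")
        segment_id = self._next_id
        self._next_id += 1
        self._tenants.setdefault(tenant, {})[subnet] = segment_id
        return segment_id

alloc = OverlayAllocator()
# Two tenants can use identical, overlapping address space; their
# traffic is kept apart by segment ID, not by IP addressing.
a = alloc.create_network("tenant-a", "10.0.0.0/24")
b = alloc.create_network("tenant-b", "10.0.0.0/24")
print(a, b)  # distinct segment IDs for the same subnet
```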

Evolving standards
An early use case of this type of software-defined infrastructure surfaced last week, when eBay went public about its implementation of OpenStack and the Nicira Network Virtualization Platform (NVP). But for network virtualization to proliferate, standards must take root. Two encapsulation standards are competing: VXLAN, backed by VMware and Cisco, and NVGRE, backed by Microsoft. Alongside them sits the OpenFlow protocol, which establishes a standardized interface for controlling network switches and enjoys the backing of most network equipment vendors.
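
To see what an encapsulation standard actually contributes, consider VXLAN's wire format: each Ethernet frame is wrapped in a UDP datagram carrying an 8-byte header whose 24-bit VXLAN Network Identifier (VNI) keeps tenants apart. A minimal sketch of that header, per the VXLAN specification:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned; some early stacks used 8472

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the 'I' bit
    set (VNI is valid), reserved bytes, and the 24-bit VNI. The
    original Ethernet frame follows this header inside a UDP datagram."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag set; all other bits reserved
    return struct.pack("!B3x", flags) + vni.to_bytes(3, "big") + b"\x00"

print(vxlan_header(5000).hex())  # 08 000000 001388 00
```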

Another important piece of the puzzle is Quantum, the evolving networking component of the open source OpenStack project. Quantum provides an application-level abstraction of network resources and features a plug-in API for switch back ends, including Cisco's Nexus line and the open source Open vSwitch virtual switch. This fall will see the first release of OpenStack to include Quantum, along with an improved version of the Compute (Nova) component.
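
As a rough illustration of that application-level abstraction, the sketch below creates a tenant network through Quantum's v2.0 REST API. The controller address and token are placeholders, and field details vary by release:

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute real values. Quantum's
# v2.0 API listens on port 9696 and exposes /v2.0/networks and
# /v2.0/subnets resources.
QUANTUM_URL = "http://controller:9696/v2.0"
TOKEN = "example-auth-token"

def create_network(name: str) -> dict:
    """Ask Quantum for a new tenant network; the configured plug-in
    (Open vSwitch, Cisco, and so on) does the actual wiring."""
    body = json.dumps({"network": {"name": name,
                                   "admin_state_up": True}}).encode()
    request = urllib.request.Request(
        QUANTUM_URL + "/networks",
        data=body,
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

network = create_network("web-tier")
print(network["network"]["id"])
```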
