If certain server and virtualization vendors get their way, end-user companies will be buying many fewer individual servers in a few years, and many more integrated packages of infrastructure.
Virtualization has allowed many companies to reduce the number of their physical servers, but it has increased demand for compute power, I/O capacity and storage, according to an April report from IDC.
Cisco Systems and Hewlett-Packard responded by creating "converged infrastructure" packages that include servers, storage and networking components attached to a backplane that makes the whole package one big chunk of compute power that can be divided easily among virtual or physical servers.
That converged approach is a huge advantage, for reasons that are technical, financial and, in some cases, surprisingly mundane, according to Jim Levesque, the systems programmer who manages the virtualized infrastructure for the LA Dept. of Water and Power (LADWP).
The most efficient way to pack servers into a data center right now is with blades fixed in a chassis. But that chassis generates tremendous heat, uses a lot of energy and is a nightmare to install or reconfigure, because the back is a spaghetti-fight of wiring that is unbelievably labor-intensive to manage, according to Levesque.
Virtual I/O servers can make that simpler because they connect to each blade directly, rather than requiring each to have an HBA or NIC installed. Levesque cut networking hardware costs, increased per-server bandwidth and reduced support time for LADWP's 300 physical servers and 350 VMs using virtual I/O servers from Xsigo Systems.
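The arithmetic behind that consolidation can be sketched roughly: with dedicated adapters, every blade needs its own NICs and HBAs, while a shared virtual I/O layer replaces them with a small number of fabric links per blade. The adapter counts and defaults below are hypothetical illustrations, not LADWP's or Xsigo's actual figures.

```python
# Rough model of adapter-count savings from shared virtual I/O.
# All per-blade counts here are illustrative assumptions.

def adapters_traditional(blades, nics_per_blade=2, hbas_per_blade=2):
    """Each blade carries its own Ethernet NICs and Fibre Channel HBAs."""
    return blades * (nics_per_blade + hbas_per_blade)

def adapters_virtual_io(blades, links_per_blade=2):
    """Blades share I/O directors over a fabric; each blade needs only
    a couple of redundant fabric links instead of dedicated NICs/HBAs."""
    return blades * links_per_blade

blades = 300  # physical server count cited in the article
print(adapters_traditional(blades))  # 1200 dedicated adapters
print(adapters_virtual_io(blades))   # 600 shared fabric links
```

The point of the sketch is that the cabling and card count scales with the number of adapter types per blade under the traditional design, but only with the (smaller) number of fabric links once I/O is virtualized and shared.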
Other user advantages
Virtual I/O is just one example of a function that can be stripped out of individual servers in systems designed specifically for virtual infrastructures, however.
"In traditional server design, each block or component had all the bits and pieces required to operate as a standalone unit," according to Craig Thompson, VP of product marketing for I/O server vendor Aprius. "When we look at the direction of the OEMs, the server is quickly becoming a CPU, memory and a couple of high-bandwidth I/O ports that connect to shared resources on a network fabric of some sort."
Forrester analyst John Rymer calls the concept "distributed virtualization" when applied to application servers. The goal: Deliver high performance by virtualizing everything an application needs and provide it dynamically when it's needed.
In the hardware world, it's harder to convince end users that the technical challenge is worth the effort if they don't understand the advantages of virtual I/O, Thompson says. Neither Aprius nor Xsigo has made as much progress as it wanted to along those lines, company spokespeople said.
Packaging virtual I/O and innovative server design has been much easier for HP and Cisco, whose pre-fab virtual infrastructures allow customers to buy chunks of servers in the number they want and install them collectively, rather than building up one chassis at a time.