In search of the jack-of-all-trades

As converged, virtualization-based infrastructures become more common, who holds the keys?

As server virtualization becomes ubiquitous, it's having a profound impact on the way we organize roles and responsibilities in IT. Virtualization's tight integration with all levels of the infrastructure renders the old paradigm of siloed server, network, and storage administrators obsolete. Nowhere is this more obvious than when planning for the capability to fail over to a secondary site.

The falling cost of replication-ready SAN hardware combined with increasingly feature-rich virtual site failover tools such as VMware's SRM (Site Recovery Manager) has made the prospect of constructing a warm site appealing to a much broader audience. In the past, setting up a warm site might mean re-engineering the production site infrastructure to allow for SAN replication and to capture application state from physical machines that might not already be on a SAN.


In environments where server virtualization is already implemented -- and runs on a SAN that supports replication -- building a warm site is often simply a matter of affording the extra hardware, software, and WAN bandwidth. That relative ease of deployment, combined with the reduced risk of extended downtime should a sitewide failure occur, has led enterprises that never would have considered building a warm site to jump right in.


As more enterprises pursue warm site deployment, they often have trouble deciding whom to assign the project to. To be sure, any project that involves implementing a new secondary datacenter complete with server, storage, and networking gear will require the attention of experts in all of those fields. But in organizations that still maintain dedicated server, network, and storage specialists, the result is often less than ideal.

On its face, virtualization would appear to be a server-based technology. To a large extent, it is: The hypervisor is there to virtualize servers, after all. But to do that, it also implements its own virtual networking stack and is increasingly integrated with the storage it runs on. It's certainly possible to build a server virtualization environment by drawing on the skills of networking, server, and storage specialists. You might make the same assumption for a warm site: The network techs come in and lay the WAN and LAN, the storage techs implement a SAN and configure replication, and then the server techs come in and get the secondary virtualization cluster built. Right?
