Many new concepts have arisen with the advent of large-scale virtualization technologies, such as the ability to seamlessly migrate running virtual machines among physical hosts and even across data centers -- not to mention transitioning entire running data centers between real-world locations.
These newfound abilities are quickly dispensing with long-held best practices in disaster recovery and disaster planning. At the same time, they're heavily modifying the underlying network and server architectures that have served IT so very well for many years.
In the not so distant past, a business harboring a keen interest in business continuity with a traditional data center would have a hot site and a warm or cold site located some distance away. The warm or cold site would generally house a subset of the services running in the main location, but would have enough horsepower to maintain critical services in the event of a major outage. Data would be replicated as well as possible given budgetary constraints and bandwidth availability, and while it would be feasible to maintain business operations using the disaster-recovery site, it would still be a major fire drill for IT to stabilize all elements in the face of a disaster.
Now, new virtualization technologies make it possible to dynamically shift an entire data center from one location to another without taking down a single server. Given enough bandwidth between sites and the use of newer virtualization management tools, a few clicks of a mouse can result in hundreds or thousands of VMs being relocated to a site 200 miles away without missing a beat. This gives businesses many more options, such as the ability to evacuate critical systems ahead of a forecasted weather event, ensuring that no matter what happens to the site, business can continue.
Back to school
However, this kind of agility comes at a price. That price is a thorough renovation of traditional networking concepts. VMware's latest release is a prime example. While we've had technologies like VXLAN for a while, the new features in vSphere 5.1 pull into the hypervisor a significant number of functions that would otherwise be handled externally. Firewalls, load balancing, VLANs, routing -- they're now part of the hypervisor network stack, and they're capable enough that in many deployments it will no longer be necessary to maintain separate hardware appliances for those functions.
The new load balancer isn't quite to the level of an F5 box, but it provides a significant number of fundamental features that will allow admins to dispense with external gear. Likewise, the new firewall features and management tools go a long way toward removing the need for those devices.