As I write this on a cold winter Friday afternoon, most of the Northeastern United States is abuzz with preparations for an incoming blizzard. By the time this posts, we'll all know whether the "snowmageddon" hype the weather folks spun around Storm Nemo was warranted. In the meantime, the forecast of multiple feet of snow combined with high winds has created the real possibility of long-lasting power outages and difficult travel. As a result, IT directors throughout the region are scrambling to make sure they're ready.
Of course, the lead-up to a big storm is always the time people seem to realize they've forgotten a detail or two. As a consultant and adviser, I've received more than a few panicked calls over the years asking whether I happen to have a few extra servers (or even an enterprise-class SAN!) kicking around "just in case." We all know that planning for a disaster when you're staring it in the face is probably too late, but what if there's a different way to approach the question of disaster preparedness in IT?
In the old days, transitioning mission-critical workloads to a data center outside a storm's path was a capability only a few of the very largest enterprises could afford to implement (and even then, only for relatively small portions of their infrastructure). However, as storage and virtualization technologies have continued to advance and cloud infrastructure offerings have matured, the ability to sidestep disasters entirely rather than weathering the storm has become available to many more organizations.
Not all that long ago, if you wanted to make sure your IT operations could avoid a major weather event, you'd need a hot site. Typically, that meant a full-scale data center facility, a copy of nearly every piece of equipment running in your primary data center, complicated failover procedures, and substantial network capacity to support cross-site data replication and post-failover user access.