RPO and RTO
Tower Group classifies business disasters into three categories: natural, such as hurricanes and earthquakes; technological failures; and human, whether deliberate or accidental. But no matter what causes a disaster, how best to recover is constantly being reexamined, Nelsestuen said.
"Companies are asking: 'How can we change our technology infrastructure to make it more recoverable and dynamic?' When failure occurs, your data is still preserved up to that point," he said.
Disaster recovery and business continuity today are often thought of in terms of recovery point objectives (RPO) and recovery time objectives (RTO). In other words, how much data is a company willing to lose if its systems go down (the RPO), and how long is it willing to wait before those systems come back (the RTO)?
For example, a company that synchronously replicates all data to separate data centers that are actively up and running 24/7 has created an architecture with a tight RPO and RTO. A firm that replicates data off site asynchronously, or backs it up only to tape, expects to lose some of the data in transit at the moment of failure and assumes it will take longer to restore systems.
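As a rough, hypothetical sketch (not from the article; every name here is invented for illustration), the recovery-point difference fits in a few lines of Python: a synchronous primary acknowledges a write only after the replica has applied it, while an asynchronous one acknowledges immediately and loses whatever is still in flight when it fails.

```python
# Hypothetical sketch: contrast the recovery point of synchronous
# vs. asynchronous replication when the primary fails mid-stream.

from collections import deque

def lost_writes(mode: str, writes: int, fail_at: int, lag: int = 3) -> int:
    """Return how many committed writes never reach the replica.
    `lag` models the delay of asynchronous replication."""
    replica = 0          # last write confirmed at the replica
    in_flight = deque()  # writes acknowledged but not yet replicated (async)

    for i in range(1, writes + 1):
        if i > fail_at:
            break
        if mode == "sync":
            replica = i            # ack only after the replica applies it
        else:
            in_flight.append(i)    # ack immediately, replicate later
            if len(in_flight) > lag:
                replica = in_flight.popleft()

    return fail_at - replica       # committed on the primary, lost in flight

if __name__ == "__main__":
    print("sync  loss:", lost_writes("sync", 100, fail_at=50))   # 0 writes
    print("async loss:", lost_writes("async", 100, fail_at=50))  # ~lag writes
```

The tradeoff the article describes falls out directly: synchronous replication buys a near-zero RPO at the cost of running live capacity around the clock, while the asynchronous firm accepts losing roughly one replication lag's worth of data.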
"The whole concept before was we have a production data center and then we have the disaster recovery site and that will take 24 to 72 hours to set up and get going," Nelsestuen said. "Now they're looking at making internal backups between the two. There are many institutions running data in multiple data centers throughout the day now."
Virtualization has made recovery more dynamic through self-healing systems and automated failover: when one server or data center goes down, another with the same data can come up almost instantly.
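A minimal sketch of that failover pattern, assuming a simple health-check monitor (the hosts and probe below are hypothetical): when the primary stops responding, traffic is promoted to a standby that already holds the same data, so no lengthy restore step is needed.

```python
# Hypothetical failover monitor: probe the primary on an interval and
# promote a warm standby after repeated failures.

import time
from typing import Callable

def monitor(is_healthy: Callable[[str], bool],
            primary: str, standby: str,
            interval: float = 5.0, max_failures: int = 3) -> str:
    """Return the address that should serve traffic."""
    misses = 0
    while True:
        if is_healthy(primary):
            misses = 0
        else:
            misses += 1
            if misses >= max_failures:
                # The standby already replicates the primary's data,
                # so promotion is near-instant rather than a restore.
                return standby
        time.sleep(interval)

if __name__ == "__main__":
    # Fake probe that reports the primary dead on every check.
    active = monitor(lambda host: host != "dc1.example.com",
                     primary="dc1.example.com",
                     standby="dc2.example.com",
                     interval=0.01)
    print("now serving from:", active)
```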
"It's a lot more dynamic now with the ability to...install backups and roll it back to any point in time," Nelsestuen said. "I've even seen some institutions look at creating a paper trail, so that if all else fails -- get out a slide rule and piece of paper."
Geographic distance was rarely considered prior to 9/11. Most companies were comfortable replicating data between campuses or to a facility within a few miles of the primary data center. A few firms, such as Nasdaq, actually replicated data out of state. Even so, some still get it wrong, Nelsestuen said.
"I know a company that has data centers in Florida and Galveston, Texas, which means a single hurricane could take both of the sites down," he said.
Cloud services, or application and storage service providers, are nothing new. Even before 9/11, companies such as Storage Networks were offering to store business data in an offsite facility that could be accessed remotely in times of disaster.
Today, a combination of public and private cloud services offers a more robust protection scheme in which the most critical business data - that which is needed to keep revenue coming in - is replicated to a service provider or stored in a corporate cloud accessible from any location.
Public clouds are particularly advantageous for small- to medium-sized businesses because they offer enterprise-class disaster recovery at an affordable cost. But experts warn companies not to hog bandwidth: the more data they want to recover, the more it will cost. They should store only what's needed to get the business running again -- not back up to full speed.
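To make that advice concrete, one hypothetical way to scope a cloud recovery set (the dataset names and sizes below are invented) is to tag data by recovery tier and replicate only the revenue-critical tier to the provider.

```python
# Hypothetical tiered-recovery sketch: only tier-1 data, the data needed
# to keep revenue coming in, is replicated to the cloud DR provider.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    gb: int
    tier: int  # 1 = needed to run the business; 2+ = restore later

datasets = [
    Dataset("orders",         50, tier=1),
    Dataset("customer_db",    80, tier=1),
    Dataset("analytics",    2000, tier=2),
    Dataset("email_archive", 500, tier=3),
]

dr_set = [d for d in datasets if d.tier == 1]
print("replicate to cloud DR:", [d.name for d in dr_set])
print("cloud DR footprint (GB):", sum(d.gb for d in dr_set))
# Recovering everything here would mean ~2,630 GB of transfer; the
# critical tier is 130 GB, which keeps both recovery time and fees down.
```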
Another bit of advice: When choosing a cloud service provider, companies should make sure the provider is on a different power grid.