While this scenario may seem like a comedy of errors, it is no laughing matter. This messaging disaster did in fact occur, and it was enabled by the belief that fault tolerance was the primary safeguard and that backups were needed only as a Hail Mary. Databases will crash, hotfixes will be missed, equipment will fail, and human error will always play a part in data integrity. Although the environment was riddled with failure points, the true source of this disaster was complacency about the backup regime.
What is the takeaway from all of this? "Regardless of your [disaster recovery] solution, in the end, it all comes down to having a reliable backup," said Moe Hoskins.
In a world of disaster recovery sites, high availability, and virtualization, it is easy to become complacent about your backup tools. Perhaps IT is moving toward a world where backups become a thing of the past. But not yet!
Have your own disaster recovery nightmare to relate? By all means, add it to the comments section below for others to laugh at (or cry over) and, I hope, benefit from.