Generally speaking, you should never expect anything you don't test regularly to work properly. This is true across all kinds of technologies, but the need for regular testing is often overlooked. Would you expect a car you parked in a barn two years ago to start today? If it did, you'd feel lucky. IT systems are no different. You shouldn't count on a successful site failover, to take one important example, if you haven't tested or maintained the systems that make it work.
As critical as testing is, it's often deferred in favor of the never-ending backlog of seemingly more urgent tasks. Forgoing testing completely is obviously dangerous, but it's also dangerous to test your systems in ways that don't meaningfully reflect how the systems would be used when they are really needed. Here are seven things you can do to make your testing count -- and to ensure that the confidence you have in your systems and procedures is well founded.
Testing rule No. 1: Perform real-world tests
The very first step to take is to ensure your tests are as close to real-world circumstances as possible. For example, if you're attempting to test your capability to perform a site failover, be sure to isolate yourself completely from the primary site, just as if it had been rendered inaccessible. You may find that certain parts of your procedures (such as passwords or the procedures themselves!) are either stored at or depend upon resources at the primary site.
The best way to do this is by staging a test at a time when the production environment can be disabled for the purpose, but few of us have user communities and management that will support that idea. Instead, you will probably need to invest some time in verifying that your recovery procedures don't depend on the very infrastructure or services you're trying to recover.
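One low-effort way to start that verification is to audit your recovery runbooks for references to primary-site resources before a live test. The sketch below is a minimal, hypothetical example: the domain name `primary.example.com`, the runbook text, and the function name are all assumptions for illustration, not part of any real environment.

```python
# Hypothetical sketch: scan a recovery runbook for references to
# primary-site hosts that would be unreachable during a real failover.
# "primary.example.com" and the sample runbook are illustrative only.
import re

PRIMARY_SITE_PATTERN = re.compile(r"\b[\w.-]*primary\.example\.com\b")

def find_primary_site_dependencies(runbook_text: str) -> list[str]:
    """Return runbook lines that reference primary-site hosts."""
    flagged = []
    for lineno, line in enumerate(runbook_text.splitlines(), start=1):
        if PRIMARY_SITE_PATTERN.search(line):
            flagged.append(f"line {lineno}: {line.strip()}")
    return flagged

runbook = """\
1. Retrieve failover credentials from vault.primary.example.com
2. Promote the standby database at db.dr.example.com
3. Update DNS to point www at the DR load balancer
"""

for problem in find_primary_site_dependencies(runbook):
    print("WARNING: depends on primary site ->", problem)
```

A static scan like this won't catch every hidden dependency (a DR server might still point at a primary-site DNS or NTP server, for instance), but it's a cheap first pass before you attempt an isolated test.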