It's 1:30 in the morning. By some miracle, you were able to get approval for a four-hour downtime window to complete a long list of overdue patching and network maintenance. Even better, you're done a half-hour early. Life is good!
As you're about to email the third shift to let them know they can get back in ahead of schedule, you remember it: that one setting you always knew was wrong and wanted to fix -- and that, you thought, shouldn't cause any service disruption -- but you never got around to correcting it. Little do you know that "fix" is going to be your undoing.
It doesn't matter what it is. For me, it's been an incorrectly set spanning tree bridge priority or UPS software configured with an inadequate shutdown delay. Either way, half a second after hitting Enter or clicking Apply, your terminal freezes, pings go unanswered, and panic sets in: You've brought down the entire network, and you have no idea why. You thought you finished 30 minutes ahead of schedule, but now that half-hour may not be enough time to run around with a laptop and console cable to figure out what happened, much less fix it.
You can't always avoid situations like this. Bad things happen when you least expect them -- the old adage "If it ain't broke, don't fix it" applies to IT as much as it does to any other field. Nonetheless, you can build safeguards into your network that will drastically reduce the time it takes to fix problems when they arise.