"It was our data from the IBM mainframe," he says. "To my horror, I realized that instead of specifying output to magnetic tape, I specified output to punch cards. I can't remember my JCL very well any more, but as I recall, it was the difference between specifying '=0' versus '=1.' I was absolutely humiliated."
It gets worse. A few days after the entire staff got involved clearing enough floor space for the mountain of boxes, the bill arrived. The cost of a punch-card backup job was nearly $1,000 (and remember, we're talking about 1982 dollars here).
"I had blown our budget out of the water, killed a forest, and still failed to back up our data onto tape," says Guggenheim, who's now Dr. David Guggenheim, Ph.D., president of 1planet 1ocean, and a senior fellow at The Ocean Foundation. "I've spent my career since then doing environmental work, so hopefully I paid penance for the dead trees."
Lessons learned? 1. Little mistakes can cause huge problems, so keep checking until it hurts. 2. Immediately own up to your errors; humility is a great teacher. 3. Take the time to appreciate the humor of a colossal screw-up, says Guggenheim. "It does wonders for the sting."
True IT confession No. 5: Unplug at your own risk
Back in the mid-'90s, Jan Aleman was interim IT manager for a major telecom company in the Netherlands. He was called in to replace a CTO who'd left under less-than-voluntary circumstances. Before the ex-CTO got canned, though, he'd ordered a $300,000 IBM failover system for the company's mission-critical billing engine.
"A very good IBM salesman had sold them this overpriced hardware, assuring them that if the primary system failed it would rollover seamlessly to the secondary one," says Aleman. "He said it was completely redundant, that nothing could go wrong. I said, 'All right, let's see if it actually works.'"
So Aleman yanked the power plug for the primary system out of the wall, right in front of the IBM salesman. All the company's core systems went dark. The critical billing engine was down for the rest of the afternoon. The phone switches still worked, but nobody in the back office could get anything done.
Though the failover system was installed and running, nobody had bothered to test it. So the next thing Aleman did was institute failover tests every other weekend.
"I unplugged the company," says Aleman, who is now CEO of Servoy, a developer of hybrid (SaaS and on-premises) software. "Needless to say, they were not very happy, but nothing bad ever happened to me. I'm still not sure how I managed to pull that off."
Lessons learned? 1. Always test systems before you bet the company on them (repeat as needed). 2. Think twice before you yank that power cord.
True IT confession No. 6: Never let another be the master of your domains
Back around 2003 or so, "Fred" (not his real name) was the IT manager for a regional cable company in the Midwest. At the time, the company had about 35,000 subscribers. To boost its business services, it decided to become a domain name reseller for Network Solutions.