The rock-bottom bid price should've raised a red flag. The database issues should've made our company reconsider. Instead, we soldiered on, but no Band-Aid could save this IT project. The one upside: I only watched this IT disaster from the sidelines.
I once worked in IT for a very large company that did everything from engineering and design to manufacturing and sales. It used a lot of computing power, primarily from two vendors, with about 75 percent of the business going to one vendor and the remaining 25 percent to the other.
The sales organization launched a new initiative to improve communications with its zone offices. The plan called for a minicomputer in each zone office and some very large computers at HQ.
Our company put the pilot project out for bid, and the vendor that held 25 percent of our business won with a remarkably low bid of $50,000. It saw lots of potential and was willing to underwrite the cost of the pilot as a marketing expense. It sounded good to us, and we'd been happy with the vendor's work thus far, so our project team signed on.
Strikes one, two, and three
However, the problems started early on. The vendor had fancy report-writing software it believed could be adapted to our needs to get a system up and running in short order. But it became obvious very quickly that the report writer would not meet our requirements, and suddenly a small army of the vendor's programmers showed up to write conventional code.
After months of slogging through code and performing reviews of completed screens, reports, email, and so on, the system was pronounced ready. To begin, though, the database had to be loaded. We watched as the vendor's techs ran into problem after problem.
The hardware was terribly unreliable, and the techs determined it would take many reels of tape (this was back in the day) and a couple of weeks to load the database. In addition, the database checkpoint/restart routines were unstable.
The vendor decided to load two reels of tape, then do an image copy of the database. It would then load two more, and do another image backup. If there was a hardware failure, the vendor could go back to the last image copy, restore, and pick up from there without losing too much time.
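The vendor's batch-and-image-copy approach amounts to a simple checkpointing loop: load a couple of reels, snapshot, and on failure restore the last snapshot and retry rather than restarting the whole load. A minimal sketch of that idea in Python (the data, the failure simulation, and all names here are illustrative, not from the actual system):

```python
import random

def load_with_checkpoints(reels, batch_size=2, fail_rate=0.0, rng=None):
    """Load 'reels' into a simulated database, taking an image copy
    after every successful batch. On a simulated hardware failure,
    restore the last image copy and retry just that batch."""
    rng = rng or random.Random(0)
    db = []                  # the simulated database
    image_copy = list(db)    # last known-good snapshot
    i = 0
    while i < len(reels):
        batch = reels[i:i + batch_size]
        try:
            for reel in batch:
                if rng.random() < fail_rate:
                    raise IOError("simulated hardware failure")
                db.append(reel)
        except IOError:
            db = list(image_copy)  # restore from the last image copy
            continue               # retry the same batch
        image_copy = list(db)      # batch succeeded: take a new image copy
        i += batch_size
    return db
```

The trade-off the story describes is visible here: each checkpoint bounds how much work a failure can destroy, but the snapshots themselves add time to every batch, which is part of why the full load stretched out so long.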
Well, the system went down a lot. The backups took a long time, and so did the restores required after each hardware failure. And the database had so many indices that loading took forever. In fact, from start to finish, the initial database load took six weeks, which meant that by the time it was done, the database was already six weeks out of date.