Over the years, I've seen firsthand how expensive it becomes when an organization has no planning or coherent strategy. Extreme attempts to cut corners and costs inevitably do severe damage to every department -- and operations is no exception. I've seen this management mentality in many companies, but one experience stands out as the most extreme case I've ever witnessed.
My story begins in August 2001. I was a new hire at a large engineering and consulting firm. A week and a half later, I experienced my first layoff when our firm was purchased by another large engineering and consulting firm. That left 300 engineering staff looking for work a few weeks before Sept. 11. After that day, the tech job market in my area disappeared.
Soon afterward, I interviewed with a local textile manufacturing firm. The owners dodged many of my questions and were very vague about the position -- only describing it as a mixture of IT support and manufacturing software development. Even though warning bells were clanging loudly in my head, due to market conditions I accepted.
I will never forget those first days in particular.
The IT department had always been a one-man shop, and my predecessor had been a technical school graduate with very little IT experience. When he resigned, he did not leave the password or any admin information, so I was unable to log into the system for two days. We had to pay him for two hours' consulting time to get the passwords.
As the only IT staff person, I was on call 24/7, with no overtime pay and no backup, and I was advised to consider selling my house and moving closer to the plant so that I could be on site quickly if a problem arose. I was told this was how it had always been done with their IT staff.
Now about the physical environment: The "datacenter" was located in a house behind the main facility. It seems that the company had contaminated the property years earlier with dyes from their manufacturing process and bought out the house's owner to avoid legal action, then moved part of the day-to-day operations into it. The living room housed the accounting areas and the bedrooms were turned into offices.
The kitchen was the only area left big enough to house the IT equipment. The lone server sat under a large hole in the roof. To avoid an expensive repair, the hole was never fixed; as a compromise, plastic was eventually draped over it to keep the server dry during our frequent southern rain showers. The power strip that sustained the server and its terminals, purchased from the local Wal-Mart, was plugged into the same outlet as the refrigerator and microwave. Outages and server crashes were common.
True to form, the software was also a mess. A few years earlier, management had decided to replace a PDP-based manufacturing system with a new state-of-the-art MRP system. Since my predecessor was primarily the network and PC administrator, the managers decided to figure it out on their own. After much analysis and research, they accepted the lowest bid, then bought the smallest and least expensive server available. For a plant of over 100 terminals, they purchased a 10-user license for the system. Soon after installation, they determined that the software did not meet their needs since it did not work the same way as the previous PDP system had. And, of course, the managers decided that the software had to change (not the process).
So they purchased the source code (Version 1.0.0) and hired their accountant, who dabbled in computers on the side, as a consultant to rewrite the commercial system. Since the "consultant" did not know the platform, the proprietary system programming language, or the company's manufacturing process, he created a system full of work-arounds, hacks, and bugs. The software crashed constantly.
Additionally, because the software had originally been developed in another country, not all of its error messages and text were in English (maybe that fix was in Version 1.1). I was expected to learn another language at my own expense so that I could support the system. And since the source had been modified, we were not eligible for upgrades, technical support, patches, or bug fixes.
I soon discovered that, in addition to my network administration, system administration, and PC support duties, I was expected to take over this system in its current state (so management could save the expense of the highly paid "consultant") and rewrite it in whatever spare time I had. Of course there was no documentation, and the "consultant" was in no mood to share information. The constant crashes generated a mountain of tech support calls -- all to me.
The whole experience became a day-by-day crisis resolution situation. Needless to say, I left the minute a new job became available.