What do you mean, that server's live?

In this IT tale, a poorly managed project turns into chaos

I once took a six-month contract in a support position at a well-known health insurance company. Our task was to move operations off the mainframe and onto a series of Linux servers.

And it was chaos.

The middleware converted medical claims into the insurance claims format so that payment could occur. It was written by a third party and, as a perpetual work in progress, underwent constant modification. Our involvement with it was limited to offering suggestions and supporting the product in whatever state it was left from day to day.

The middleware ran on a Linux system, and specific file ownership and permissions were required to access and execute it. The group developing the middleware had an employee we'll call "Steve." He did most of the planning, and he seemed more interested in "power" than in making the middleware work.

The staff the health insurance company had hired were Windows and mainframe programmers with little to no Unix/Linux experience and no clear idea of what would happen when they tried to correct the major planning decisions Steve had made. They and management kept insisting that fixing the problems later would be easy, while in the meantime problem piled upon problem. When those of us with more experience offered suggestions to them, to management, or to the vendor, we were shot down.

Steve owned all of the files and directories involved in claims processing, and every team member was given "Steve's" login credentials to complete the day-to-day tasks needed to get output from the middleware. Initially that may have been tolerable, but soon after I arrived, management decided to go live with the development system -- long before it was ready -- and it became a real problem. We had to glue the product together as we went, since we were moving very large amounts of money between hospitals and insurance companies every day.

One of the main servers was still named "test" when I started. I discovered that "test" was actually a live production system one afternoon when I rebooted it. But since I was logged in as "Steve," along with about 25 other developers, the audit trail went dead immediately.

After six months, Steve still had superpowers on the systems, and management thought it would be too much trouble to correct the file ownership and permissions problems.

I was very happy to drive away on Day 180, my contract completed. They tried to get me to stay on and offered an increase that would have nearly doubled my contract pay. But a little extra money in the bank isn't worth your sanity when the project feels doomed to fail.

I remained in intermittent contact with some of the employees. About a month after I left, two were let go for failing to ensure that a dial-up connection for processing insurance claims was working properly. Claims had been submitted for a couple of weeks, but they were never moved out of the queue for processing. It came to management's attention when calls started coming in from claim submitters looking for thousands of dollars of insurance money.

One of the employees told me that the dial-up connection had gone live without anyone informing his group. It had never been tested and did not work.
