Epic failures: 11 infamous software bugs

Celebrate 'Debugging Day' by remembering these monster computer problems from the past

The month before the crash, AT&T had tweaked the code to speed up the process. The trouble was, things were now too fast. The first switch to overload sent two messages, one of which hit the second switch just as it was resetting. The second switch assumed that there was a fault in its CCS7 internal logic and reset itself. It put up its own "do not disturb" sign and passed the problem on to a third switch.

The third switch also got overwhelmed and reset itself, and so the problem cascaded through the whole system. All 114 switches in the system kept resetting themselves, until engineers reduced the message load on the whole system and the wave of resets finally broke.
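
The mechanism is easy to caricature in a few lines of code. The sketch below is purely illustrative -- AT&T's 4ESS switch software has never been published, and every number in it is invented -- but it shows why the wave kept circulating until the message traffic slowed down:

    import collections

    # Illustrative sketch only, not AT&T's actual switch code. The rule from
    # the story: a recovering switch announces itself with two closely spaced
    # messages; if the second lands while the neighbor is still busy handling
    # the first, the neighbor declares an internal fault, resets, and becomes
    # the next switch to "recover" and announce itself.
    def simulate(message_gap, busy_window, switches=114, duration=60.0):
        busy_until = collections.defaultdict(float)   # when each switch finishes resetting
        queue = collections.deque([(0.0, 0)])         # (time, switch that just recovered)
        resets = 0
        while queue:
            now, sender = queue.popleft()
            if now > duration:
                break
            neighbor = (sender + 1) % switches
            first, second = now, now + message_gap    # the two recovery messages
            busy_until[neighbor] = first + busy_window
            if second < busy_until[neighbor]:         # second message lands mid-reset
                resets += 1
                queue.append((second + busy_window, neighbor))
        return resets

    print(simulate(message_gap=0.005, busy_window=0.010))  # sped-up code: resets pile up all minute
    print(simulate(message_gap=0.020, busy_window=0.010))  # slower messaging: the wave breaks at once

Reducing the load, as the engineers eventually did, amounts to widening the gap between messages in the second call.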

In the meantime, AT&T lost an estimated $60 million in long-distance charges from calls that didn't go through. The company took a further financial hit a few weeks later when it knocked a third off its regular long-distance rates on Valentine's Day to make amends with customers.

Windows Genuine Disadvantage

Introduced in 2006, Windows Genuine Advantage was never a popular initiative with Microsoft's customers. Consumers had trouble seeing the advantages: It did nothing to help the security or stability of a legitimate Windows installation. All it did was help Microsoft root out software piracy.

In that task, it was as vigilant as, well, a vigilante. In fact, in late August 2007, it found piracy everywhere it looked -- even among thousands of legitimate Windows customers.

On Friday, Aug. 24, someone on the WGA team accidentally installed bug-filled preproduction software on the WGA servers. The team quickly rolled back to a tested release of the software, but they didn't check that their fix actually addressed the problem. It didn't. So for 19 hours, until around 3 p.m. the following day, the servers flagged thousands of WGA clients across the globe as illegal.

Windows XP customers were told they were running pirated software. Windows Vista customers were slapped harder: They had features turned off, including the eye-candy Aero theme and ReadyBoost, which speeds up the system by using flash drives as extra cache.

The first official response to complaints didn't help much: Disgruntled patrons were advised to try to revalidate on Tuesday. But even when the problem was fixed, mid-Saturday afternoon, Vista clients still had to revalidate their Windows installations before they could ReadyBoost their way back into Aero.

OK, so this was a relatively mild issue in engineering terms, and strictly speaking, it was caused by human error. But the error in question was deploying buggy, untested software, and when you factor in the number of people affected, the level of anger induced and the knock-on effect of bad publicity, it was more severe than it seems at first glance.

Grievous bodily bugs

Not all bugs can be laughed off. Some of them are fatal. Medical and military software can be especially dangerous when not properly tested, as these fatal flaws show.

Patriot missile mistiming

During the first Persian Gulf war, Iraqi-fired Scud missiles were the most threatening airborne enemies to U.S. troops. Once one of these speeding death rockets launched, the U.S.'s best defense was to intercept it with an antiballistic Patriot missile. The Patriot worked a bit like a shotgun, getting within range of an oncoming missile before blasting out a cloud of 1,000 pellets to detonate its warhead.

A Patriot needed to deploy its pellets between 5 and 10 meters from an oncoming missile for the best results. That required split-second timing, which is always tricky when two objects are closing on each other at very high speed. Even the Patriot's most prominent booster, then-President George H.W. Bush, conceded that one Scud (out of 42 fired) got past the Patriot. The single failure the president acknowledged was at a U.S. base in Dhahran, Saudi Arabia, on Feb. 25, 1991, and it cost 28 soldiers their lives. The fault was traced to a software error.

The Patriot's trajectory calculations revolved around the timing of radar pulses, and they had to be modified to deal with the high speed of modern missiles. A subroutine was introduced to convert clock readings more accurately into the floating-point figures used in the calculations. It was a neat kludge, but the programmers did not put the call to the subroutine everywhere it was needed. A high-speed trajectory computed from one accurately converted radar timestamp and one less precise one raised the chances of a poorly timed deployment.

Apparently, the issue was known, and a temporary fix was in place: Reboot the system every so often to reset the clocks. Unfortunately, the term "every so often" wasn't defined, and that was the problem in late February at Dhahran. The system had been running for 100 hours, and the clocks were off by about a third of a second. A Scud travels half a kilometer in that time, so there was no chance the Patriot could have intercepted it.
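
The arithmetic is easy to check. The sketch below reconstructs the widely reported figures from the government review of the incident; the Patriot's real code is not public, and the register layout here (23 fractional bits for the stored one-tenth) is an assumption chosen to match the published error:

    from fractions import Fraction

    # Time was counted in tenths of a second and converted to seconds by
    # multiplying by 1/10 held in a 24-bit fixed-point register. 1/10 has no
    # exact binary representation, so the stored constant fell short by a
    # tiny amount on every tick -- and the shortfall grew with uptime.
    FRACTION_BITS = 23   # assumed layout, chosen to reproduce the reported error
    stored_tenth = Fraction(int(Fraction(1, 10) * 2**FRACTION_BITS), 2**FRACTION_BITS)
    error_per_tick = float(Fraction(1, 10) - stored_tenth)   # about 9.5e-8 seconds

    ticks = 100 * 3600 * 10            # tenths of a second in 100 hours of uptime
    drift = ticks * error_per_tick     # about 0.34 s: "a third of a second"

    scud_speed = 1676                  # approximate Scud speed in meters per second
    print(f"clock drift after 100 hours: {drift:.3f} s")
    print(f"distance a Scud covers in that time: {drift * scud_speed:.0f} m")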

On a side note, some experts did dispute the president's claims of a more than 97 percent success rate for Patriots vs. Scuds, so it's possible that this bug caused more (but less high-profile) damage than the incident at Dhahran.

Therac-25 Medical Accelerator disaster

Radiation therapy is a handy tool in the fight against some contained forms of cancer: Beams of electrons zap the bad stuff, and the body disposes of the dead matter. It has a strong success rate, but it depends on accurate aim and focus. That's something that the medical world leaves to machinery. Unfortunately for six patients between 1985 and 1987, the Therac-25 was the machine in question.

The Therac-25 handled two types of therapy: a low-powered direct electron beam and a megavolt X-ray mode, which required shielding and filters and an ion chamber to keep the dangerous beams safely on target. The trouble was that the software that powered the unit was repurposed from the previous model, and it wasn't adequately tested.

If the operators changed the mode of the device too quickly, a race condition occurred: Two sets of instructions were sent, and the first one to arrive set the mode. In six documented cases, this meant that megavolt X-rays were sent, unfiltered and unshielded, toward patients requiring direct electron therapy. At least two of them screamed in pain and tried to run from the room. All of them suffered radiation poisoning, which claimed several lives.
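
A toy version of that class of bug is sketched below. It is an illustration only -- the Therac-25's real control code was PDP-11 software and far more involved -- but it shows how two setup steps reading a shared mode value at different moments can leave the machine half in each mode:

    # Illustration of the class of bug, not the Therac-25's actual code: two
    # setup steps read a shared "mode" setting at different times, and
    # nothing forces them to agree.
    def set_beam_power(mode):
        # X-ray mode drives the beam at full power; electron mode is far weaker.
        return "full power" if mode == "xray" else "low power"

    def position_turntable(mode):
        # X-ray mode swings the target and flattening filter into the beam path.
        return "filter in beam" if mode == "xray" else "filter out of beam"

    # One interleaving that concurrent setup tasks could fall into:
    mode = "xray"                          # operator's first entry
    power = set_beam_power(mode)           # step 1 reads the mode...
    mode = "electron"                      # ...operator quickly edits the prescription...
    turntable = position_turntable(mode)   # ...step 2 reads the *new* mode
    print(power, "+", turntable)           # "full power + filter out of beam": an overdose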

The Therac-25, which was recalled in 1987, has become an object lesson in what can go wrong with powerful medical machinery. The same code didn't cause overdoses in earlier Therac models because hardware interlocks physically prevented them. Reusing code on a new system without thorough testing is a programming no-no, with good reason.

The new system did deliver error messages during race-condition events, but the codes were cryptic, undocumented and easily overridden -- which is what operators did. With adequate documentation and training, the overdoses would never have happened. A second, smaller bug compounded the danger: a one-byte flag variable was incremented on every pass through a setup routine rather than simply set, so it periodically overflowed to zero and a safety check keyed to it was silently skipped.
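
In outline, that second bug behaved like the sketch below (the variable names are invented; the real routine was part of the machine's setup code):

    # Hedged reconstruction of the overflow bug as later analyses described
    # it; names and structure are illustrative, not from the original source.
    def check_collimator_position():
        pass                              # stand-in for the real hardware check

    def setup_pass(flag):
        flag = (flag + 1) % 256           # one-byte arithmetic: 255 + 1 wraps to 0
        if flag != 0:
            check_collimator_position()   # safety check runs only on a nonzero flag
        return flag                       # on a wrap-around pass, the check is skipped

    flag, skipped = 0, 0
    for _ in range(1000):
        flag = setup_pass(flag)
        skipped += (flag == 0)
    print(f"safety check silently skipped {skipped} times in 1,000 passes")   # 3 times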

Multidata Systems/Cobalt-60 overdoses

Unfortunately, the Therac-25 disaster wasn't the last software-related radiation therapy failure. About 15 years after the Therac-25 incidents, a Cobalt-60 machine at Panama's National Cancer Institute overdosed more than two dozen patients with gamma radiation.

As with the Therac-25, the Cobalt-60 system was an accident waiting to happen. Unlike the Therac-25, the Cobalt-60 was an old, overused and undermaintained piece of hardware. The treatment-planning software used with it was an aftermarket package from Multidata Systems, because the Panamanian hospital could not afford what the machine's manufacturer, Theratronics, charged.

Two of the technicians who operated the Cobalt-60 had quit, leaving the rest to work 16-hour days to keep up with treatments. Very sick patients would sometimes wait four to six hours a day for scheduled treatments.

Overworked and tired technicians requested some software maintenance, but management overlooked their requests. Somewhere along the line, the technicians hit upon a more efficient way to line up the shields that defined the radiation's target. It wasn't in the manual, but it seemed to work. Unfortunately, if you lined up the shields in a particular order, an obscure bug in the Multidata software meant that the patients were overirradiated. Because of massive overwork and undersupervision, the process went on for seven months.

By the time Multidata Systems issued an advisory about a "data entry sequence that creates a self-intersecting shape outline" in mid-2001, it was too late for many patients. The exact death toll is hard to calculate -- these were very sick patients even before their treatment -- but it's a tragic mess-up by any measure.
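
Why would a self-intersecting outline matter? Multidata's actual dose calculation was never published in detail, so the sketch below is only a generic illustration of the hazard: hand the same vertices to a standard polygon-area routine in an order that makes the outline cross itself, and the computed area quietly collapses -- and anything derived from it, such as an exposure time, comes out wrong:

    # The shoelace formula: signed area of a polygon from its ordered
    # vertices. A generic illustration, not Multidata's algorithm.
    def shoelace_area(points):
        area = 0.0
        for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
            area += x1 * y2 - x2 * y1
        return area / 2.0

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # vertices entered in order
    bowtie = [(0, 0), (4, 4), (4, 0), (0, 4)]   # same vertices, crossing order

    print(shoelace_area(square))   # 16.0
    print(shoelace_area(bowtie))   # 0.0 -- the two halves cancel out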

Osprey aircraft crash

Two weeks before Christmas in 2000, a U.S. Marine Corps Osprey, a tilt-rotor hybrid of airplane and helicopter, suffered a hydraulic system fault that should have been remedied without loss of life. A hydraulic line broke in one of the two engine nacelles as the Osprey was shifting from airplane to helicopter mode for landing.

According to the Marine Corps major general who presented reports during the investigation of the incident, the trouble was "compounded by a computer software anomaly." The flight-control computer stopped the rotation of the engine pods when it detected the hydraulic failure.

The pilots went through the normal procedure and pressed the primary reset button to re-engage the pods. At this point, both prop rotors went through "significant pitch and thrust changes," which led to a stall. The plane crashed into a marsh and killed all four Marines onboard.

The nature of the software flaw is still hard to track down: Boeing and Bell Helicopter made the Osprey, and Boeing's spokesman said only that changes were made in the software. Requests for details were referred to the government, and as of now, the explanation has not been forthcoming.

End-of-the-world bugs

Remember how the world descended into nuclear oblivion on Sept. 26, 1983? No? Well, thank your lucky stars -- this is a tale of bugs so major they could have brought the entire world to a standstill.

It was all averted by the common sense of one individual, who judged the Soviet early-warning system's faulty reports of incoming missiles to be a false alarm instead of passing them up the chain of command as a real attack -- a report that could have triggered a counterattack on the United States.

The warning system set off klaxons at half past midnight on that September morning. Apparently, the U.S. had launched five nuclear missiles toward what the U.S. president had taken to calling "the Evil Empire."

At the time, Lt. Col. Stanislav Petrov reasoned his way to a decision not to respond: The USSR was in a shouting match with the U.S. about the Soviet attack on Korean Air Lines Flight 007 three weeks earlier, but it was only a rhetorical battle at that stage. Besides, if the U.S. wanted to attack the Soviet Union, would it really launch only five missiles?

Petrov ordered his men to stand down, and 15 minutes later, radar outposts confirmed that there were no incoming missiles. The decision took less than five minutes and was confirmed within half an hour, and the world remained at peace.

When the early-warning system was later analyzed, it was found to have more bugs than a suburban compost heap -- which meant that although Stanislav Petrov had saved the world, he'd made a serious error of judgment: He had shown up the incompetence of Soviet programmers.

This was not good for morale, or for the lieutenant colonel. He was cold-shouldered into an early retirement and was largely unsung until May 21, 2004, when a San Francisco-based organization called the Association of World Citizens bestowed its highest honor -- world citizenship -- and a financial reward on him.

The bug that never was: Black Monday's dark secret

It is a truth universally acknowledged (by people who don't know bugs) that the end of the 1980s stock boom, Black Monday of 1987, was precipitated by buggy software. It was Wall Street's biggest one-day percentage loss ever: The Dow Jones Industrial Average plummeted 508 points, 22.6 percent of its total value, and the S&P 500 dropped 20.4 percent. And it was all the fault of bugs in the computer models.

Except that it wasn't.

Program trading was relatively new and harder to understand back then, and people with diminished pension funds were anxious to find a scapegoat. It was easier to point to a faulty program than to understand overvaluation, a lack of liquidity, international disputes over exchange rates, and the market's notoriously bipolar psychology. So the computers became the bad guys.

Of course, program trading did contribute to the precipitous fall of American markets. The software contained strategy models for handling portfolio insurance, and it was there that the problems of Monday, Oct. 19, 1987, really lay. Portfolio insurance derivatives are tied to the condition of the market. After things nose-dived in Hong Kong and Europe, the sun rose on a Wall Street ready to react: The writers of derivatives sold on every down-tick, and plummeting values triggered a cascade of selling.

But the trading programs just did as they were instructed. The fact that they sold as the financial markets collapsed around them wasn't a bug, it was a feature -- just not a well-thought-out one.
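
The feedback loop itself is simple enough to sketch. The model below is stylized -- the holdings, the sell rule and the price-impact figure are all invented -- but it captures the "feature" the programs were blamed for:

    # Stylized feedback loop, not any firm's actual 1987 portfolio-insurance
    # model. Rule: sell in proportion to the latest decline; in a thin market
    # the selling itself deepens the decline that triggers the next round.
    price, last_price = 100.0, 104.0   # assumed prior close: the day opens down about 4%
    sell_per_point = 22_000            # shares sold per point of decline (invented)
    impact_per_share = 0.00005         # price impact per share sold (invented)

    for round_number in range(1, 6):
        decline = last_price - price                 # how far the market just fell
        shares_sold = sell_per_point * decline       # the programs react to the decline...
        last_price = price
        price -= shares_sold * impact_per_share      # ...and their selling deepens it
        print(f"round {round_number}: sold {shares_sold:9,.0f} shares, price {price:6.2f}")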

Now it's your turn -- tell us your bug tales in the reader comments.

Matt Lake is familiar with quality control systems and auditing, but he is also writing a science book that includes a subchapter on entomology, making him a bug connoisseur in more ways than one.

This story, "Epic failures: 11 infamous software bugs" was originally published by Computerworld.

Copyright © 2010 IDG Communications, Inc.
