Having started my working life as a software developer, I know a bit about epic bugs. Let's just say I've had my share and leave it at that. At the very least, I can say I never caused any vehicles to crash or any companies to fail.
So, from ComputerWorld, Epic failures: 11 infamous software bugs.
The story is about outright programming errors that caused key failures in their own right.
Have I missed anything important? Consider this a call for nominations for the biggest bugs of all time. These are my suggestions; if you have any honorable mentions, bring 'em on. The worst anyone can do is swat them.
The list includes:
- The Mars Climate Orbiter doesn't orbit
- Mariner 1's five-minute flight
- Forty seconds of Ariane-5
- Pentium chips fail math
- Call waiting ... and waiting ... and waiting
- Windows Genuine Disadvantage
- Patriot missile mistiming
- Therac-25 Medical Accelerator disaster
- Multidata Systems/Cobalt-60 overdoses
- Osprey aircraft crash
- End-of-the-world bugs
Here's one with details:
Pentium chips fail math
In 1994, an entire line of CPUs from market leader Intel simply couldn't do their math. The Pentium floating-point flaw meant that no matter what software you used, your results stood a chance of being inaccurate from the eighth decimal digit onward. The problem lay in a faulty math coprocessor, also known as a floating-point unit. The result was a small possibility of tiny errors in hardcore calculations, but it was a costly PR debacle for Intel.
How did the first generation of Pentiums go wrong? Intel's laudable idea was to triple the execution speed of floating-point calculations by ditching the previous-generation 486 processor's clunky shift-and-subtract algorithm and substituting a lookup-table approach in the Pentium. So far, so smart. The lookup table consisted of 1,066 table entries, downloaded into the programmable logic array of the chip. But only 1,061 entries made it onto the first-generation Pentiums; five got lost on the way.
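To picture what the Pentium's lookup table replaced, here's a toy Python sketch of the older shift-and-subtract approach: a simple restoring divider that produces one quotient bit per iteration. This is my own illustration using plain integers, not the actual 486 microcode (real hardware divides floating-point significands), but it shows why the bit-at-a-time method is slow enough to be worth replacing with a table.

```python
def shift_and_subtract_div(dividend, divisor):
    """Toy restoring division: one quotient bit per loop iteration,
    in the spirit of the 486's shift-and-subtract algorithm.
    Works on non-negative integers; returns (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError("division by zero")
    quotient = 0
    remainder = 0
    # Walk the dividend's bits from most to least significant.
    for i in range(dividend.bit_length() - 1, -1, -1):
        # Shift the next dividend bit into the running remainder.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        # If the divisor fits, subtract it and record a 1 bit.
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1
    return quotient, remainder


print(shift_and_subtract_div(100, 7))  # (14, 2)
```

One quotient bit per cycle is exactly the bottleneck: a table-driven divider like the Pentium's retires multiple quotient bits per step, which is where the claimed speedup came from.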
When the floating-point unit accessed any of the five empty cells, it got back a zero instead of the real table value. A zero in one cell didn't make the whole answer zero: a few obscure divisions came out slightly wrong, typically around the tenth decimal digit, so the error slipped past quality control and into production.
What did that mean for the lay user? Not much. With this kind of bug, there was a 1-in-360-billion chance of a miscalculation reaching as high as the fourth decimal place. Far more likely, at roughly 1-in-9-billion odds, was an error in the ninth or tenth decimal digit.
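Still, the bug was easy to demonstrate once you knew where to look. The most widely circulated test case was the division 4195835 / 3145727: on a correct FPU, x − (x / y) × y comes out at essentially zero, while flawed first-generation Pentiums were reported to return a residual of 256. A short Python sketch of that check (any modern machine will of course show the correct behavior; the flawed values are quoted from contemporary reports):

```python
# Classic Pentium FDIV test case, widely circulated in 1994.
x, y = 4195835.0, 3145727.0

q = x / y            # correct quotient, about 1.333820449...
residual = x - q * y # essentially zero on a correct FPU

print(f"quotient = {q:.15f}")
print(f"residual = {residual}")
# On a flawed first-generation Pentium, the quotient reportedly
# came out around 1.333739..., making the residual 256.
```

Note how far into the digits you have to read before the correct and flawed quotients diverge, which is why ordinary users rarely noticed and why the bug survived quality control.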