Computing Calamities: Monumental Computing Disasters

by Robert L. Glass

Many great advances in technology have resulted from risky experimentation, but it's critical to remember and study the spectacular failures that also resulted from some of those risks. Failures can be mundane, like the typical complaints of software projects that are behind schedule and over budget, while others can be much more extravagant. In Computing Calamities, Robert L. Glass has collected war stories from around the industry. Laugh at these mistakes, and learn from them. Someone else's failure could be the foundation of your success.

Editorial Reviews

Collects stories of software failures and other computer-related failures in the business world, drawn from newspapers, computing periodicals, and other sources. Meant to teach managers which strategies to avoid, the stories come from such companies as Atari, Wang, Seiko, AT&T, and Citicorp. Annotation © Book News, Inc., Portland, OR.

Product Details

Publisher: Prentice Hall Professional Technical Reference

Read an Excerpt

Introduction: What's So Great About Failure?
Do computing companies and projects fail more often than other companies and projects? Sometimes it feels like it. There, on the nightly news, is yet another tale of woe, a story related to computing that shows some stupid computing system doing something that no sensible human being would ever have done. There, in the computing literature, are the findings of yet another survey saying that 57%, or 68%, or 95% of computing projects fail. There, on the Nightly Business Report, is the story of a computing company whose stock just did a nose-dive into obscurity.

But there is something, I want to say, badly wrong with that picture. For every stupid computer trick on the nightly news, there are a hundred interactions that all of us have with a variety of computing systems on a daily basis where nothing whatsoever goes wrong. For every survey finding showing that an enormous number of computing projects fail, there are systems up and running and doing precisely what they are supposed to do, in companies ranging from the Fortune 500 to the not-so-Fortunate 5,000,000. For every stock nose-dive on NBR, there are stratospheric rises of some other computing company's stock. Something is wrong with that picture, indeed!

To be honest, I don't quite understand what's going on. I read those surveys by quite reputable companies that say things like "84% of US IT projects fail to meet original expectations and 94% of those started have to be restarted" (the Standish Group, published in 1997), and 85% of UK companies "reported problems with systems projects that were either late, over budget, or that fail to deliver planned benefits" (Coopers & Lybrand's Computer Assurance Services risk management group, also published in 1997). And I see lots of other data points that come out telling a roughly similar story.

But the more of those stories I read, the more perplexed I get. For one thing, although lots of these stories quote high percentages of computing project failure, there's very little consistency in the numbers. For example, Standish, which gave us the 84% and 94% figures I quoted above, gave us another set of data in 1995 and 1996; they said (for 1995) that 31% of projects failed and 53% were "challenged," and (for 1996) 40% failed and 33% were "challenged." Other figures from other sources tell a similarly inconsistent story. I could find you data saying that the correct percentage of failed computer projects is 46%, or 62%, or 81%, or any number you'd care to name. And the dilemma here is, those figures simply can't all be right!

I'm a professional in the computing field. I've participated in a lot of projects over the years. Some of them failed. A lot more of them didn't. I very strongly believe that those failure numbers, the ones I've been quoting to you above, are very sincerely obtained, and very wrong!

Some of my best friends in college (ah, those days have long been gone!) were ministerial students. When they wanted to say something nice about the trial sermon that one of them had just given but there was little to compliment, they would say "Well, you were sincere!" That's the sense in which I use the word "sincere" to describe my computing colleagues who generate or quote those failure statistics! I don't think any of them are being insincere; I just think their numbers are relatively worthless.

Let me tell you a story about computing failure stories. Several years ago, it was popular for people who studied the field of computing to speak of a "software crisis." When they named that crisis, what they really meant was that software had the reputation of being "always behind schedule, over budget, and unreliable." To support their declaration of a crisis, they would often quote numbers from a study by a US Government watchdog agency, the General Accounting Office (GAO), which showed that a very high percentage of the government software projects it tracked had failed.

The use of these GAO numbers persisted for several years. Then a colleague of mine got curious about the GAO report and read the original study. He discovered, much to his surprise (and what should have been the chagrin of many computing professionals), that the study was being misinterpreted. The GAO had found a high percentage of failed projects, but the projects it had tracked were already in trouble when the GAO began studying them! In other words, the simple and clear message of the GAO study was that a large percentage of software projects that got into trouble never rose above that trouble. And anyone who used that study to try to show anything else (for example, that a large percentage of all software projects failed) had misunderstood the data.

Call me crazy, but I think a lot of that computing project failure data, when all the smoke clears away, will be of this kind. Misused numbers. Failures to use agreed-upon definitions of terms. Gathering information from biased sources. Making up numbers to fit a preconceived notion.

So, given all of that, what the heck am I doing writing a book about computing calamities? Doesn't a book telling computing failure stories tend to reinforce the notion that computing failure is common? Shouldn't I be trying to write a book about computing successes, instead, to prove the point I apparently so deeply believe in?

There are several reasons why I have chosen to write this book on failure:

  1. Failure is a far stronger learning experience than success. I hope that these stories of failure will contain some lessons learned of value to you.
  2. Failure is just plain fun! I have no idea why we human beings laugh at pies in the face and pratfalls down a flight of stairs, but it seems, for better or worse, to be part of our humanity. I think you're going to enjoy reading the things I've written.
  3. Success is transient. I once added a success story to an earlier book I wrote about failure. It told the story of the Intel 286 chip, and what a great design it was. Ten years or so have now passed since I put that book together. Every single one of the other stories in that book is still up to date; none of the failed projects I wrote about ever came back to life! But the 286 is ancient history. And that story dates the book like nothing else I said in it.
  4. I'm a failure nut! This isn't, as I hinted above, the first book I've written about computing failure. I once wrote a column for Computerworld, way back in the 1960s, which consisted largely of computing failure stories. (The column used disguised names for the people and places whose stories I told, and I wrote it under an assumed name, Miles Benson!) I collected those columns into a couple of self-published books in the 1970s. I also gathered stories about early computing companies and projects that failed, both during the mainframe era and the early microcomputer era, and published those books during the 1980s. And, in fact, I wrote a predecessor to this book, Software Runaways, which Prentice-Hall published in 1998 (actually, in the fall of 1997; like car companies, publishers label books that come out in the fall with the model year still to come!), and which has sold quite handsomely (perhaps that's why you're reading this one!).

The question that began this chapter of the book was "What's so great about failure?" I think I've given you at least an implicit answer to that question. Failure is a learning experience. Failure is fun. Success is transient, and failure usually isn't. Failure never quits happening.

Many of the failure stories I have published over the years were written by someone else. I take a hunter-gatherer approach to these stories, reading quite a bit of the computing literature and a slice of the more general literature, looking for in-depth reports by careful journalists who have pursued a computing failure story and come up with a well-told, fact-based, human-interest-focused tale. Nearly all of the stories in this book are from sources other than me, from publications ranging from Computerworld to the Wall Street Journal. (The source of each story is found at the beginning of the story.)

I love gathering and telling these failure stories. Not because they prove a point (as you have already seen, I don't believe they do), but because I personally find them fascinating and fun.

I hope you do, too!

Robert L. Glass
Summer, 1998
