Programming as if People Mattered
Friendly Programs, Software Engineering, and Other Noble Delusions
By Nathaniel S. Borenstein
PRINCETON UNIVERSITY PRESS Copyright © 1991 Princeton University Press
All rights reserved.
The Hostile Beast
We men of today are insatiably curious about ourselves and desperately in need of reassurance. Beneath our boisterous self-confidence is fear—a growing fear of the future we are in the process of creating.
We have all heard, far too many times, about the depth, breadth, and profound importance of the computer revolution. Computers, we are told, will soon change almost every aspect of human life. These changes are generally perceived as being for the better, although there is occasional disagreement. But, for better or worse, most of us need only to look around to see proof of the computer revolution throughout our daily lives.
The computer revolution has happened in two stages. The early stage created computers and introduced them to a specialized elite. The second stage, which began in the 1970s with the introduction of the microcomputer, has brought them into our daily lives. Unfortunately, the problems created by the first part of the revolution were (and still are) so immense that many of the researchers who lived through it have scarcely noticed the second wave, in which computers met the common man.
The earliest computers were tyrants—cruel and fussy beasts that demanded, as their due, near-worship from the humans who dealt with them. They indulged in a bacchanalia of electric power, and demanded banquets of flawless stacks of punched cards served up by white-coated technicians before they would deign to answer the questions posed by the puny humans.
As it turned out, most people never joined the cult, and were never initiated into the esoteric mysteries of computation. Indeed, to this day, most human beings on planet Earth have no idea what a GOTO statement is, much less why it might be considered evil or even harmful. Yet this was the topic of the debate of the century for the computer elite—a controversy that still simmers after nearly two decades (Dijkstra 1968 and 1987).
As economic factors have encouraged the spread of computer applications from the laboratory to the office and the home, they have forced the computer to adopt a more accommodating posture. The governing notion today is that computers are tools, objects that can be used by ordinary people for their own ends. The resulting emphasis on simple, flexible, or (misguidedly) "friendly" user interfaces to computers has placed new and unprecedented demands on software engineers.
Imagine, for example, telling a construction engineer that his new bridge had to be "flexible" enough to meet the needs of its users. Perhaps to accommodate people who fear one type of bridge but not another, the bridge might have to be capable of changing instantly from a suspension bridge into a truss bridge, or vice versa. The idea is ludicrous only because of the physical limitations involved in bridges, which are of course subject to the constraints of physical laws.
For better or worse, software is free of nearly all such fundamental limitations. Bound only by logic, software can perform amazing feats of self-modification, of customization to the needs of individual users, and much more. But the cost of such efforts is often very high, especially when it produces (as it usually seems to) poorly structured software systems with high maintenance costs.
The solution of such problems is properly the province of a discipline that has come to be known as software engineering. Software engineering is young by the standards of the larger engineering community, having its origins in a conference in West Germany in 1968 (Naur and Randell 1968). But in the computer world, 1968 is ancient history, and the established software engineering practices seem increasingly irrelevant to the reality of user-centered computing. Software engineering has struggled so valiantly, and so single-mindedly, with the incredible problems of creating large software systems that it has for the most part failed to acknowledge the new problems that have been introduced by the demand for better user-interface technology.
The rules of the software engineering game have changed, and unfortunately they have gotten even harder. In particular, the reality of computing today is an increased focus on user interfaces, an area in which the hard-won lessons of the last twenty years of software engineering research are not merely inadequate, but may in some cases actually create more trouble than they can prevent or cure.
Unfortunately, this book does not offer any panaceas, any more than classical software engineering has been able to do so (Brooks 1987). Instead, it seeks to clarify the nature of the problem, and to offer some tentative steps in the direction of a solution. Some of these steps are controversial from the perspective of classical software engineering, although many are established dogma from other perspectives. It should be made clear from the outset that although this book sets forth various claims, it makes no pretense of proving them. The evidence that would definitively confirm or refute these claims does not, for the most part, exist. It is my hope that this book will help to stimulate further discussion and careful, rigorous experimentation.
It should also be understood that most of the particular claims in this book are not new, but have been around in various forms for many years. What I have attempted to do, rather, is to bring together some of the bits of knowledge to be found in various noncommunicating academic disciplines, and to organize these bits as a more coherent vision of how human-centered software can be regularly and reliably built.
What Software Engineering Is For
From a researcher's perspective, it might seem that the last thing software engineering needs in the 1990s is a revolution against orthodoxy. Revolutions disrupt established bodies of theory and practical knowledge, both of which are in remarkably short supply in the world of software engineering. There are really only a few aspects of software engineering that are yet well-enough entrenched to be considered orthodoxy. To understand why one would pick on this poor orphan of engineering, one must first consider the goals and achievements of software engineering as it enters the 1990s.
The basic goal of the discipline is a simple one: to make software better. "Better" is in this case usually expanded into a list of more concrete goals, many of them closely related, some in perpetual conflict. Software is better if it is more efficient, more reliable, less expensive, more easily maintained, more easily used, more easily transported into other environments, and so on. Each of these concrete goals may itself be expanded into conflicting goals. "More efficient" may refer to time efficiency (speed) or space efficiency (memory use). "More reliable" may refer to lack of bugs or completeness of design. Software engineering was born amid a widely perceived but ill-defined "software crisis," and the lack of clarity about the nature of the crisis is reflected in the lack of a coherent goal for the discipline that seeks to solve the crisis.
Such confusion is not, in general, fatal. Engineering is, in a very real sense, the science of intelligent trade-offs. Bridge designers strive toward totally reliable bridges, but they also try to make them as inexpensive as possible, and the two goals inevitably conflict at times. An intelligent and conscientious bridge designer knows, in general, how to resolve such trade-offs, and can rely on a centuries-old accumulation of experience, knowledge, and science. Still it will help, as we proceed to look at what software engineering has accomplished, to bear in mind the fundamental confusion of its goals.
What Software Engineering Has and Has Not Achieved
After quite a few false starts, software engineering is not without significant accomplishments. The staples of the discipline are formal design methodologies and improved development tools. Its impact is clearest in the way the largest of software projects are built, and in the good and bad aspects of the quintessential 1980s programming language, Ada.
Large software projects are the bread and butter of software engineering. Any programmer worth his salt can find some way to keep a horrible five-thousand-line program running; the real test comes with programs of five hundred thousand or a few million lines of code. It is here that software engineering has focused its efforts.
The quick summary of the news on large programs is unglamorous, but useful: you can't cut any corners, and you have to plan everything in advance. This is the basic message behind the plethora of software engineering methodologies that have come and gone in recent years. Indeed, there is evidence, both formal and anecdotal, to suggest that quite a few of these methodologies have, when applied to large projects, yielded significant benefits in the form of more reliable and maintainable software. The resulting programs are still huge, expensive, and nowhere near reliable enough to support the requirements of, for example, the Strategic Defense Initiative (Parnas 1985), but they still tend to look good in comparison to large projects constructed without the benefit of such methodologies.
Where small- and medium-scale software projects are involved, the answer is less clear. Anecdotally, there appears to be a dramatic rift between the software engineering "fanatics," who are rumored to write detailed requirements specifications, design specifications, and implementation specifications for a simple bubble-sort subroutine, and the "freethinkers" who consider all such formalism to be nothing more than bureaucratic red tape and obstructionist nonsense. Experimental results have not really substantiated the benefits of applying rigorous formalism to smaller projects, but this could simply reflect the difficulty of obtaining statistically significant results given the smaller dimensions of the projects themselves.
Formal methods notwithstanding, it is probably in the area of tools that software engineering has made the most headway. Software engineers have pioneered the development of programming environments—integrated systems to support programmers and make their efforts more productive and coherent. In addition, software engineering has left its mark indelibly on the most important new programming language of recent years, the Ada language. Ada, indeed, is a veritable smorgasbord of software engineering delights. It has elaborate features to support machine independence, including such exotica as compile-time inquiries into a machine's arithmetic precision. With its separation of modules into package specifications and implementations, it takes modular code development to its logical conclusion. Unusual features such as tasks and generic procedures add significant resources to the programmer's toolbox. Finally, by establishing specifications for program development environments in tandem with the specifications for the language itself, the Ada designers established the importance of such environments once and for all in the minds of many who were previously nonbelievers.
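Ada's separation of a package into a specification and a body, mentioned above, can be sketched as follows. This fragment is illustrative only and does not appear in the book; the package and procedure names are invented for the example.

```ada
-- Package specification: the public contract that client code
-- compiles against.
package Stacks is
   type Stack is limited private;
   procedure Push (S : in out Stack; Item : Integer);
   procedure Pop  (S : in out Stack; Item : out Integer);
private
   type Int_Array is array (1 .. 100) of Integer;
   type Stack is record
      Data : Int_Array;
      Top  : Natural := 0;
   end record;
end Stacks;

-- Package body: the implementation, which can be revised and
-- recompiled without disturbing clients of the specification.
package body Stacks is
   procedure Push (S : in out Stack; Item : Integer) is
   begin
      S.Top := S.Top + 1;
      S.Data (S.Top) := Item;
   end Push;

   procedure Pop (S : in out Stack; Item : out Integer) is
   begin
      Item := S.Data (S.Top);
      S.Top := S.Top - 1;
   end Pop;
end Stacks;
```

Because clients see only the specification, the body is free to change independently; this enforced separation of interface from implementation is the sense in which Ada takes modular code development to its logical conclusion.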
Software engineering has accomplished far more than can be surveyed here, but even this brief summary should make it clear that the discipline has made significant contributions. It has not, alas, come at all close to achieving its ultimate goals. Today the "software crisis" is as serious as ever, although we do seem to have gotten rather used to it.
When an effort begins to design a major new user interface, though, managers still get ulcers. The two most predictable results of such projects seem to be either a reasonably reliable piece of software that users hate, or well-liked software that nobody understands well enough to maintain. On the subject of why this happens, software engineers are conspicuously silent. Meanwhile, many of those who fancy themselves user-interface specialists are rarely even interested in the question, maintaining that software engineering is essentially irrelevant to the art of user-interface design.
Why Humans Make Software Messy
From the perspective of the software builder, things are a lot easier if you can keep people out of the picture. Software engineers know—often from years of experience in persuading huge IBM mainframes to perform incredibly complex tasks—the right way to build software. The right way, it turns out, is to design everything in advance, write out careful specifications for every step of the process, and subject those specification documents to review by a small army of your peers before you write a single line of code. Although this description may sound facetious, it is essentially the way things are currently done in large software projects, and it is, by and large, a very good thing that they are done that way, for one very simple reason: it works.
Yes, one can get software to work using a less structured approach, but it generally won't work as well or as long or be as easy to maintain. Moreover, the engineering approach produces software that is almost guaranteed to be more stable: given the mountains of specification documents, it is relatively easy to deflect fundamental criticisms of running software with replies like, "you should have pointed that out when we were discussing the design specs." Because the specifications met with approval from all the interested parties, all those parties are reasonably likely to be happy with the way the resulting software works, assuming it does work at all. At the very least, they may feel a bit embarrassed about complaining at too late a stage, since this will reflect badly on their own role in the earlier design process.
But this is precisely where user interfaces begin to mess things up. User-interface software should, as its primary mission, make the computer easy and pleasant for humans to use. In most cases, the target humans are relatively unsophisticated, and certainly not a part of the design team. Thus you can't, for example, wave them away by telling them they should have complained when the design specifications were being circulated, because they probably don't even know what design specifications are.
This wouldn't be a problem if designers were, in general, good at anticipating what users will want their interfaces to be like. General experience tends to indicate, however, that they aren't. A number of explanations have been offered for the remarkable frequency with which user-interface designers misjudge the wants and needs of their audiences. Jim Morris, the first director of the Andrew Project, has suggested that the problem may lie in the type of people designing user interfaces. The highly rational "left-brained" computer programmers may tend to produce interfaces that the more intuitive, artistic, "right-brained" people despise. This would be an encouraging answer, because we could then let "artistic" people design programs while the more rational folks write them. But there is little evidence that this would work. The Andrew Project (to be described shortly), for example, experimented substantially with consultants from such diverse academic departments as art, design, and English. Excellent results were achieved, but generally only after several iterations—the artists were not, in general, any better able than the programmers to design things "right" the first time around, but they were particularly adept at finding flaws in the prototypes once they were built.
A more discouraging answer, but perhaps a more accurate one, is that people are fundamentally unpredictable. Even the best designers frequently have to make massive changes to satisfy their users. If it were possible to predict human taste in software user interfaces reliably, it would probably be equally possible to predict tastes in music, literature, and new television shows. As modern marketing research has revealed, this is not an easy thing to do.
But if no amount of preimplementation design reviews is going to guarantee the production of a good, well-liked user interface, where does that leave the designers of large systems? For many of them, the answer is intuitively clear. "I wouldn't touch user-interface programming with a ten-foot pole" is a position frequently found among professional software engineers, who can then happily feel superior about the clean, successful systems they build, which happen not to have substantial user-interface components. Naturally, this doesn't endear the software engineers to the self-styled "software artists" who currently do a great deal of the world's successful user-interface design.