The Digital Evolution
To a species that seeks to communicate, offering instantaneous
global connectivity is like wiring the pleasure center of a rat's brain
to a bar that the rat then presses over and over until it drops from
exhaustion. I hit bottom when I found myself in Corsica, at the
beach, in a pay phone booth, connecting my laptop with an
acoustic modem coupler to take care of some perfectly mundane
e-mail. That's when I threw away my pager that delivered e-mail to
me as I traveled, disconnected my cell phone that let my laptop
download e-mail anywhere, and began answering e-mail just once
a day. These technologies were like the rat's pleasure bar, just capable
enough to provide instant communication gratification, but
not intelligent enough to help me manage that communication.
The Digital Revolution has promised a future of boundless opportunity
accessed at the speed of light, of a globally wired world,
of unlimited entertainment and education within everyone's reach.
The digital reality is something less than that.
The World Wide Web touches the rather limited subset of human
experience spent sitting alone staring at a screen. The way we
browse the Web, clicking with a mouse, is like what a child does sitting
in a shopping cart at a supermarket, pointing at things of interest,
perpetually straining for treats that are just out of reach. Children at play present a much more compelling metaphor
for interacting with information, using all of their senses to explore
and create, in groups as well as alone. Anyone who was ever a child
understands these skills, but computers don't.
The march of technical progress threatens to turn the simple
pleasure of reading a nicely bound book, or writing with a fountain
pen, into an antitechnical act of defiance. To keep up, your book
should be on a CD-ROM, and your writing should be done with a
stylus on a computer's input tablet. But the reality is that books or
fountain pens really do perform their intended jobs better than
their digital descendants.
This doesn't mean we should throw out our computers; it means
we should expect much more from them. As radical as it may
sound, it actually is increasingly possible to ask that new technology
work as well as what it presumes to replace. We've only recently
been able to explain why the printing on a sheet of paper
looks so much better than the same text on a computer screen, and
are just beginning to glimpse how we can use electronic inks to turn
paper itself into a display so that the contents of a book can change.
Unless this challenge is taken seriously, the connected digital
world will remain full of barriers. A computer with a keyboard and
mouse can be used by only one person at a time, helping you communicate
with someone on the other side of the world but not with
someone in the same room. Inexpensive computers still cost as much as a used car, and are far harder to learn to operate, dividing society into an information-rich upper class
and an information-poor underclass. A compact disc player faithfully
reproduces the artistic creations of a very small group of people
and turns everyone else into passive consumers. We live in a
three-dimensional world, but displays and printers restrict information
to two-dimensional surfaces. A desktop computer requires
a desk, and a laptop computer requires a lap, forcing you to sit still.
Either you can take a walk, or you can use a computer.
These problems can all be fixed by dismantling the real barrier,
the one between digital information and our physical world. We're
made out of atoms, and will continue to be for the foreseeable future.
All of the bits in the world are of no use unless they can meet
us out here on our terms. The very notion that computers can create
a virtual reality requires an awkward linguistic construction to
refer to "real," reality, what's left outside the computer. That's backward.
Rather than replace our world, we should first look to machines
to enhance it.
The Digital Revolution is an incomplete story. There is a disconnect
between the breathless pronouncements of cyber gurus and the
experience of ordinary people left perpetually upgrading hardware
to meet the demands of new software, or wondering where their
files have gone, or trying to understand why they can't connect to
the network. The revolution so far has been for the computers, not the people.
Digital data of all kinds, whether an e-mail message or a movie,
is encoded as a string of 0's and 1's because of a remarkable discovery
by Claude Shannon and John von Neumann in the 1940s.
Prior to their work, it was obvious that engineered systems degraded
with time and use. A tape recording sounds worse after it is
duplicated, a photocopy is less satisfactory than an original, a telephone
call becomes more garbled the farther it has to travel. They
showed that this is not so for a digital representation. Errors still
occur in digital systems, but instead of continuously degrading the
performance there is a threshold below which errors can be corrected
with near certainty. This means that you can send a message
to the next room, or to the next planet, and be confident that it will
arrive in the form in which you sent it. In fact, our understanding
of how to correct errors has improved so quickly over time that it
has enabled deep-space probes to continue sending data at the same
rate even though their signals have been growing steadily weaker.
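
The threshold is easy to see with the crudest possible code, a toy sketch in Python (my illustration, not the far cleverer codes real systems use): repeat each bit three times and take a majority vote at the receiver. If the channel flips fewer than half the bits, the decoded message comes out more faithful than the raw one, and repeating more times drives the error rate as low as you like.

```python
import random

def send(bits, p):
    """A noisy channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def encode(bits, n=3):
    """Toy repetition code: say each bit n times."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote over each group of n received bits."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

message = [random.randint(0, 1) for _ in range(100_000)]
p = 0.05  # 5 percent of raw bits arrive wrong
raw = send(message, p)
fixed = decode(send(encode(message), p))
print(sum(a != b for a, b in zip(message, raw)) / len(message))    # ~0.05
print(sum(a != b for a, b in zip(message, fixed)) / len(message))  # ~0.007
```
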
The same digital error correction argument applies to manipulating
and storing information. A computer can be made out of imperfect
components yet faithfully execute a sequence of instructions
for as long as is needed. Data can be saved to a medium that is sure
to degrade, but be kept in perpetuity as long as it is periodically
copied with error correction. Although no one knows how long a
single CD will last, our society can leave a permanent legacy that
will outlast any stone tablet as long as someone or something is
around to do a bit of regular housekeeping.
In the 1980s at centers such as MIT's Media Lab, people realized
that the implications of a digital representation go far beyond reliability:
content can transcend its physical representation. At the
time ruinous battles were being fought over formats for electronic
media such as videotapes (Sony's Betamax vs. Matsushita's VHS)
and high-definition television (the United States vs. Europe vs.
Japan). These were analog formats, embodied in the custom circuitry
needed to decode them. Because hardware sales were tied to
the format, control over the standards was seen as the path to economic success.
These heated arguments missed the simple observation that a
stream of digital data could equally well represent a video, or some
text, or a computer program. Rather than decide in advance which
format to use, data can be sent with the instructions for a computer
to decode them so that anyone is free to define a new standard
without needing new hardware. The consumers who were supposed
to be buying high-definition television sets from the national
champions are instead buying PCs to access the Web.
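
In software the observation is almost trivial to act on. Here is a hypothetical sketch (the format names and functions are invented for illustration): every message travels with a tag naming its format, and anyone can register a decoder for a new format without changing anything downstream.

```python
decoders = {}

def register(fmt):
    """Anyone may define a new format; no new hardware required."""
    def wrap(fn):
        decoders[fmt] = fn
        return fn
    return wrap

@register("text/plain")
def decode_text(payload: bytes) -> str:
    return payload.decode("utf-8")

@register("numbers/csv")
def decode_numbers(payload: bytes) -> list:
    return [float(x) for x in payload.decode("utf-8").split(",")]

def receive(message):
    """The same stream of bits can be a video, some text, or a program:
    the tag, not the circuitry, decides how to interpret it."""
    return decoders[message["format"]](message["payload"])

print(receive({"format": "text/plain", "payload": b"hello"}))
print(receive({"format": "numbers/csv", "payload": b"1,2,3"}))
```
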
The protocols of the Internet now serve to eliminate rather than
enforce divisions among types of media. Computers can be connected
without regard to who made them or where they are, and information
can be connected without needing to artificially separate
sights and sounds, content and context. The one minor remaining
incompatibility is with people.
Billions of dollars are spent developing powerful processors that
are put into dumb boxes that have changed little from the earliest
days of computing. Yet for so many people and so many applications,
the limitation is the box, not the processor. Most computers
are nearly blind, deaf, and dumb. These inert machines channel the
richness of human communication through a keyboard and a
mouse. The speed of the computer is increasingly much less of a
concern than the difficulty in telling it what you want it to do, or
in understanding what it has done, or in using it where you want
to go rather than where it can go.
More transistors are being put on a chip, more lines of code are
being added to programs, more computers are appearing on the
desks in an office, but computers are not getting easier to use and
workers are not becoming more productive. An army of engineers
is continuing to solve these legacy problems from the early days of
computing, when what's needed is better integration between computers
and the rest of the world.
The essential division in the industry between hardware and software
represents the organization of computing from the system designer's
point of view, not the user's. In successful mature
technologies it's not possible to isolate the form and the function.
The logical design and the mechanical design of a pen or a piano
bind their mechanism with their user interface so closely that it's
possible to use them without thinking of them as technology, or
even thinking of them at all.
Invisibility is the missing goal in computing. Information technology
is at an awkward developmental stage where it is adept at
communicating its needs and those of other people, but not yet able
to anticipate yours. From here we can either unplug everything and
go back to an agrarian society--an intriguing but unrealistic option--or
we can bring so much technology so close to people that
it can finally disappear. Beyond seeking to make computers ubiquitous,
we should try to make them unobtrusive.
A VCR insistently flashing 12:00 is annoying, but notice that it
doesn't know that it is a VCR, that the job of a VCR is to tell the
time rather than ask it, that there are atomic clocks available on the
Internet that can give it the exact time, or even that you might have
left the room and have no interest in what it thinks the time is. As
we increasingly cohabit with machines, we are doomed to be frustrated
by our creations if they lack the rudimentary abilities we take
for granted--having an identity, knowing something about our environment,
and being able to communicate. In return, these machines
need to be designed with the presumption that it is their job
to do what we want, not the converse. Like my computer, my children
started life by insisting that I provide their inputs and collect
their outputs, but unlike my computer they are now learning how
to do these things for themselves.
Fixing the division of labor between people and machines depends
on understanding what each does best. One day I came into
my lab and found all my students clustered around a PC. From the
looks on their faces I could tell that this was one of those times
when the world had changed. This was when it had just become
possible to add a camera and microphone to a computer and see
similarly equipped people elsewhere. There was a reverential awe as
live faces would pop up on the screen. An early Internet cartoon
showed two dogs typing at a computer, one commenting to the
other that the great thing about the Internet was that no one could
tell that they were dogs. Now we could.
And the dogs were right. The next day the system was turned off,
and it hasn't been used since. The ghostly faces on the screen
couldn't compete with the resolution, refresh rate, three-dimensionality,
color fidelity, and relevance of the people outside of the
screen. My students found that the people around them were generally
much more interesting than the ones on the screen.
It's no accident that one of the first Web celebrities was a thing,
not a person. A camera trained on the coffeepot in the Computer
Science department at Cambridge University was connected to a
Web server that could show you the state of the coffeepot, timely
information of great value (if you happen to be in the department
and want a cup of coffee).
Even better would be for the coffee machine to check when you
want coffee, rather than the other way around. If your coffee cup
could measure the temperature and volume of your coffee, and relay
that information to a computer that kept track of how much
coffee you drank at what times, it could do a pretty good job of
guessing when you would be coming by for a refill. This does not
require sticking a PC into the cup; it is possible to embed simple
materials in it that can be sensed from a distance. The data handling
is also simple; a lifetime record of coffee drinking is dwarfed
by the amount of data in a few moments of video. And it does not
require a breakthrough in artificial intelligence to predict coffee consumption.
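
A sketch of the guesswork, with made-up numbers standing in for a real cup's log, shows how little intelligence is needed:

```python
from collections import Counter

# Hypothetical record of past refills, as hours of the day; a lifetime
# of entries like these is still dwarfed by a few seconds of video.
refill_log = [8, 9, 8, 14, 9, 8, 15, 14, 8, 14, 14, 8]

def likely_refill_hours(log, top=2):
    """Predict refill times by simply counting when they happened before."""
    return [hour for hour, _ in Counter(log).most_common(top)]

print(likely_refill_hours(refill_log))  # [8, 14]: morning, and after lunch
```
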
A grad student in the Media Lab, Joseph Kaye, instrumented our
coffee machine and found the unsurprising result that there's a big
peak in consumption in the morning after people get in, and another
one in the afternoon after lunch. He went further to add electronic
tags so that the coffee machine could recognize individual
coffee cups and thereby give you the kind of coffee you prefer,
along with relevant news retrieved from a server while you wait.
No one of these steps is revolutionary, but taken together their implications
are. You get what you want--a fresh cup of coffee--without
having to attend to the details. The machines are
communicating with each other so that you don't have to.
For all of the coverage of the growth of the Internet and the
World Wide Web, a far bigger change is coming as the number of
things using the Net dwarfs the number of people. The real promise
of connecting computers is to free people, by embedding the means
to solve problems in the things around us.
There's some precedent for this kind of organization. The recurring
lesson we've learned from the study of biology is that hard
problems are solved by the interaction of many simple systems.
Biology is very much a bottom-up design that gets updated without
a lot of central planning. This does not mean that progress is steady.
Far from it; just ask a dinosaur (or a mainframe). Here we come to
the real problem with the Digital Revolution. Revolutions break
things. Sometimes that's a necessary thing to do, but continuously
discarding the old and heralding the new is not a sustainable way to live.
Very few mature technologies become obsolete. Contrary to predictions,
radio did not kill off newspapers, because it's not possible
to skim through a radio program and flip back and forth to parts
of interest. Television similarly has not eliminated radio, because
there are plenty of times when our ears are more available than our
eyes. CD-ROMs have made little dent in book publishing because
of the many virtues of a printed book. Far more interesting than declaring
a revolution is to ask how to capture the essence of what
works well in the present in order to improve the future.
The real challenge is to figure out how to create systems with
many components that can work together and change. We've had a
digital revolution; we now need digital evolution. That's what this
book is about: what happens when the digital world merges with
the physical world, why we've come to the current state-of-the-art
(and state-of-the-artless), and how to free bits from the confines of
computers. The machines have had their turn; now it's ours.
... are things that think?
books that can change into other books
musical instruments that help beginners engage and virtuosi do more
shoes that communicate through
printers that output working things instead of static objects
money that contains behavior as well as value