CHAPTER 1
The Discrete Revolution
1.1 My Golden Age of Garbage
What is usually called the "computer revolution" is really about much more — it's about a radical conversion of our view of the world from continuous to discrete. As for your author, my entrance into this world couldn't have been timed better to observe the apparently sudden transformation. I arrived in 1939, a few months before Hitler invaded Poland. At that time the stage had been set, rather subtly and gradually, for the development of things digital, and the pressure of the ensuing war years propelled us all, not so subtly and not so gradually, into what we now know as the Digital Age. This book is about the most basic ideas and principles behind the change. Why did the world change in such a fundamental way from analog to digital, and where might we humans — a species itself built along both analog and digital lines — be headed?
I apologize for the rather dark beginning, but it's a fact that the dirty fingers of war have never failed to leave their prints on the annals of what we term "progress." The dawn of the computer age is closely linked to decryption efforts in World War II, as well as to the development of the atomic bomb.
On August 6, 1945, I was only dimly aware of the fact that I was in New Jersey and not Japan, where bombardier Thomas Ferebee was watching Hiroshima's Aioi Bridge in the crosshairs of his Norden bombsight. The bombsight, which subsequently released the first uranium-fission atomic bomb and began the end of World War II, was an analog computer. It solved the equations of motion that determined the path of the bomb, using things like cams and gears, a gyroscope, and a telescope, all mechanical devices. But it was a computer nevertheless, although applying the term to a mess of moving steel parts might surprise some people today. Well into the 1950s there were two kinds of computers: analog and digital. In fact, analog computers of the electronic sort were the only way to solve certain kinds of complicated problems, and were, in a handful of situations, very useful. Electronic analog computers were programmed by plugging wires into a patch panel, which was like a telephone switchboard (you may have seen one in an old movie), and by the time any interesting problem was running, the patch panel was a rat's nest.
But before the mid-twentieth century everything was analog; digital just hadn't been invented. The most important piece of information technology I knew as a child was the radio, very analog at the time, and it was my remarkable piece of good fortune when the postwar engines of production turned to consumer goods, and consumers bought new, streamlined, plastic radios. Garbage night meant that the monstrous mahogany console radios of the 1930s could often be found curbside — with booming bass, hardly any treble because of the limitations of AM broadcasting, and all manner of interesting electronic parts inside. That was how I learned to love the glow of vacuum tubes and the aroma of hot rosin-core solder congealing around the twisted leads of condensers (as capacitors were called), resistors, coils, and other more exotic components. Sometimes it was an autopsy that I performed on these found radios, but often it was a vivisection, since many of them worked, or could be made to work, excellently. Some of these lucky finds even had shortwave bands, and garbage night turned out to be my gateway to the world at large.
It was all analog. When television came, that, too, was all analog. So were telephones. There just wasn't anything else.
1.2 Nostalgia and the Aesthetics of Technology
Video and audio signals fly in and out of our brains all day long, and devices that process those signals — radio, television, recorded film and music players, telephones — were all digitized in the latter half of the twentieth century; that is, within my lifetime. One consequence is that the devices we use every day for what is now called digital signal processing have more or less converged to the same, rather dull-looking machine — essentially a small chip behind a screen, in a plastic case, occasionally with a couple of wires hanging out. In contrast, in the good old days radios were radios, television sets were television sets, cameras cameras, telephones telephones. You could tell what a device did by looking at it. And sometimes you would need an elephant to make it portable: the Stromberg Carlson console radio I lugged home with the help of my friends was crafted with a sturdy wooden cabinet, housing a loudspeaker with a huge electromagnet, a large lit dial, and hefty knobs that gave the operator the feeling of controlling an important piece of equipment — to a child, and perhaps to a grown-up as well, a spaceship.
My favorite effect was the magic eye tuning indicator, usually a 6E5 vacuum tube that had a fluorescent screen at its end, visible in a circular hole on the front panel of the radio. It glowed green with a dark crescent that contracted in proportion to the signal strength. Carefully tuning a station to reduce the crescent to a narrow slit was a joyful experience, especially in a dark room where the eerie glow did seem magical for sure. Punching in the frequency (or URL) of a radio station just does not provide the same tactile and visual pleasure. If your childhood came after such electronic apparatus, you don't know what I'm talking about; such is the nature of nostalgia. No doubt the iPhone will stimulate similar feelings fifty years from now, when signals may very well go directly to our brains without the need for any beautiful little intermediary machines.
Of course there is a lively market for retro style and retro devices; certain cults have grown around the disappearance of, for example, shellac, vinyl, and analog tape recordings, or film cameras and the once pervasive technology of chemical-based photography. It's common to hear that vacuum-tube amplifiers have a "warmer" sound, although it's not certain how much of the warmth is due to distortion from the inherent nonlinearity of the vacuum-tube analog technology and how much to the psychological glow from the hot tubes themselves.
Sometimes the nostalgic longing approaches the mystical. Water Lily Acoustics produces superb recordings of Indian classical music, and it takes great pains to keep the sound recording free of the digital taint until the very last step in the process. For example, the booklet for a compact disc recording of Ustad Imrat Khan offers the following assurance:
This is a pure analog recording done exclusively with custom-built vacuum-tube electronics. The microphone set-up was the classic Blumlein arrangement. No noise reduction, equalization, compression, or limiting of any sort was used in the making of this recording.
The booklet goes on to describe the microphones (which use tubes), recorder (Ampex MR70, half-inch, two-track, 15-inch-per-second tape, using vacuum tubes called nuvistors), and so on.
Spiritual values aside, a good analog sound recording, or, for that matter, a good analog photograph taken with film and printed well, can be, technically, a lot better than a bad digital recording or a bad digital photograph. We have much more to say about the ultimate and practical limitations of analog and digital technology as we go along.
1.3 Some Terminology
So far, we've been using the terms digital and analog rather loosely. Before going further, we need to clarify this terminology. For our purposes, digital means that a signal of interest is being represented by a sequence or array of numbers; analog means that a signal is represented by the value of some continuously variable quantity. This variable can be the voltage or current in an electrical circuit, say, or the brightness of a scene at some point, or temperature, pressure, velocity, and so on, as long as its value is continuously variable. All the possible values of a digital signal can be counted, and there is a definite gap between them; those of an analog variable cannot be counted, and there is no definite gap between them. Generally, we use discrete (actually "discrete-valued") to mean digital and continuous (actually "continuous-valued") to mean analog, although this overlooks some distinctions that are not important at this point.
When you buy a wristwatch or a clock, for example, you have a choice between an "analog display" and a "digital display." This is exactly the sense in which we use the terms — but take note of the fact that we refer to the display and not the internal mechanism of the timekeeper. A clock with an analog display has hands that can move continuously, whereas a digital display shows numbers that change discontinuously, which is another way to say suddenly. The hands of a clock actually represent time by the rotational position of gears. These days, the usual clock with an analog display has an internal timekeeping mechanism that is digital (except for old-fashioned windup clocks). But at one point there were the opposite kinds of clocks, with analog mechanisms and digital displays — usually using gears and cams to flip displays with numbers printed on them.
On the morning of "Pi Day" (March 14) of 2015, there was a moment a bit after 9:26 and 53 seconds when the time could be written 3.14159265358979 ...; that is, π. The moment was fleeting to say the least; it was infinitesimally brief. And it will never occur again. Ever. If you were watching the hands of a clock with an analog display, you might have tried to take a photo at the exact moment of π, but the photo would have taken some finite time, and you would have necessarily blurred the second hand. That is an inevitable consequence of measuring an analog quantity of any kind.
Very commonly, audio and video signals are represented by voltages, either in a computer, smartphone, copper cable, or some kind of electrical circuit like those in an amplifier. This is the usual way that such signals are recorded by microphones and video cameras, and the resulting signals are transmitted and reproduced using voltages in electrical circuits. A microphone converts a sound pressure wave in the air to a time-varying voltage. A video camera converts a light image into an array of time-varying voltages. These audio and video signals usually start their lives out as analog signals and are converted to digital form after their initial capture, assuming that they are going to be processed in some way in digital form.
The device that converts an analog signal to digital form is called, naturally, an analog-to-digital converter (A-to-D converter), and the opposite operation is performed by a digital-to-analog converter (D-to-A converter). Thus, for example, the light-sensitive screen in a digital camera is really an A-to-D converter, whereas your computer monitor is really a D-to-A converter.
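To make the conversion concrete, here is a minimal sketch in Python, not drawn from the book, of what an A-to-D converter does in principle and what a D-to-A converter gives back. The 440 Hz tone, the 8,000-samples-per-second rate, and the 16-bit precision are illustrative assumptions of mine, not anything the text prescribes.

    import math

    def analog_signal(t):
        # Stand-in for an analog quantity: an idealized 440 Hz tone,
        # defined for every real-valued instant t (in seconds).
        return math.sin(2 * math.pi * 440 * t)

    def a_to_d(signal, rate=8000, bits=16, duration=0.001):
        # Sample the signal 'rate' times per second and round each sample
        # to one of 2**bits evenly spaced levels between -1 and 1.
        levels = 2 ** (bits - 1)
        count = int(rate * duration)
        return [round(signal(n / rate) * (levels - 1)) for n in range(count)]

    def d_to_a(samples, bits=16):
        # Map the stored integers back to values on a fixed grid; the gaps
        # introduced by rounding (the quantization) cannot be undone.
        levels = 2 ** (bits - 1)
        return [s / (levels - 1) for s in samples]

    digital = a_to_d(analog_signal)    # a finite, countable list of integers
    restored = d_to_a(digital)         # values only on the quantization grid
    print(digital, restored)

The list called digital is what this chapter means by a digital signal: a sequence of numbers with a definite gap between possible values, in contrast to the continuously variable quantity that analog_signal stands for.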
I'll try to be clear about what I mean when I use the terms digital, analog, discrete, and continuous, but I should mention some possible sources of confusion. First, it often happens that it is time itself that is thought of as discrete or continuous, rather than the values of a signal. When there is any possible confusion, I will state explicitly that time is being considered. Second, there is the awkward fact that standard mathematical terminology uses the term continuous in a slightly different way. Mathematically speaking, a curve is "continuous" if it does not jump suddenly from one value to another but rather changes "smoothly." The reader who has studied calculus will be aware of this alternate interpretation, but will not be confused by it.
Finally, the term discrete is used by physicists in another sense. A most important example of this usage comes up when we ask the question, "What is light?" The question has puzzled scientists for centuries. Sometimes light behaves like waves; this is evident when we observe diffraction rings, for example. If we aim a narrow light beam (say, from a laser) through a pinhole, and project the result on a screen, we get concentric rings that die out in intensity as we travel from the center. It turns out that this result is easy to explain if we treat light as a wave but very difficult to explain if we treat light as particles. On the other hand, if we aim a light at a detector and gradually decrease its intensity, eventually the light does not become dimmer and dimmer without limit. At some point the light begins to arrive in chunks: Click! ... Click! You can hear such clicks if you receive the light with a sensitive detection device connected to an amplifier and speaker. This experiment and many others provide evidence that light consists of particles; a wave would fade out, diminishing in intensity indefinitely. The particle of light, called a photon, is indivisible. There is no such thing as half a photon or half a click. A click occurs or it doesn't. All the clicks are the same. In such cases we say that light is discrete; it occurs as discrete particles.
All chunks of matter — atoms, molecules, electrons, protons, and so on — also behave in this same seemingly paradoxical way. The puzzle, sometimes known as wave-particle duality, was ultimately explained after a great deal of hard work by some very smart people about a hundred years ago. The explanation is called quantum mechanics, which not only revolutionized physics but changed the way we think about the world.
Quantum mechanics, and physics in general, plays an important part in our story, and we return to it often. It is the science of the very small. As put by Jean-Louis Basdevant, "Bill Gates, the richest man in the world, made his fortune because he was able to use [micro- and nanotechnologies]; quantum mechanics accounts for at least 30% of each of his dollars."
More about quantum mechanics later. We next turn to the fundamental role of physical noise in limiting the performance of analog devices, and the way in which digital devices circumvent the problem.
CHAPTER 2
What's Wrong with Analog?
2.1 Signals and Noise
We use the words signal and noise in everyday speech. Opinionated music lovers mean one thing when they speak of noise, insomniacs something different, stock traders yet something else.
Scientists and engineers use the words in the following specialized way: A signal is the part of what we perceive that carries information to us. Noise is what we perceive that carries no information, but rather tends to obscure the signal. Over the past few decades this slightly more technical usage has diffused into general usage, most notably in economics reportage. For example, reporters on the Federal Reserve Bank's policy pronouncements speak of its signal-to-noise ratio.
Noise is unavoidable in our world, and almost always undesirable, so it is very interesting to think about the differences in how it affects analog and digital signals. Consider, for example, the (analog) voltage in an audio amplifier that represents the (analog) sound pressure wave at the microphone capturing a concert. At some particular time the value of that analog signal might be, say, 1.05674 ... volts. We can write down that value only to a certain number of decimal places, but in theory the digits can go on indefinitely. Mathematically speaking, numbers like this, representing analog quantities, are called real numbers. Noise in the amplifier changes this value as the signal propagates through the circuit. At some point it may be changed to 1.05656 ... volts, say, due to the addition of a noise voltage of -0.00018 ... volts. I am being careful to indicate that these numbers are real and cannot usually be written neatly with a finite number of digits. The important point is that an analog signal gets blurrier as it gets corrupted by noise, and in general this is an irreversible process.
Contrast this with the situation of a particular piece (bit) of a digital signal, which can, at any particular point in a computer, take on only the values 0 or 1. If the noise at some point is really gigantic, a 0 might get changed to a 1, or a 1 to a 0. But otherwise its value will be, insofar as we are allowed to assign values to the bit of the digital signal, exactly 0 or 1. We have more to say about this crucial difference in the next chapter. It is enough to note here that there is a certain threshold below which noise will have no effect at all. It is rare to have an opportunity to use the word "perfect" with complete accuracy, but if we ensure that the noise in a digital machine is below that threshold, the operation of our device will be literally perfect.
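The difference can be seen in a small sketch of my own; the 0.5 decision threshold and the tiny noise spread are assumed numbers, not anything from the text. An analog value picks up a little noise at every stage and never recovers, while a bit is re-decided at every stage and stays exactly 0 or 1 as long as the noise remains below the threshold.

    import random

    random.seed(1)

    def noisy(value, spread=0.0002):
        # Model one stage of copying or amplification: add a small random voltage.
        return value + random.uniform(-spread, spread)

    # Analog: the noise accumulates, and the original value is lost for good.
    analog = 1.05674
    for _ in range(1000):
        analog = noisy(analog)
    print(analog)    # has drifted away from 1.05674, irreversibly

    # Digital: the bit is re-decided at every stage; noise below the 0.5
    # threshold has no effect at all, so every copy is literally perfect.
    bit = 1.0        # the voltage standing for the bit "1"
    for _ in range(1000):
        bit = 1.0 if noisy(bit) > 0.5 else 0.0
    print(bit)       # still exactly 1.0 after a thousand stages

The particular numbers do not matter; the point is only that the decision threshold gives the digital signal a margin within which noise is simply erased.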
2.2 Reproduction and Storage
We have now come to the first important problem with analog signals and the world of analog gadgets. Every time an analog signal is stored, retrieved, transmitted, amplified, or processed in any way, it is unavoidably corrupted by noise. We can be very careful about reducing the amount of noise, but it cannot be reduced to zero. Furthermore, its effects are irreversible and accumulate as we continue to process a signal.
(Continues…)
Excerpted from "The Discrete Charm of the Machine"
by Ken Steiglitz.
Copyright © 2019 Princeton University Press.
Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.