More About This Textbook
Overview
Most people are baffled by how computers work and assume that they will never understand them. What they don't realize—and what Daniel Hillis's short book brilliantly demonstrates—is that computers' seemingly complex operations can be broken down into a few simple parts that perform the same simple procedures over and over again. Computer wizard Hillis offers an easy-to-follow explanation of how data is processed that makes the operations of a computer seem as straightforward as those of a bicycle.

Avoiding technobabble or discussions of advanced hardware, the lucid explanations and colorful anecdotes in The Pattern on the Stone go straight to the heart of what computers really do. Hillis proceeds from an outline of basic logic to clear descriptions of programming languages, algorithms, and memory. He then takes readers in simple steps up to the most exciting developments in computing today—quantum computing, parallel computing, neural networks, and self-organizing systems.

Written clearly and succinctly by one of the world's leading computer scientists, The Pattern on the Stone is an indispensable guide to understanding the workings of that most ubiquitous and important of machines: the computer.
Editorial Reviews
Booknews
The author demonstrates that the computer's seemingly complex operations can be broken down into a few simple parts that perform the same simple procedures over and over again. He provides an outline of basic logic and proceeds to clear descriptions of programming languages, algorithms, and memory. He then takes readers in simple steps up to the most exciting developments in computing today—quantum and parallel computing, neural networks, and self-organizing systems. Annotation c. by Book News, Inc., Portland, Or.
From The Critics
If you've ever tried to figure out how computers work, you've undoubtedly been told how simple they are. When Daniel Hillis tells you computers are simple, he's quite persuasive.

Hillis and some friends once built a working computer out of tens of thousands of wooden Tinkertoys. Their device is now on display at the Computer Museum in Boston.
The Pattern on the Stone illustrates basic computing concepts with line drawings of Tinkertoys in various positions—a surprisingly helpful approach.
Hillis once drove a fire engine to work and he currently works for Disney, but there's nothing childish about his prose. His page-long explanation of the superiority of digital over analog measurements makes analog wristwatches seem like evidence of denial. The gentle way he dislodges such misplaced nostalgia makes him a perfect counselor for your inner Unabomber.
The book's gradual pace, low-tech design, and gentle title are meant to bring hope to those who feel swamped by a tidal wave of computer-wrought change. And the approach succeeds, by showing the reader how humans, not magicians, discovered a few basic principles and built these amazing machines.
Bill Brazell
Kirkus Reviews
Here's a straightforward answer to the question every parent has been asked, and few can answer: How do computers really work? Hillis, the head of Disney's Imagineering Works, begins by describing a stone etched in a complex pattern, which can be asked questions in a strange language and give profound and useful answers. It sounds like witchcraft, but it is a literally accurate description of a computer chip. As he goes on to show, the internal workings of a computer can be broken down into simple components. His first chapter introduces the reader to the rudiments of Boolean logic and simple electrical circuits. These ideas can be used to build simple computers, such as the author's own early design of a machine to play tic-tac-toe, or another made from Tinker Toys. The next step in complexity is the development of specific logical functions—And, Or, Invert—that form the basis of almost all computing functions. These concepts are illustrated by the game Rock, Paper, Scissors, converted to digital form. Programming is illustrated with the famous "turtle" programs from the Logo computing language, designed to teach children. In similar manner, Hillis introduces the reader step by step to Turing machines, algorithms, encryption, and other advanced concepts. All this is done without discussions of state-of-the-art hardware or engineering problems; in fact, the author encourages the reader to think in terms of "black box" modules that can be combined to perform a desired task. One need not know what's in the box as long as one knows its ultimate function. The final chapters look at issues on the frontiers of computing: machines that learn and adapt, possibly even (in time) machines that can be said to "think." All this is done elegantly and entertainingly, without a whiff of condescension toward the nontechnical reader.
Meet the Author
Daniel Hillis holds some forty patents, sits on the scientific advisory board of the Santa Fe Institute, and is a fellow of the Association of Computing. His many awards include the Hopper Award, the Spirit of American Creativity Award, and the Ramanujan Award. Hillis was named the first Disney Fellow and became vice president of research and development at the Walt Disney Company in 1996.
Read an Excerpt
Chapter 4: How Universal Are Turing Machines?
...Quantum Computing

As noted earlier, the pseudorandom number sequences produced by computers look random, but there is an underlying algorithm that generates them. If you know how a sequence is generated, it is necessarily predictable and not random. If ever we needed an inherently unpredictable random-number sequence, we would have to augment our universal machine with a nondeterministic device for generating randomness.
One might imagine such a randomness-generating device as being a kind of electronic roulette wheel, but, as we have seen, such a device is not truly random because of the laws of physics. The only way we know how to achieve genuinely unpredictable effects is to rely on quantum mechanics. Unlike the classical physics of the roulette wheel, in which effects are determined by causes, quantum mechanics produces effects that are purely probabilistic. There is no way of predicting, for example, when a given uranium atom will decay into lead. Therefore one could use a Geiger counter to generate truly random data sequences—something impossible in principle for a universal computer to do.
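The predictability of pseudorandom sequences is easy to demonstrate. A minimal sketch using Python's standard library generator (my illustration, not code from the book): two generators started from the same seed reproduce each other exactly, because the "randomness" is entirely determined by the algorithm and its starting state.

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# knowing the algorithm and the seed makes every value predictable.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]

print(seq_a == seq_b)  # True: the sequence only looks random
```

A Geiger counter hooked to the same program would give a sequence no rerun could reproduce, which is precisely the capability a deterministic universal machine lacks.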
The laws of quantum mechanics raise a number of questions about universal computers that no one has yet answered. At first glance, it would seem that quantum mechanics fits nicely with digital computers, since the word "quantum" conveys essentially the same notion as the word "digital." Like digital phenomena, quantum phenomena exist only in discrete states. From the quantum point of view, the (apparently) continuous, analog nature of the physical world—the flow of electricity, for example—is an illusion caused by our seeing things on a large scale rather than an atomic scale. The good news of quantum mechanics is that at the atomic scale everything is discrete, everything is digital. An electric charge contains a certain number of electrons, and there is no such thing as half an electron. The bad news is that the rules governing how objects interact at this scale are counterintuitive.
For instance, our commonsense notions tell us that one thing cannot be in two places at the same time. In the quantum mechanical world this is not exactly true, because in quantum mechanics nothing can be exactly in any place at all. A single subatomic particle exists everywhere at once, and we are just more likely to observe such a particle at one place than at another. For most purposes, we can think of a particle as being where we observe it to be, but to explain all observed effects we have to acknowledge that the particle is in more than one place. Almost everyone, including many physicists, finds this concept difficult to comprehend.
Might we take advantage of quantum effects to build a more powerful type of computer? As of now, this question remains unanswered, but there are suggestions that such a thing is possible. Atoms seem able to compute certain problems easily, such as how they stick together—problems that are very difficult to compute on a conventional computer. For instance, when two hydrogen atoms bind to an oxygen atom to form a water molecule, these atoms somehow "compute" that the angle between the two bonds should be 107 degrees. It is possible to approximately calculate this angle from quantum mechanical principles using a digital computer, but it takes a long time, and the more accurate the calculation the longer it takes. Yet every molecule in a glass of water is able to perform this calculation almost instantly. How can a single molecule be so much faster than a digital computer?
The reason it takes the computer so long to calculate this quantum mechanical problem is that the computer would have to take into account an infinite number of possible configurations of the water molecule to produce an exact answer. The calculation must allow for the fact that the atoms comprising the molecule can be in all configurations at once. This is why the computer can only approximate the answer in a finite amount of time. One way of explaining how the water molecule can make the same calculation is to imagine it trying out every possible configuration simultaneously—in other words, using parallel processing. Could we harness this simultaneous computing capability of quantum mechanical objects to produce a more powerful computer? Nobody knows for sure.
Recently there have been some intriguing hints that we may be able to build a quantum computer that takes advantage of a phenomenon known as entanglement. In a quantum mechanical system, when two particles interact, their fates can become linked in a way utterly unlike anything we see in the classical physical world: when we measure some characteristic of one of them, it affects what we measure in the other, even if the particles are physically separated. Einstein called this effect, which involves no time delay, "spooky action at a distance," and he was famously unhappy with the notion that the world could work that way.
A quantum computer would take advantage of entanglement: a one-bit quantum mechanical memory register would store not just a 1 or a 0; it would store a superposition of many 1's and many 0's. This is analogous to an atom being in many places at once: a bit that is in many states (1 or 0) at once. This is different from being in an intermediate state between a 1 and a 0, because each of the superposed 1's and 0's can be entangled with other bits within the quantum computer. When two such quantum bits are combined in a quantum logic block, each of their superposed states can interact in different ways, producing an even richer set of entanglements. The amount of computation that can be accomplished by a single quantum logic block is very large, perhaps even infinite.
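The superposition described above can be sketched with ordinary arithmetic. In this minimal Python illustration (the `hadamard` helper and the register layout are my assumptions, not the book's), a one-bit quantum register is a pair of amplitudes for 0 and 1; a Hadamard operation puts a bit that starts as a definite 0 into an equal superposition of both values, and an n-bit register needs 2**n amplitudes, which is why classical simulation is so costly.

```python
import math

# A one-qubit state is a pair of amplitudes (a0, a1) for the values 0 and 1;
# measurement probabilities are a0**2 and a1**2 (real amplitudes suffice here).
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

# Start as a definite 0, apply Hadamard: an equal superposition of 0 and 1.
state = hadamard((1.0, 0.0))
p0, p1 = state[0] ** 2, state[1] ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5

# An n-bit quantum register needs 2**n amplitudes, so the state a quantum
# logic block acts on grows exponentially with the number of bits.
for n in (1, 2, 10, 20):
    print(n, "bits ->", 2 ** n, "amplitudes")
```

The doubling in the final loop is the same exponential blowup that makes the water-molecule calculation slow on a conventional machine.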
The theory behind quantum computing is well established, but there are still problems in putting it to use. For one thing, how can we use all this computation to compute anything useful? The physicist Peter Shor recently discovered a way to use these quantum effects—at least in principle—to do certain important and difficult calculations like factoring large numbers, and his work has renewed interest in quantum computers. But many difficulties remain. One problem is that the bits in a quantum computer must remain entangled in order for the computation to work, but the smallest of disturbances—a passing cosmic ray, say, or possibly even the inherent noisiness of the vacuum itself—can destroy the entanglement. (Yes, in quantum mechanics even a vacuum does strange things.) This loss of entanglement, called decoherence, could turn out to be the Achilles heel of quantum mechanical computers. Moreover, Shor's methods seem to work only on a specific class of computations which can take advantage of a fast operation called a generalized Fourier transform. The problems that fit into this category may well turn out to be easy to compute on a classical Turing machine; if so, Shor's quantum ideas would be equivalent to some program on a conventional computer.
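Only one step of Shor's method, finding the period of a^x modulo N, needs the quantum Fourier transform; the rest is classical number theory. In the sketch below (my own function names; a brute-force loop stands in for the quantum period-finding step), the period of 7 modulo 15 yields the factors of 15.

```python
from math import gcd

def order(a, n):
    # Brute-force the period r with a**r % n == 1; this is the step a
    # quantum computer would do with the generalized Fourier transform.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(n, a):
    # Classical post-processing in Shor's method: given the period r of
    # a modulo n, derive nontrivial factors of n from gcd computations.
    r = order(a, n)
    if r % 2:
        return None  # odd period: retry with a different a
    y = pow(a, r // 2, n)
    f = gcd(y - 1, n)
    if 1 < f < n:
        return f, n // f
    return None

print(factor_from_period(15, 7))  # (3, 5)
```

For 15 the brute-force loop is instant, but its cost grows exponentially with the size of n; the promise of Shor's algorithm is that quantum hardware could do this one step efficiently.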
If it does become possible for quantum computers to search an infinite number of possibilities at once, then they would be qualitatively, fundamentally more powerful than conventional computing machines. Most scientists would be surprised if quantum mechanics succeeds in providing a kind of computer more powerful than a Turing machine, but science makes progress through a series of surprises. If you're hoping to be surprised by a new sort of computer, quantum mechanics is a good area to keep an eye on.
This leads us back to the philosophical issues touched on at the beginning of the chapter—that is, the relationship between the computer and the human brain. It is certainly conceivable, as at least one well-known physicist has speculated (to hoots from most of his colleagues), that the human brain takes advantage of quantum mechanical effects. Yet there is no evidence whatsoever that this is the case. Certainly, the physics of a neuron depends on quantum mechanics, just as the physics of a transistor does, but there is no evidence that neural processing takes place at the quantum mechanical level as opposed to the classical level; that is, there is no evidence that quantum mechanics is necessary to explain human thought. As far as we know, all the relevant computational properties of a neuron can be simulated on a conventional computer. If this is indeed the case, then it is also possible to simulate a network of tens of billions of such neurons, which means, in turn, that the brain can be simulated on a universal machine. Even if it turns out that the brain takes advantage of quantum computation, we will probably learn how to build devices that take advantage of the same effects—in which case it will still be possible to simulate the human brain with a machine.
The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes nothing away from the wonder or value of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components, but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing's machine...