The Pattern on the Stone: The Simple Ideas That Make Computers Work

Paperback (Print)

Buy New from BN.com: $13.24

Buy Used from BN.com: $10.28 (Save 35%)
Condition: Used – Good. Item is in good condition, but packaging may have signs of shelf wear/aging or torn packaging.

Used and New from Other Sellers: from $1.99 (Save 87%)
Usually ships in 1-2 business days.

Other sellers (Paperback)
  • All (18) from $1.99
  • New (5) from $8.95
  • Used (13) from $1.99

Overview


Most people are baffled by how computers work and assume that they will never understand them. What they don’t realize—and what Daniel Hillis’s short book brilliantly demonstrates—is that computers’ seemingly complex operations can be broken down into a few simple parts that perform the same simple procedures over and over again. Computer wizard Hillis offers an easy-to-follow explanation of how data is processed that makes the operations of a computer seem as straightforward as those of a bicycle.

Avoiding technobabble or discussions of advanced hardware, the lucid explanations and colorful anecdotes in The Pattern on the Stone go straight to the heart of what computers really do. Hillis proceeds from an outline of basic logic to clear descriptions of programming languages, algorithms, and memory. He then takes readers in simple steps up to the most exciting developments in computing today—quantum computing, parallel computing, neural networks, and self-organizing systems.

Written clearly and succinctly by one of the world’s leading computer scientists, The Pattern on the Stone is an indispensable guide to understanding the workings of that most ubiquitous and important of machines: the computer.

Editorial Reviews

Booknews
The author demonstrates that the computer's seemingly complex operations can be broken down into a few simple parts that perform the same simple procedures over and over again. He provides an outline of basic logic and proceeds to clear descriptions of programming languages, algorithms and memory. He then takes readers in simple steps up to the most exciting developments in computing today: quantum and parallel computing, neural networks, and self-organizing systems. Annotation c. by Book News, Inc., Portland, Or.
From The Critics
If you've ever tried to figure out how computers work, you've undoubtedly been told how simple they are. When Daniel Hillis tells you computers are simple, he's quite persuasive.

Hillis and some friends once built a working computer out of tens of thousands of wooden Tinkertoys. Their device is now on display at the Computer Museum in Boston.

The Pattern on the Stone illustrates basic computing concepts with line drawings of Tinkertoys in various positions – a surprisingly helpful approach.

Hillis once drove a fire engine to work and he currently works for Disney, but there's nothing childish about his prose. His page-long explanation of the superiority of digital over analog measurements makes analog wristwatches seem like evidence of denial. The gentle way he dislodges such misplaced nostalgia makes him a perfect counselor for your inner Unabomber.

The book's gradual pace, low-tech design, and gentle title are meant to bring hope to those who feel swamped by a tidal wave of computer-wrought change. And the approach succeeds, by showing the reader how humans, not magicians, discovered a few basic principles and built these amazing machines.

– Bill Brazell

Kirkus Reviews
Here's a straightforward answer to the question every parent has been asked, and few can answer: How do computers really work? Hillis, the head of Disney's Imagineering Works, begins by describing a stone etched in a complex pattern, which can be asked questions in a strange language and will give profound and useful answers. It sounds like witchcraft, but it is a literally accurate description of a computer chip. As he goes on to show, the internal workings of a computer can be broken down into simple components. His first chapter introduces the reader to the rudiments of Boolean logic and simple electrical circuits. These ideas can be used to build simple computers, such as the author's own early design of a machine to play tic-tac-toe, or another made from Tinkertoys. The next step in complexity is the development of specific logical functions (And, Or, Invert) that form the basis of almost all computing functions. These concepts are illustrated by the game Rock, Paper, Scissors, converted to digital form. Programming is illustrated with the famous "turtle" programs from the Logo computing language, designed to teach children. In similar manner, Hillis introduces the reader step by step to Turing machines, algorithms, encryption, and other advanced concepts. All this is done without discussions of state-of-the-art hardware or engineering problems; in fact, the author encourages the reader to think in terms of "black box" modules that can be combined to perform a desired task. One need not know what's in the box as long as one knows its ultimate function. The final chapters look at issues on the frontiers of computing: machines that learn and adapt, possibly even (in time) machines that can be said to "think." All this is done elegantly and entertainingly, without a whiff of condescension toward the nontechnical reader.

Product Details

  • ISBN-13: 9780465025961
  • Publisher: Basic Books
  • Publication date: 9/28/1999
  • Series: Science Masters Series
  • Pages: 176
  • Sales rank: 833,158
  • Product dimensions: 5.31 (w) x 8.03 (h) x 0.46 (d)

Meet the Author


Daniel Hillis holds some forty patents, sits on the scientific advisory board of the Santa Fe Institute, and is a fellow of the Association for Computing Machinery. His many awards include the Hopper Award, the Spirit of American Creativity Award, and the Ramanujan Award. Hillis was named the first Disney Fellow and became vice president of research and development at the Walt Disney Company in 1996.

Read an Excerpt


Chapter 4: How Universal Are Turing Machines?

...Quantum Computing

As noted earlier, the pseudorandom number sequences produced by computers look random, but there is an underlying algorithm that generates them. If you know how a sequence is generated, it is necessarily predictable and not random. If ever we needed an inherently unpredictable random-number sequence, we would have to augment our universal machine with a nondeterministic device for generating randomness.
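
To see why, consider a minimal sketch of a linear congruential generator, one of the oldest pseudorandom algorithms (the code is my illustration, not the book's; the constants are the widely used Numerical Recipes values). Anyone who knows the seed and the constants can replay the "random" sequence exactly:

    # A linear congruential generator (LCG): the output looks random, but the
    # whole sequence is determined by the seed and three fixed constants.
    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield x / m  # scale each value into [0, 1)

    print(list(lcg(seed=42, n=3)))
    print(list(lcg(seed=42, n=3)))  # identical output: nothing here is truly random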

One might imagine such a randomness-generating device as a kind of electronic roulette wheel, but, as we have seen, such a device is not truly random, because the laws of classical physics make its behavior predictable in principle. The only way we know how to achieve genuinely unpredictable effects is to rely on quantum mechanics. Unlike the classical physics of the roulette wheel, in which effects are determined by causes, quantum mechanics produces effects that are purely probabilistic. There is no way of predicting, for example, when a given uranium atom will decay into lead. Therefore one could use a Geiger counter to generate truly random data sequences, something that is impossible in principle for a universal computer to do.

The laws of quantum mechanics raise a number of questions about universal computers that no one has yet answered. At first glance, it would seem that quantum mechanics fits nicely with digital computers, since the word "quantum" conveys essentially the same notion as the word "digital." Like digital phenomena, quantum phenomena exist only in discrete states. From the quantum point of view, the (apparently) continuous, analog nature of the physical world (the flow of electricity, for example) is an illusion caused by our seeing things on a large scale rather than an atomic scale. The good news of quantum mechanics is that at the atomic scale everything is discrete, everything is digital. An electric charge contains a certain number of electrons, and there is no such thing as half an electron. The bad news is that the rules governing how objects interact at this scale are counterintuitive.

For instance, our commonsense notions tell us that one thing cannot be in two places at the same time. In the quantum mechanical world this is not exactly true, because in quantum mechanics nothing can be exactly in any place at all. A single subatomic particle exists everywhere at once, and we are just more likely to observe such a particle at one place than at another. For most purposes, we can think of a particle as being where we observe it to be, but to explain all observed effects we have to acknowledge that the particle is in more than one place. Almost everyone, including many physicists, finds this concept difficult to comprehend.

Might we take advantage of quantum effects to build a more powerful type of computer? As of now, this question remains unanswered, but there are suggestions that such a thing is possible. Atoms seem able to compute certain problems easily, such as how they stick together, problems that are very difficult to compute on a conventional computer. For instance, when two hydrogen atoms bind to an oxygen atom to form a water molecule, these atoms somehow "compute" that the angle between the two bonds should be 107 degrees. It is possible to approximately calculate this angle from quantum mechanical principles using a digital computer, but it takes a long time, and the more accurate the calculation, the longer it takes. Yet every molecule in a glass of water is able to perform this calculation almost instantly. How can a single molecule be so much faster than a digital computer?

The reason it takes the computer so long to calculate this quantum mechanical problem is that the computer would have to take into account an infinite number of possible configurations of the water molecule to produce an exact answer. The calculation must allow for the fact that the atoms comprising the molecule can be in all configurations at once. This is why the computer can only approximate the answer in a finite amount of time. One way of explaining how the water molecule can make the same calculation is to imagine it trying out every possible configuration simultaneously, in other words, using parallel processing. Could we harness this simultaneous computing capability of quantum mechanical objects to produce a more powerful computer? Nobody knows for sure.
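
As a toy illustration of that cost (entirely my own sketch, with a made-up "energy" function whose minimum sits near the 107-degree angle mentioned above; a real quantum-chemistry calculation is far more involved), an exhaustive classical search does work proportional to the number of configurations it tries, and an exact answer would need infinitely many:

    import time

    def toy_energy(angle_deg):
        # Invented stand-in for the molecule's energy; minimized near 107 degrees.
        return (angle_deg - 107.0) ** 2

    def best_angle(resolution):
        # Try every candidate configuration in turn: doubling the resolution
        # doubles the work, and exactness would need infinitely many candidates.
        candidates = [180.0 * i / resolution for i in range(resolution + 1)]
        return min(candidates, key=toy_energy)

    for resolution in (10, 1_000, 100_000):
        start = time.perf_counter()
        angle = best_angle(resolution)
        elapsed = time.perf_counter() - start
        print(f"{resolution:>7} candidates -> {angle:8.4f} degrees ({elapsed:.4f} s)")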

Recently there have been some intriguing hints that we may be able to build a quantum computer that takes advantage of a phenomenon known as entanglement. In a quantum mechanical system, when two particles interact, their fates can become linked in a way utterly unlike anything we see in the classical physical world: when we measure some characteristic of one of them, it affects what we measure in the other, even if the particles are physically separated. Einstein called this effect, which involves no time delay, "spooky action at a distance," and he was famously unhappy with the notion that the world could work that way.

A quantum computer would take advantage of entanglement: a one-bit quantum mechanical memory register would store not just a 1 or a 0; it would store a superposition of many 1's and many 0's. This is analogous to an atom being in many places at once: a bit that is in many states (1 and 0) at once. This is different from being in an intermediate state between a 1 and a 0, because each of the superposed 1's and 0's can be entangled with other bits within the quantum computer. When two such quantum bits are combined in a quantum logic block, each of their superposed states can interact in different ways, producing an even richer set of entanglements. The amount of computation that can be accomplished by a single quantum logic block is very large, perhaps even infinite.
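
These two ideas, superposition and entanglement, can be mimicked numerically with a tiny state-vector simulation (a standard textbook exercise, not code from the book; it assumes the numpy library). A Hadamard gate puts the first qubit into an equal superposition of 0 and 1, and a controlled-NOT then entangles the second qubit with it:

    import numpy as np

    # Two-qubit state vector over the basis |00>, |01>, |10>, |11>.
    state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)           # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0],                 # flips qubit 2 exactly when
                     [0, 1, 0, 0],                 # qubit 1 is 1
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    state = np.kron(H, np.eye(2)) @ state  # qubit 1 now holds 0 and 1 at once
    state = CNOT @ state                   # the two qubits' fates are now linked

    # Amplitude 0.707 on |00> and on |11>, zero elsewhere: measuring either
    # qubit immediately fixes what the other will read.
    print(np.round(state, 3))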

The theory behind quantum computing is well established, but there are still problems in putting it to use. For one thing, how can we use all this computation to compute anything useful? The physicist Peter Shor recently discovered a way to use these quantum effects, at least in principle, to do certain important and difficult calculations, like factoring large numbers, and his work has renewed interest in quantum computers. But many difficulties remain. One problem is that the bits in a quantum computer must remain entangled in order for the computation to work, but the smallest of disturbances (a passing cosmic ray, say, or possibly even the inherent noisiness of the vacuum itself) can destroy the entanglement. (Yes, in quantum mechanics even a vacuum does strange things.) This loss of entanglement, called decoherence, could turn out to be the Achilles' heel of quantum mechanical computers. Moreover, Shor's methods seem to work only on a specific class of computations that can take advantage of a fast operation called a generalized Fourier transform. The problems that fit into this category may well turn out to be easy to compute on a classical Turing machine; if so, Shor's quantum ideas would be equivalent to some program on a conventional computer.
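
To see where the quantum speedup would fit, here is the classical skeleton of Shor's reduction (a standard textbook construction, sketched by me rather than taken from the book): factoring N reduces to finding the period r of a^x mod N, and that period-finding step, done below by slow brute force, is exactly what the generalized Fourier transform would accelerate:

    from math import gcd

    def find_period(a, N):
        # Brute-force the period r of f(x) = a**x mod N. This is the one step
        # a quantum computer would speed up; everything else stays classical.
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_classical(N, a):
        if gcd(a, N) != 1:
            return gcd(a, N)   # lucky guess: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1:
            return None        # odd period: retry with a different a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None        # trivial square root: retry with a different a
        return gcd(y - 1, N)   # a nontrivial factor of N

    print(shor_classical(15, 7))  # prints 3, and 15 // 3 == 5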

If it does become possible for quantum computers to search an infinite number of possibilities at once, then they would be qualitatively, fundamentally more powerful than conventional computing machines. Most scientists would be surprised if quantum mechanics succeeds in providing a kind of computer more powerful than a Turing machine, but science makes progress through a series of surprises. If you're hoping to be surprised by a new sort of computer, quantum mechanics is a good area to keep an eye on.

This leads us back to the philosophical issues touched on at the beginning of the chapter, that is, the relationship between the computer and the human brain. It is certainly conceivable, as at least one well-known physicist has speculated (to hoots from most of his colleagues), that the human brain takes advantage of quantum mechanical effects. Yet there is no evidence whatsoever that this is the case. Certainly, the physics of a neuron depends on quantum mechanics, just as the physics of a transistor does, but there is no evidence that neural processing takes place at the quantum mechanical level as opposed to the classical level; that is, there is no evidence that quantum mechanics is necessary to explain human thought. As far as we know, all the relevant computational properties of a neuron can be simulated on a conventional computer. If this is indeed the case, then it is also possible to simulate a network of tens of billions of such neurons, which means, in turn, that the brain can be simulated on a universal machine. Even if it turns out that the brain takes advantage of quantum computation, we will probably learn how to build devices that take advantage of the same effects, in which case it will still be possible to simulate the human brain with a machine.

The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes nothing away from the wonder or value of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components, but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing's machine...


Table of Contents

Preface: Magic in the Stone
1 Nuts and Bolts 1
2 Universal Building Blocks 21
3 Programming 39
4 How Universal Are Turing Machines? 61
5 Algorithms and Heuristics 77
6 Memory: Information and Secret Codes 91
7 Speed: Parallel Computers 107
8 Computers That Learn and Adapt 121
9 Beyond Engineering 137
Further Reading 155
Acknowledgments 157
Index 159

Customer Reviews

Average Rating: 2 (based on 1 review)

Rating Distribution
  • 5 Star (0)
  • 4 Star (0)
  • 3 Star (0)
  • 2 Star (1)
  • 1 Star (0)

Showing 1 Customer Review
  • Anonymous

    Posted November 5, 2002

    Computers through the eyes of Daniel Hillis

    Computers: do you like them? Well, W. D. Hillis sure does. He wrote The Pattern on the Stone, a book about the simple ideas that make computers work. The introduction to the book tells you about Hillis's views and opinions of computers and how they work. He then explains how he validates his theories and opinions. This is not an actual story but an informative book about computers seen through Daniel's eyes.

    0 out of 1 people found this review helpful.
