The Quantum Brain: The Search for Freedom and the Next Generation of Man by Jeffrey Satinover
"The Quantum Brain is an adventure in the science of ideas. It is the first book on the brain that combines a grasp of the physics of the microcosm and the technologies of artificial intelligence, neural networks, and self-organizing systems, with a recognition of the transcendent properties that define the mind and differentiate it from matter. Although the subject is inherently difficult and novel, Jeffrey Satinover is an inspired guide through the fertile areas of convergence among the pivotal sciences of the age. From such insights will emerge both new technologies and new philosophies and theologies for the twenty-first century."
-George Gilder, Editor, Gilder Technology Report
"Many authors have written about one or two of the topics covered in The Quantum Brain. Jeffrey Satinover's book is unique in trying to tie everything together."
-Michael E. Kellman, Professor of Theoretical Chemistry, University of Oregon
"Thoroughly researched . . . and told as a gripping tale, thanks to Dr. Satinover's . . . gift for the narrative. A marvelous introduction to the most fascinating question the human brain can address: its own working."
-R. Shankar, Professor of Physics and Applied Physics, Yale University
"A thrilling journey through the world of brain research. The author has set new standards for popular science writing by making arcane topics . . . easy to follow. A tapestry of insights."
-Jack Tuszynski, Professor of Physics, University of Alberta
"I wish I had written this visionary book."
-Professor Hugo de Garis, Head, Starbrain Project, Starlab's Artificial Brain Project
Read an Excerpt
2. Opening the Mind's Eye

In the seventeenth century, shortly after the invention of the telescope, the great mathematician, computational scientist, and religious philosopher Blaise Pascal undertook a careful study of the motion of the planets. One day he mentioned to his mother that, like the moon, Venus goes through phases. Pascal handed the telescope to his mother so she could take a look at the planet, which at that moment happened to be in a crescent phase. Though not herself a scientist, Pascal's mother was an intelligent woman with a keen interest in science. She had never before looked through a telescope, and when she did so, she promptly observed that "Yes, of course Venus has phases, but it's the reverse of what it should be."
Pascal was startled. "How do you know that?" Telescopes did indeed invert images, but how would his mother know what the proper phase should look like, having never had her hands on a telescope before-indeed, how could she know that Venus had a phase at all? The phases of Venus were only discovered once telescopes were available.
"Because I can also see it with my naked eye."
Pascal had no particular response to this assertion. He certainly could not see Venus as anything more than a large point of light.
Was her vision better? Was she more observant? Three hundred years later we would find the answer. It was not her vision, it was the human retina. The retina is not really a "part" of the eye at all. It is rather a physical extrusion of the brain itself into the eye. And like other brain matter, the retina therefore is organized not merely to gather light but to intelligently process visual data. It is a vastly powerful, sophisticated, and complex pattern discriminator and classifier. Insofar as one may refer to "machines that think," the retina is an enormously subtle thinking machine-a biological computer of great power and beauty, packed into an impressively small size. Imagine, if you will, a desktop computer about the size and thickness and delicacy of a rose petal. And what limitations it has are now understood to lie far below the threshold of acuity needed to detect the phases of Venus with the unaided human eye-perhaps with the unaided eye of any terrestrial creature.
For the answer to Pascal's query we can largely credit the efforts of one flamboyant scientist, Frank Rosenblatt. Rosenblatt was a pioneer of neural computation-the construction of electronic devices that process information not according to "top-down" rules of logic but by mimicry of the "bottom-up" wanderings of nature. His prime device-the Perceptron-today sits in the Smithsonian, next to the von Neumann computer that made possible the atom bomb. Of the significance of the latter, very many are at least dimly aware: Every PC in the world was fathered by it; but of the former, very few. Indeed, there remains considerable irritation in some quarters that the Perceptron is there at all. But it is the father of the computer of the future, and it was modeled on the human retina.
Rosenblatt had developed a fierce and precocious fascination with the quest for machine-based intelligence, just then beginning, in the late 1940s. It seemed obvious, in those days, that "artificial intelligence" could best be developed by studying and copying natural intelligence. But the task proved more difficult than anticipated. Within twenty years, the burgeoning science of computation would abandon the biological template to develop the kind of machines that today sit on almost every desktop.
Like many young scientists to follow, Rosenblatt's passionate quest for biologically informed computation was fueled by a fantastically clever and influential article written by two pioneers who would both later work at the Massachusetts Institute of Technology. In 1943 Warren McCulloch and Walter Pitts published a piece in the Bulletin of Mathematical Biophysics entitled "A Logical Calculus of the Ideas Immanent in Nervous Activity." With mathematical certainty, the article showed that a collection of nerve cells was not only capable of computing, but given how individual neurons behaved, and how they were connected to each other-with a lot of randomness-they would necessarily compute.
In particular, they showed that the "ideas" embodied in a collection of neurons were not explicit, as in high-level human languages, but implicit ("immanent"), carried by the collection of neurons as a whole in much the same way that in matchbox Hexapawn, no individual matchbox embodies the "idea" of the game, but, once trained, the entire assemblage of matchboxes does. McCulloch and Pitts's paper is little known to the world at large; to the computational science community it is universally known and admired, and has been credited with spurring the entire computer revolution. But the first use to which it was put was in creating a neural network: a densely interconnected set of elementary processing elements that, as a whole, could spontaneously develop powerful intelligence.
Fifteen years after McCulloch and Pitts's seminal paper, Frank Rosenblatt created the Perceptron, a neural network based on the retina. By repeatedly processing information in network fashion-also called distributed or massively parallel processing-a group of even relatively simple neurons can acquire a fantastic capacity for discrimination (as the HER matchboxes acquired strategic ability). This is why the naked eye can in fact detect the crescent shape of Venus.
The elements in a neural network, and the neurons in the retina, operate roughly like this. An incoming signal (say, the local intensity of light) stimulates a detector neuron, which transforms the intensity into an electrical signal of corresponding strength and then distributes it to many other neurons. There are many such detector neurons, and they all distribute their individualized output to the many other neurons. Each adds up its inputs and similarly converts the net result into a corresponding output. In short, each of many neurons receives many different inputs from which each synthesizes a single output to distribute to many others-hence, "massively parallel."
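The arithmetic in that description is simple enough to write out. The following is a toy sketch in Python (the names, weights, and threshold activation are illustrative choices, not taken from the book): each unit sums its weighted inputs and passes a single thresholded output onward, and every unit in a layer receives all of the incoming signals at once.

```python
def unit_output(inputs, weights):
    """One processing element: sum the weighted inputs, emit a single value."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if total > 0.0 else 0.0  # simple threshold activation

def layer(inputs, weight_rows):
    """Every unit receives all the inputs and emits one output to pass
    onward -- the "massively parallel" fan-in and fan-out."""
    return [unit_output(inputs, row) for row in weight_rows]

# Two detector signals fanned out to three downstream units:
signals = [0.8, 0.2]
weights = [[1.0, -1.0],   # each row: one downstream unit's connection strengths
           [-1.0, 1.0],
           [0.5, 0.5]]
print(layer(signals, weights))  # -> [1.0, 0.0, 1.0]
```

Real retinal circuitry is vastly more elaborate, but this fan-in/fan-out pattern is the basic skeleton the text describes.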
If that's all there were to it, nothing would happen: Such a system couldn't learn. But the network of neurons has an additional mechanism that is the equivalent of HER jellybeans. The connections between neurons are themselves of varying strength (usually called "weight"): Depending on their weights, the connections either enhance the signal they are transmitting or diminish it. Since at the beginning these weights differ at random, neural nets initially scramble any incoming signal and put out noise. But in a living nervous system, the system itself modifies the weights in light of experience: Connections that frequently carry signals, especially strong ones, are themselves strengthened; connections that infrequently carry signals, or mostly weak ones, are themselves weakened, a mechanism first outlined in 1949 by neuropsychologist Donald O. Hebb in what has become a landmark book, The Organization of Behavior: A Neuropsychological Theory.
It's almost like a statistical reasoning process: "Hmm. It seems to happen again and again that whenever interest rates are lowered, stock prices go up. From now on, whenever interest rates go down, I'm going to get excited about the stock market, even though I have no theory whatsoever as to why the two events should be connected. They just are-in my mind, at least."
Over time, the network diminishes connections that contribute mostly "noise" and bolsters connections that for the most part "work." Eventually the network "memorizes" the incoming pattern as a specific distribution of varying connection strengths, in the same way that HER "memorized" the strategy of Hexapawn as a distribution of varying matchbox contents. Furthermore, as long as the density of interconnections among neurons is sufficient, the connections themselves can be random: No "wiring diagram" is needed, just as HER needs no logical instructions in strategy.
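Hebb's strengthen-what-fires-together rule can be sketched in a few lines. This is a deliberately simplified toy (the learning and decay rates are arbitrary illustrative values, not from Hebb or the book): a connection grows when its input and the cell's output are active together, and every connection slowly decays, so rarely used connections fade away while frequently used ones win out.

```python
def hebbian_step(weights, pre, post, lr=0.05, decay=0.01):
    """One Hebbian update: strengthen connections whose input (pre) and
    output (post) activity coincide; let all connections slowly decay."""
    return [w + lr * x * post - decay * w for w, x in zip(weights, pre)]

# Repeatedly present a pattern in which the first two inputs fire
# together with the cell; the third input never fires.
w = [0.0, 0.0, 0.0]
for _ in range(100):
    w = hebbian_step(w, pre=[1.0, 1.0, 0.0], post=1.0)
print(w)  # the first two weights have grown; the third stays at zero
```

The distribution of weights that remains after many such steps is the network's "memory" of the pattern, in just the sense the paragraph above describes.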
In an artificial device, we modify the weights by hand, from the outside. Biologically plausible schemes such as "Hebbian learning" were the first step toward an understanding of how local, lower-level systems can influence the global behavior of a composite whole, without external supervision or human tinkerers.
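Externally supervised weight adjustment of the kind just mentioned is exactly what Rosenblatt's Perceptron did. Below is a minimal illustrative perceptron in Python (the function names and the toy AND-gate task are my own, not from the book): a "teacher" supplies the correct answer, and each weight is nudged in proportion to the error, from the outside.

```python
import random

random.seed(0)  # fixed seed so this sketch gives reproducible results

def perceptron_train(samples, epochs=50, lr=0.1):
    """Train a single-layer perceptron on (inputs, target) pairs (targets 0 or 1)."""
    n = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]  # random start: pure noise
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output  # the "teacher" supplies the correct answer
            # Rosenblatt's rule: nudge each weight by its input times the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# A toy discrimination task (logical AND), which a perceptron can learn:
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The contrast with the Hebbian scheme is the point: here the error signal comes from outside the system, whereas in a living nervous system the weights adjust themselves.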
This relatively simple process conforms to the actual structure of networked biological neurons. These typically have multiple short input channels called "dendrites" and a single long output channel called an "axon." The axon then branches out to reach at least one dendrite on many other neurons.
The structure itself suggests that the body of the neuron combines multiple inputs to produce some sort of overall total (or average), which it then puts out. Figure 2-1 shows how artificial processing elements are modeled directly on this biological structure...
Meet the Author
JEFFREY SATINOVER is a practicing psychiatrist, past president of the C. G. Jung Foundation, and a former Fellow in Psychiatry and Child Psychiatry at Yale University. He has been a William James Lecturer in Psychology and Religion at Harvard University. His other books include Cracking the Bible Code.