COMPLEXITY AND SIMPLICITY
In the early 1950s, the Hungarian-American mathematician John von Neumann was toying with the idea of machines that make machines. The manufacture of cars and electrical appliances was becoming increasingly automated. It wasn't hard to foresee a day when these products would roll off assembly lines with no human intervention whatsoever. What interested Von Neumann particularly was the notion of a machine that could manufacture itself. It would be a robot, actually. It might wander around a warehouse, collecting all the components needed to make a copy of itself. When it had everything, it would assemble the parts into a new machine. Then both machines would duplicate and there would be four ... and then eight ... and then sixteen ...
Von Neumann wasn't interested in making a race of monster machines; he just wondered if such a thing was possible. Or does the idea of a machine that manufactures itself involve some logical contradiction?
Von Neumann further wondered if a machine could make a machine more complex than itself. Then the machine's descendants would grow ever more elaborate, with no limit to their complexity.
These issues fascinated Von Neumann because they were so fundamental. They were the sorts of questions that any bright child might ask, and yet no mathematician, philosopher, scientist, or engineer of the time could answer them. About all that anyone could say was that all existing machines manufactured machines much simpler than themselves. A labyrinthine factory might make a can opener.
Many of Von Neumann's contemporaries were interested in automatons as well. Several postwar college campuses boasted a professor who had built a wry sort of robot pet in a vacant lab or garage. There was Claude Shannon's "Theseus," a maze-solving electric rodent; Ross Ashby's "machina sopora," an automated "fireside cat or dog which only stirs when disturbed," by one account; and W. Grey Walter's restless "tortoise." The tortoise scooted about on motorized wheels, reacting to obstacles in its path but tending to become fouled in carpet shag. When its power ran low, the tortoise refueled from special outlets.
Von Neumann's hunch was that self-reproducing machines are possible. But he suspected that it would be impractical to build one with 1950s technology. He felt that self-reproducing machines must meet or exceed a certain minimum level of complexity. This level of complexity would be difficult to implement with vacuum tubes, relays, and similar components. Further, a self-reproducing automaton would have to be a full-fledged robot. It would have to "see" well enough to recognize needed components. It would require a mechanical arm supple enough to grip vacuum tubes without crushing or dropping them, agile enough to work a screwdriver or soldering iron. As much as Von Neumann felt a machine could handle these tasks in principle, it was clear that he would never live to see it.
Inspiration came from an unlikely source. Von Neumann had supervised the design of the computers used for the Manhattan Project. For the Los Alamos scientists, the computers were a novelty. Many played with the computers after hours.
Mathematician Stanislaw M. Ulam liked to invent pattern games for the computers. Given certain fixed rules, the computer would print out ever-changing patterns. Many patterns grew almost as if they were alive. A simple square would evolve into a delicate, corallike growth. Two patterns would "fight" over territory, sometimes leading to mutual annihilation. Ulam developed three-dimensional games too, constructing thickets of colored cubes as prescribed by computer.
Ulam called the patterns "recursively defined geometric objects." Recursive is a mathematical term for a repeated procedure, in this case, the repeated rules by which the computers generated the patterns. Ulam found the growth of patterns to defy analysis. The patterns seemed to exist in an abstract world with its own physical laws.
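The excerpt does not record Ulam's exact rules, so the sketch below uses a simple growth rule in the same spirit (an illustration, not Ulam's own game): a dead cell is born when exactly one of its four orthogonal neighbors is alive, and live cells never die. Starting from a single square, the pattern grows into just the kind of delicate, coral-like shape described above.

```python
def step(alive):
    # One generation: a dead cell is born when exactly one of its
    # four orthogonal neighbours is alive; live cells never die.
    births = {}
    for (x, y) in alive:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in alive:
                births[nb] = births.get(nb, 0) + 1
    return alive | {cell for cell, n in births.items() if n == 1}

cells = {(0, 0)}                 # start from a single square
history = [len(cells)]
for _ in range(5):
    cells = step(cells)
    history.append(len(cells))   # population after each generation
```

Even a rule this small produces lacy, branching growth that is hard to predict from the rule alone, which is just the quality Ulam found so striking.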
Ulam suggested that Von Neumann "construct" an abstract universe for his analysis of machine reproduction. It would be an imaginary world with self-consistent rules, as in Ulam's computer games. It would be a world complex enough to embrace all the essentials of machine operation but otherwise as simple as possible. The rules governing the world would be a simplified physics. A proof of machine reproduction ought to be easier to devise in such an imaginary world, as all the nonessential points of engineering would be stripped away.
The idea appealed to Von Neumann. He was used to thinking of computers and other machines in terms of circuit or logic diagrams. A circuit diagram is a two-dimensional drawing, yet it can represent any conceivable three-dimensional electronic device. Von Neumann therefore made his imaginary world two-dimensional.
Ulam's games were "cellular" games. Each pattern was composed of square (or sometimes triangular or hexagonal) cells. In effect, the games were played on limitless checkerboards. All growth and change of patterns took place in discrete jumps. From moment to moment, the fate of a given cell depended only on the states of its neighboring cells.
The advantage to the cellular structure is that it allows a much simpler "physics." Basically, Von Neumann wanted to create a world of animated logic diagrams. Without the cellular structure, there would be infinitely many possible connections between components. The rules needed to govern the abstract world would probably be complicated.
So Von Neumann adopted an infinite checkerboard as his universe. Each square cell could be in any of a number of states corresponding roughly to machine components. A "machine" was a pattern of such cells.
Von Neumann could have allowed a distinct cellular state for every possible component of a machine. The fewer the states, the simpler the physics, however. After some juggling, he settled on a cellular array with 29 different states for its cells. Twenty-eight of the states are simple machine components; one is the empty state of unoccupied cells. The state of a cell in the next moment of time depends only on its current state and the states of its four bordering ("orthogonal") neighbors.
Von Neumann's cellular space can be thought of as an exotic, solitaire form of chess. The board is limitless, and each square can be empty or contain one of 28 types of game pieces. The lone player arranges game pieces in an initial pattern. From then on, strict rules determine all successive configurations of the board.
Since the player has no further say, one might as well imagine that the game's rules are automatically carried out from one move to the next. Then the player need only sit back and watch the changing patterns of game pieces that evolve.
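The mechanics of such a cellular game can be sketched in a few lines of Python. Von Neumann's 29-state transition table is far too large to reproduce here, so this sketch substitutes a much simpler two-state stand-in, the "parity" rule often associated with Edward Fredkin: a cell's next state is the XOR of its four orthogonal neighbors. Because the rule is linear, any starting pattern trivially copies itself into four displaced copies after a power-of-two number of steps.

```python
def parity_step(alive):
    # Next state of every cell = parity (XOR) of its four orthogonal
    # neighbours; the cell's own current state plays no role.
    counts = {}
    for (x, y) in alive:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            counts[nb] = counts.get(nb, 0) + 1
    return {cell for cell, n in counts.items() if n % 2 == 1}

pattern = {(0, 0), (1, 0)}     # any small starting pattern
for _ in range(4):             # run 2**k steps, here k = 2
    pattern = parity_step(pattern)
# the board now holds four copies of the original, shifted 4 cells
# in each orthogonal direction
```

This easy, crystal-like copying is precisely what Von Neumann's construction goes beyond: his pattern does not merely get smeared into copies by a linear rule, it reads its own description and builds a copy.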
What Von Neumann did is this: He proved that there are starting patterns that can reproduce themselves. Start with a self-reproducing pattern, let the rules of the cellular space take their course, and there will eventually be two patterns, and then four, and then eight ...
Von Neumann's pattern, or machine, reproduced in a very general, powerful way. It contained a complete description of its own organization. It used that information to build a new copy of itself. Von Neumann's machine reproduction was thus more akin to the reproduction of living organisms than to the growth of crystals, for instance. Von Neumann's suspicion that a self-reproducing machine would have to be complicated was right. Even in his simplified checkerboard universe, a self-reproducing pattern required about 200,000 squares.
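The logical trick at the heart of the construction, a system that carries a description of itself and uses that description to build a copy, has a familiar programming analogue in the "quine," a program that constructs its own source text. A minimal Python sketch (an analogy only, not Von Neumann's machine):

```python
# The string 'source' is a description of the whole two-line program,
# and the program uses that description to construct an exact copy of
# itself, much as von Neumann's pattern uses its stored blueprint.
source = 'source = %r\ncopy = source %% source'
copy = source % source
# 'copy' now holds the full text of the two lines above, and running
# that text would in turn produce the same copy again.
```

The key point, in both the quine and Von Neumann's machine, is that the description is used twice: once interpreted as instructions to build the offspring, and once copied verbatim so the offspring can reproduce in its turn.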
Since Von Neumann was able to prove that a self-reproducing machine is possible in an imaginary but logical world, no logical contradiction can be inherent in the concept of a self-reproducing machine. Ergo, a self-reproducing machine is possible in our world. No one has yet made a self-reproducing machine, but today no logician or engineer doubts that it is possible.
VON NEUMANN'S MACHINE AND BIOLOGY
Not only can a machine manufacture itself, but Von Neumann was also able to show that a machine can build a machine more complex than itself. As it happens, these facts have been of almost no use (thus far, at least) to the designers of machinery. Von Neumann's hypothetical automatons have probably had their greatest impact in biology.
One of the longest running controversies in biology is whether the actions of living organisms can be reduced to chemistry and physics. Living matter is so different from nonliving matter that it seems something more than mere chemistry and physics must be at work. On the other hand, perhaps the organization of living matter is so intricate that the observed properties of living organisms do follow, ultimately, from chemistry and physics.
The latter viewpoint is held by virtually all biologists today, but it is not particularly new. René Descartes believed the human body to be a machine. By that he meant that the body is composed of substances interacting in predictable ways according to the same physical laws that apply to nonliving matter. Descartes felt the body is understandable and predictable, just as a mechanical device is.
Queen Christina of Sweden, a pupil of Descartes, questioned this belief: How can a machine reproduce itself? No known inanimate object reproduces in the way that living organisms do.
Not until Von Neumann did anyone have a good rebuttal to Christina's objection. Many biologists postulated a "life force" to explain reproduction and other properties of living matter. The life force was the reason that living matter is so different from nonliving matter. It supplemented the ordinary physical laws in living matter. Because the life force could never apply to a mechanical device, it was pointless to compare living organisms to machines.
At the simplest level, Von Neumann's self-reproducing machine is nothing like a living organism. It contains no DNA, no protein; it is not even made of atoms. To Von Neumann such considerations were beside the point. Von Neumann's abstract machine demonstrates that self-reproduction is possible solely within the context of physics and logic. No ad hoc life force is necessary.
It became reasonable to see living cells as complex self-reproducing machines. A living cell must perform many tasks besides reproducing itself, of course. So it is not surprising that real cells are much more difficult to understand than Von Neumann's machine.
By coincidence, Watson and Crick's work on the structure of DNA took place concurrently with much of Von Neumann's study of machine reproduction. By about 1951, molecular biologists had identified structural proteins, enzymes, nucleotides, and most other important components of cells. The genetic information of cells was known to be encoded in observable structures (chromosomes) composed of nucleotides and certain proteins. Still, no one had any concrete idea how these components managed to reproduce themselves. It might yet require some sort of life force.
The discoveries of DNA's structure and the genetic code were first steps in understanding how assemblages of molecules implement their own reproduction. It is remarkable that the logical basis of reproduction in living cells is almost identical to that of Von Neumann's machine. There is a part of Von Neumann's machine that corresponds to DNA, another part that corresponds to ribosomes, another part that does the job of certain enzymes. All this tends to confirm that Von Neumann was successful in formulating the simplest truly general form of self-reproduction.
Biologists have adopted Von Neumann's view that the essence of self-reproduction is organization—the ability of a system to contain a complete description of itself and use that information to create new copies. The life force idea was dropped. Biology is now seen as a derivative science whose laws can (in principle) be derived from more fundamental laws of chemistry and physics.
IS PHYSICS DERIVATIVE TOO?
"What really interests me," Albert Einstein once remarked, "is whether God had any choice in the creation of the world." Einstein was wondering if the world and its physical laws are arbitrary or if they are somehow inevitable. To put it another way: Could the universe possibly be any different than it is?
In many ways, the world seems arbitrary. An electron weighs 0.00000000000000000000000000091096 grams. The stars cluster in galaxies, some spiral and some elliptical. No one knows of any obvious reason why an electron couldn't be a little heavier or why stars aren't spaced evenly through space. It is not hard to imagine a world in which a few physical constants are different or a world in which everything is different.
Einstein's statement draws a variety of reactions. One's feelings about the nature or reality of God are largely immaterial. The issue is whether there can be "prior" constraints even on the creation of the world.
One reaction is that there are no such constraints. By definition, an all-powerful God can create the world any way He chooses. Everything about the universe is God's choice. Creation was not limited, not even by logic. God created logic.
On the other hand, it can be argued that even God is bound by logic. If God can lift any weight, then He is expressly prevented from being able to create a weight so heavy that He cannot lift it. But God can do anything that does not involve a logical contradiction.
Einstein evidently sympathized with the latter argument. He realized that it might not be easy to recognize logical contradictions, however. Suppose that by some recondite chain of reasoning it is possible to show that the speed of light must be exactly 299,792.458 kilometers per second. Then the notion of a world in which the speed of light is even slightly different would involve a contradiction.
Conceivably, all the laws of physics could have a justification in pure logic. Then God would have had no freedom whatsoever in determining physics.
Taking this idea further, perhaps the initial conditions of our world were also determined by arcane logic. The big bang had to occur precisely the way it did; the galaxies had to form just the way they did.
In that case, God would have had no choice in creation at all. He could not even have made a world in which forks were placed on the right side of plates or in which New Zealand didn't exist. This would be the only possible world.
Some people dismiss Einstein's statement as too metaphysical—the sort of thing that could be argued endlessly without getting anywhere. In fact, modern physics has much to say about choice in creation.
More and more physical laws have turned out to be derivative. If law B is the inevitable consequence of law A, then only law A is really an independent law of nature. B is a mere logical implication.
The chain of implication can have many links. A biological phenomenon may be explained in terms of chemistry. The chemistry may be explained in terms of atomic physics. The atomic physics may be explained in terms of subatomic physics. Then the laws of subatomic physics suffice to explain the original biology.
This is what the reductionist mode of thought seeks. To understand a phenomenon is to be able to give reasons for it—to see the phenomenon as inevitable rather than arbitrary. Reductionism has been a keystone of Western science. Einstein wondered if reductionism can proceed endlessly or whether it will reach a dead end.
Marquis Pierre Simon de Laplace, one of the founders of the reductionist tradition, believed that everything is knowable. He illustrated his point with a hypothetical cosmic intelligence that might, in principle, know the details of every atom in the cosmos. Given a knowledge of the laws of physics, the being should be able to predict the future and reconstruct the past, Laplace felt: "Nothing would be uncertain and the future, as the past, could be present to its eyes."
Physicists are close to a comprehensive theory of nature in which all the world's phenomena are reduced to just two fundamentally different types of particles interacting by just two types of force. This long-awaited synthesis goes by the name of grand unified theory (GUT). Some physicists contemplate an even broader theory in which everything is reduced to one particle and one force. This is labeled a super-grand unified theory. It is always dangerous to suppose that physicists have learned, or have nearly learned, everything there is to know about physics. Laplace's cosmic being was evidently prompted by his belief that eighteenth-century physics was nearly complete; late nineteenth-century physicists expressed similar beliefs just in time to have them demolished by quantum theory and relativity. Nonetheless, physicists are (again) facing the prospect of a comprehensive theory of nature, one that may lie in the completion of current theoretical projects. It is a good time to reexamine the cornerstones of reductionism.
Excerpted from The Recursive Universe by William Poundstone. Copyright © 2013 William Poundstone. Excerpted by permission of Dover Publications, Inc..