1: The Headlong Rush of Time
If our world survives, the next great challenge to watch out for will come, you heard it here first, when the curves of research and development in artificial intelligence, molecular biology, and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.
Thomas Pynchon, New York Times Book Review, 1984
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
Vernor Vinge, NASA Vision-21 Symposium, 1993
It rushes at you, the future.
Usually we don't notice that. We are unaware of its gallop. Time might not be a rushing black wall coming at us from the future, but that's surely how it looks when you stare unflinchingly at the year 2050 and beyond, at the strange creatures on the near horizon of time (our own grandchildren, or even ourselves, technologically preserved and enhanced). Call them transhumans or even posthumans.
The initial transition into posthumanity, for people intimately linked to specially designed computerized neural nets, might not wait until 2050. It could happen even earlier. Twenty-forty. Twenty-thirty. Maybe sooner, as Vinge predicted. This is no longer the deep, the inconceivably distant future. These are the dates when quite a few young adults today expect to be packing up their private possessions and leaving the office for the last time, headed for retirement. These are dates when today's babes in arms will be strong adults in the prime of life.
Around 2050, or maybe even 2030, is when a technological Singularity, as it's been termed, is expected to erupt. That, at any rate, is the considered opinion of a number of informed if unusually adventurous scientists. Professor Vinge called this projected event "the technological Singularity," something of a mouthful. I call it "the Spike," an upward jab on the chart of change, a time of upheaval unprecedented in human history.
And, of course, it's a profoundly suspect suggestion. We've heard this sort of thing prophesied quite recently, in literally Apocalyptic religious revelations of millennial End Time and Rapture.
That's not the kind of upheaval I'm describing.
A number of perfectly rational, well-informed, and extremely smart scientists are anticipating a Singularity, a barrier to confident anticipation of future technologies. I prefer the term Spike because, charted as change over time, its exponential curve looks like exactly that: a spike. Here's a picture of it:
As you see, the more the curve grows, the larger is each subsequent bound upward. It takes a long time to double the original value, but the same period again gets you four times farther up the curve, then eight times…so that after just ten doublings, you've risen a thousand times as far, then two thousand, and on it goes. Note this: the time it takes to go from one to two, and then from two to four, is just the same period needed to take that mighty leap from 1000 to 2000. A short time later we're talking a millionfold increase in a single step, and the very next step after that is two millionfold…
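For readers who prefer to see the numbers, here is a minimal Python sketch of the doubling curve just described. Nothing here comes from the book itself; it is simply the arithmetic of the text made explicit:

```python
# Twenty doublings of an initial value of one: each step takes the same
# fixed interval of time, yet each bound upward is bigger than the last.
values = [2 ** step for step in range(21)]

for step, value in enumerate(values):
    print(f"step {step:2d}: {value:>9,}")

# Step 10 lands near a thousand, step 20 near a million: equal intervals
# of time, wildly unequal leaps.
```

Run it and the table shows the point of the chart: the jump from step 19 to step 20 alone is bigger than everything gained in the first eighteen steps combined.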
History's slowly rising trajectory of progress over tens of thousands of years, having taken a swift turn upward in recent centuries and decades, quickly roars straight up some time after 2030 and before 2100. That's the Spike. Change in technology and medicine moves off the scale of standard measurements: it goes asymptotic, as a mathematician would say. An asymptote is a line that a curve approaches ever more closely; here the curve of change bends more and more sharply until it is heading almost straight along the vertical axis, in this case, up the page into the future.
So the curve of technological change is getting closer and closer to the utterly vertical in a shorter and shorter time. At the limit, which is reached quite quickly (disproving Zeno's ancient paradox that swift Achilles can never catch the tortoise if it has a head start), the curve tends toward infinity. It rips through the top of the graph and is never seen again.
At the Spike, we can confidently expect that some form of intelligence (human, silicon, or a blend of the two) will emerge at a posthuman level. At that point, all the standard rules and cultural projections go into the waste-paper basket.
A quick preliminary stroll through the future
Everything you think you know about the future is wrong.
How can that be? Back in the 1970s, Alvin Toffler warned of future shock, the concussion we feel when change slaps us in the back of the head. But aren't we smarter now, in the twenty-first century? We have wild, ambitious expectations of the future; we're not frightened of it. How could it surprise us, now that Star Trek and Star Wars and Terminator movies and The Matrix and a hundred computer role-playing games have domesticated the twenty-fourth century, cyberspace virtual realities, and a galaxy far, far away?
Actually, I blame glitzy mass-market science fiction script writers for misleading us. They got it so wrong. Their enjoyable futures, by and large, are about as plausible as nineteenth-century visions of tomorrow. Those had dirigibles filling the skies and bonneted ladies in crinolines tapping at telegraphs.
Back in the middle of the twentieth century, when the futuristic stories I read as a kid were being written, most people knew "that Buck Rogers stuff" was laughable fantasy, suitable only for children. After all, it talked about atomic power and landing on the moon and time travel and robots that would do your bidding even if you were rude to them. Who could take such nonsense seriously?
Twenty years later, men had walked on the moon, nuclear power was already obsolete in some countries, and computers could be found in any university. Another two decades on, in the nineties, probes sent us vivid images from the solar system's far reaches (and got lost on Mars), immensely powerful but affordable personal computers sat on desks at home as well as work, the human genome was being sequenced, and advanced physics told us that even time travel through spacetime wormholes was not necessarily insane (although it was surely not in the immediate offing).
So popular entertainment belatedly got the message, spurred on by prodigious advances in computerized graphics. Sadly, the movie, television, and game makers still didn't know a quark from a kumquat, a light-year (a unit of interstellar distance) from a picosecond (a very brief time interval). With gusto and cascades of light, they blended made-up technobabble with exhilarating fairy stories, shifting adventure sagas from ancient legends and myth into outer space. It was great fun, but it twisted our sense of the future away from an almost inconceivably strange reality (which is the way it will actually happen) and back into safe childhood, that endless temptation of fantastic art.
Maybe you think I'm about to get all preachy and sanctimonious. You're waiting for the doom and gloom: rising seas and greenhouse nightmare, cloned tyrants, population bomb, monster global megacorporations with their evil genetically engineered foods and monopoly stranglehold on the crop seeds needed by a starving Third World. Wrong. Some of those factors indeed threaten the security of our planet, but not for much longer (unless things go very bad indeed, very quickly). No, what's wrong with most media images of the future isn't their evasion of such threats; on the contrary, they play them up to the point of absurdity. What's wrong is their laughably timid conservatism.
The future is going to be a fast, wild ride into strangeness. And many of us will still be there as it happens.
That strangeness is exactly what prevents us from picking out any one clear determinate future. The coming world of the Spike is, strictly, unimaginable, but we can certainly try our best to trace some of the contributing factors, and some of the ways they'll converge (or perhaps block each other). That fact governs my approach in this book. Do not expect a dogmatic manifesto advancing a single thesis. Instead, I'll try to give you a glimpse of many different technologies. I won't attempt the impossible, which is to integrate all those different points of view into one comforting, assured framework. There is no inevitable tomorrow.
All that we know for sure is the almost unstoppable acceleration of science and technology, and the drastic impact it will have upon humanity and our world.
Living in the future right now
This accelerating world of drastic change won't wait until, say, Star Trek's twenty-fourth century, let alone the year 3000. We can expect extraordinary disruptions within the next half century. Many of those changes will probably start to impact well before that. By the end of the twenty-first century, there might well be no humans (as we recognize ourselves) left on the planet, but, paradoxically, nobody alive then will complain about that, any more than we now bewail the loss of Neanderthals.
That sounds rather tasteless, but I mean it literally: many of us will still be here, but we won't be human any longer, not the current model, anyway. Our children, and perhaps we as well, will be smarter. We already have experimental hints of how that might occur. In September 1999, molecular biologists at Princeton reported adding a gene to a strain of mice, elevating their production of NR2B protein. The improved brains of these "Doogie mice" used this extra NR2B to enhance brain receptors, helping the animals solve puzzles much faster. A kind of genetic turboaccelerator for mousy intelligence. Human brains, as it happens, use an almost identical protein. It is not far-fetched to suppose that we will learn to tweak or supplement it to increase our own effective intelligence (or that of our children).
Nor will we be the only high-level intelligences on the planet. By the close of the twenty-first century, there will be vast numbers of conscious but artificial minds on earth. How we and our children get along with them as they move out of the labs and into the marketplace will determine the history of life in the solar system, and maybe the universe.
I'm not making this up. Dr. Hans Moravec, a robotics pioneer at Carnegie Mellon University in Pittsburgh, argues in Robot (1999) that we can expect machines equal to human brains within forty years at the latest. Already, primitive robots operate at the level of spiders or lizards. Soon a robot kitten will be running about in Japan, driven by an artificial brain designed and built by Australian Dr. Hugo de Garis. True, it's a vast leap from lizard to monkey and then human, but computers are doubling in speed and memory every year.
This is the hard bit to grasp: with that kind of annual doubling in power, you jump by a factor of 1000 every decade. In twenty years, the same price (adjusted for inflation) will buy you a computer a million times more powerful than your current model. That's "Moore's law," enunciated in 1965 by Gordon E. Moore, who went on to cofound the Intel company, which now makes the Pentium chip for your personal computer. Moore originally surmised that the number of components on an integrated circuit (IC) would double each year. If that were to happen, 65,000 transistors would dance on an IC within ten years. That was a little ambitious, but turned out to be close to reality, a result nobody could have believed in 1965. Moore's conjecture changed as time passed, first slowing down to "doubling every two years," then speeding back up to "doubling every eighteen months." It remained an astonishing prediction, and an amazing phenomenon.
Moore's law (although of course it isn't really anything like a law of nature) makes for disarmingly simple algebra, the kind even someone uncomfortable with figures can follow: growth goes as two raised to the power of N, where N is the number of doubling periods that have elapsed, and each period is currently conjectured to last about eighteen months. With this equation you can work out how long it takes to get to a millionfold increase, say, by following Moore's (revised) law through the following simple steps:
• two to the tenth power equals roughly 1000
• two to the twentieth power equals roughly a million
• and two to the fortieth power equals roughly a thousand billion.
If, to be conservative, a single doubling happens during each two-year period, then every twenty years we get a thousand times as much computational power per dollar as we started with.
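The same arithmetic can be checked in a few lines of Python. This is only an illustrative sketch, not anything from Moore himself; the `doublings_needed` helper is my own name for the obvious counting loop:

```python
def doublings_needed(factor):
    """Count how many doublings are required to multiply power by `factor`."""
    n = 0
    power = 1
    while power < factor:
        power *= 2
        n += 1
    return n

# A millionfold gain takes 20 doublings (2**20 is just over a million).
# How many calendar years that means depends on the doubling period.
for period_years in (1.0, 1.5, 2.0):
    n = doublings_needed(1_000_000)
    print(f"doubling every {period_years} years: "
          f"millionfold gain in {n * period_years:.0f} years")
```

With a one-year period the millionfold gain arrives in 20 years; at the conservative two-year period it takes 40, exactly the "thousand times every twenty years" figure in the text.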
At the start of the 2000s, the world's best, immensely expensive supercomputers perform several trillion operations a second. To emulate a human mind, Moravec estimates, we'll need systems a hundred times better. Advanced research machines might meet that benchmark within a decade, or sooner, but it will take another ten or twenty years for the comparable home machine at a notepad's price. Still, around 2040, expect to own a computer with the brain power of a human being. And what will that be like? If software develops at the same pace, we will abruptly find ourselves in a world of alien minds as good as our own.
Will they take our orders and quietly do our bidding? If they're designed right, maybe. But that's not the kicker. That's just the familiar world of third-rate sci-fi movies with clunky or sexy-voiced robots. The key to future change comes from what's called "self-bootstrapping": machines and programs that modify their own design, optimize their functioning, improve themselves in ways that limited human minds can't even start to understand. Dr. de Garis calls such beings "artilects," and even though he's building their predecessors he admits he's scared stiff.
By the end of the twenty-first century, computer maven Ray Kurzweil (in The Age of Spiritual Machines, 1999) expects a merging of machines and humans, allowing us to shift consciousness from place to place. He's got an equally impressive track record, as a leading software designer and specialist in voice-activated systems. His time line for the future is even more hair-raising than Moravec's. In a decade, he tells us, expect desktop machines with the grunt of today's best supercomputers, a trillion operations a second. Forget keyboardswe'll speak to these machines, and they'll speak back in the guise of plausible personalities.
By 2020, a Pentium equivalent will equal a human brain. And now the second great innovation kicks in: molecular nanotechnology (MNT), building things by putting them together atom by atom. I call that "minting," and the wonderful thing is that a mint will be able to replicate itself, using common, cheap chemical feedstocks. Houses and cars will be compiled seamlessly out of diamond (carbon, currently clogging the atmosphere) and sapphire (aluminum), because they will be cheap appropriate materials readily handled by mints. It's not clear, however, if one-size-fits-all universal assemblers will be feasible, at least in the near future; some mints might be specialized to compile carbon compounds, others to piece together aluminum (into sapphire) or tungsten-carbide structures, requiring assembly at a coarser level. These dedicated mints will operate at successively higher temperatures, each requiring a totally different chemistry (feedstock, tool-tips, energy sources). "Whether you can use one level of MNTing to enable the next higher level," notes one commentator, "remains a very open question."
Until recently, all nanotechnology was purely theoretical. A Rand Corporation study declared cautiously: "Extensive molecular manufacturing applications, if they become cost-effective, will probably not occur until well into the far term. However, some products benefiting from research into molecular manufacturing may be developed in the near term. As initial nanomachining, novel chemistry, and protein engineering (or other biotechnologies) are refined, initial products will likely focus on those that substitute for existing high-cost, lower-efficiency products." The engineering theory was good, but the evidence was thin. Finally, though, at the end of November 1999, came a definitive breakthrough, harbinger of things to come. Researchers at Cornell University announced in the journal Science that they had successfully assembled molecules one at a time by chemically bonding carbon monoxide molecules to iron atoms. This is a long way from building a beefsteak sandwich in a mint the size of a microwave oven powered by solar cells on your roof (also made for practically nothing by a mint), but it's proof that the concept works.
If that sounds like a magical world, consider Kurzweil's 2030. Now your desktop machine (except that you'll probably be wearing it, or it will be built into you, or you will be absorbed into it) holds the intelligence of one thousand human brains. Machines are plainly people. It might be (horrors!) that smart machines are debating whether, by comparison with their lucid and swift understanding, humans are people! We had better treat our mind children nicely. Minds that good will find little difficulty solving problems that we are already on the verge of unlocking. Cancers will be cured, along with most other ills of the flesh.
Aging, and even routine death itself, might become a thing of the past. In October 1999, Canada's Chromos Molecular Systems announced that an artificial chromosome inserted into mice embryos had been passed down, with its useful extra genes, to the next generation. And in November 1999, the journal Nature reported that Pier Giuseppe Pelicci, at Milan's European Institute of Oncology, had deactivated the p66shc gene in mice, which then lived thirty percent longer than their unaltered kin, without becoming sluggish! A drug blocking p66shc in humans might have a similar life-extending effect.
As well, our bodies will be suffused with swarms of medical and other nano maintenance devices. The first of three magisterial volumes detailing how and why medical nanorobots are in mid-range prospect appeared at the end of 1999: Dr. Robert A. Freitas Jr.'s Nanomedicine. Nor will our brains remain unaltered. Many of us will surely adopt the prosthetic advantage of direct links to the global net, and augmentation of our fallible memories and intellectual powers. This won't be a world of Mr. Spock emotionless logic, however. It is far more likely that AIs (artificial intelligences) will develop supple, nuanced emotions of their own, for the same reason we do: to relate to people, and for the sheer joy of it.
The real future, in other words, has already started. Don't expect the simple, gaudy world of Babylon 5 or even eXistenZ. The third millennium will be very much stranger than fiction.
Walking into the future
To get a firmer idea of the reasoning that underlies these apparently reckless claims, consider the ever-accelerating rate at which people have been able to travel during the last three hundred thousand years (or the last three million, if you're willing to accept a generous definition of humankind).
For very much the largest part of that span, we were limited to walking pace, with long rests. Some six thousand years ago we borrowed the lugging power of asses, then the strength and endurance of other large animals, finally coupling small ponies to war chariots in the second millennium b.c. Breeding horses large enough to ride took many centuries more. In other forms of transport, dugout canoes, then boats, and finally ships with sails went as fast as arms could paddle, or winds, captured fairly inefficiently, blow.
Less than two hundred years ago, steam trains sent our ancestors hurtling on rail at twenty or thirty kilometers per hour. Cars made faster speeds commonplace within the living memory of the elderly, especially as roads improved (at prodigious cost, financially and to the shape of the landscape). Prop aircraft flew at a few hundreds of kilometers per hour. Within decades, jets flew ten times that fast, and by the 1960s rockets took astronauts into space at tens of thousands of kilometers per hour. Today, using "virtual presence" on-line simulation systems, we are on the verge of "being there" (in a limited but vivid and interactive sense) at the speed of light. And that's the end of the line: you can't get faster than the velocity of light.
Mapped on a graph, this progression shows a long flat rise, turning slowly upward, then climbing more sharply, and faster again…and now its dotted projection seems to soar dizzyingly toward a veritable Spike.
The brain on your desk
That same headlong acceleration applies, as we are now uncomfortably aware, to the speed, power, and cheapness of computers. Computer-power-per-dollar currently doubles every eighteen months, or perhaps as swiftly as every year. Growth in computing power is already exponential, maybe hyperexponential.
Starting small, with one or two special highly secret vacuum-tube computers during the Second World War, the computer presence sluggishly increases to bulky, cantankerous devices in a few rich universities, and then some clumsy IBM mainframes in large businesses, and then the big vulnerable tubes get replaced by transistors, by integrated circuits, and before you know where you are it's the late 1970s, early 1980s, and home enthusiasts are buying their first Macs and PCs, and the prices continue to fall, and meanwhile the military and NASA are funding superfast giant machines running in a bath of liquid helium to keep them cool enough to function, and the curve is getting steeper and steeper…
It is the fable of the Chessboard brought to life: one grain on the first square, two on the second, four on the thirdand by the time we reach the sixty-fourth square, we groan beneath a deluge of rice.
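The fable is easy to verify directly. Two lines of Python (just the fable's arithmetic, nothing more) show how staggering the final squares become:

```python
# One grain on square one, doubling on each of the 64 squares:
grains_on_last = 2 ** 63        # the sixty-fourth square alone
total_grains = 2 ** 64 - 1      # every square on the board added together

print(f"sixty-fourth square: {grains_on_last:,} grains")
print(f"whole board:         {total_grains:,} grains")
```

The last square alone holds over nine billion billion grains, more rice than the world has ever harvested, and the whole board holds almost twice that again.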
Computing power that is developing with such acceleration may be able to emulate human intelligence within thirty or forty years. A century, tops.
At that point, if the chart of the Spike is telling us the truth, we (or our children, or our grandchildren) may see machines with twice our capacity within a further eighteen months, then four times our capacity within a further year and a half, and…
Intelligence will have Spiked. It won't be our human intelligence, but we will be borne along into the Spike with it.
Computing power and speed of travel are just two examples of runaway progress. The most exciting prospect, one that convinces scientists who assess the evidence for a coming Spike, is that other disciplines will have Spiked at about the same time: medical research into aging, cloning and genome manipulation, miniaturization of high-tech products until they reach molecular or even atomic scales (nanotechnology), and more.
So the world of the Spike will be marked by
• augmented human abilities, made possible by connecting ourselves to chips and neural networks that are not in themselves aware but can amplify our native abilities…
• human-level Artificial Intelligences (AIs), swiftly followed by hyperintelligent AIs…
• DNA genome control, which gives us the capacity to redesign ourselves and our children, enhancing not just mind but every bodily and emotional pleasure and aptitude…
• nanotechnology machines, including AIs, built from the atom up, among them extremely tiny self-replicating devices no larger than molecules…
• extreme physical longevity or even (in effect, barring accident) immortality, due to a blend of: the new understanding and control of our genetic inheritance, including apoptotic "suicide genes" that may limit lifespan by restricting the number of times most cells can be repaired by self-replication; nanotechnological medical repair systems that live inside the body from birth and keep cells rejuvenated and free of disease, including cancers; "backup" copies of our memories maintained in machine storage in case of damage to the brain, or permitting organ or tissue cloning and replacement of lost knowledge and experience in the extreme case of severe physical damage to the body/brain…
• "uploads" or transfers of human minds into computers, so that we can live, work, and play inside their rich and manipulable machine-generated virtual realities…
• possible contact with galactic civilizations that have already gone through the Spike transition, including such extreme prospects as ancient extraterrestrial cultures so powerful that they have long ago restructured the visible universe (or rewritten the laws of quantum mechanics)…
First glimpses of the singularity
The core notion in these forecasts was first described metaphorically as a technological Singularity (although others had anticipated the insight, as we shall see in a moment) by Professor Vernor Vinge, a mathematician in the Department of Mathematical Sciences, San Diego State University. Why this curious and unfamiliar term "singularity"? It's a mathematical point where analysis breaks down, where infinities enter an equation. And at that point, mathematics packs it in. A black hole in space is a kind of spacetime example of this rather abstract pathology. Hence, cosmic black holes, those ultimate mysteries with interiors forever beyond our exploratory reach, are also known as "singularities." "The term 'singularity' tied to the notion of radical change is very evocative," Vinge told me, adding: "I used the term 'singularity' in the sense of a place where a model of physical reality fails. (I was also attracted to the term by one of the characteristics of many singularities in General Relativity, namely the unknowability of things close to or in the singularity.)"
For Vinge, accelerating trends in computer sciences converge somewhere between 2030 and 2100 to form a wall of technological novelties blocking the future from us. However hard we try, we cannot plausibly imagine what lies beyond that wall. "My 'technological singularity' is really quite limited," Vinge told me. "I say that it seems plausible that in the near historical future, we will cause superhuman intelligences to exist. Prediction beyond that point is qualitatively different from futurisms of the past. I don't necessarily see any vertical asymptotes." So enthusiasts for this perspective (including me) are taking the idea much further than Vinge. Humanity, it is argued, will become first "transhuman" and then "posthuman." Under either interpretation, and unlike many currently fashionable debates, Vinge's singularity is an apocalyptic prospect based on testable science rather than religion or New Age millenarianism.
While Vinge first advanced his insight in works of imaginative fiction, he has featured it more rigorously in such formal papers as his address to the Vision-21 Symposium, sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, 30-31 March 1993. Professor Vinge opened that paper with the following characteristic statement, which can serve as a fair summary of my own starting point:
"The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."
This remarkable prospect has not gone unnoticed among academics. University of Washington neurophysiologist Dr. William Calvin has discussed Vinge's claims in a vivid paper, "Cautions on the Superhuman Transition." Experts who have addressed the Singularity as specialists in fields such as AI research and nanotechnology include Dr. Hans Moravec, director of the Carnegie Mellon Mobile Robot Lab, Dr. Eric Drexler, director of the Foresight Institute, and Dr. Ralph Merkle, principal research fellow at Jim von Ehr's nanotechnology company Zyvex and before that at Xerox's celebrated Palo Alto Research Center. Gregory S. Paul, a paleontologist, and Earl Cox, an authority on the design and application of fuzzy logic systems, assert in Beyond Humanity: "The next century promises to be a hyper shock future, nothing like the contemporary world, nothing like what we humans are used to, or have grown to expect. All the concerns we have today, and all the plans we are making to meet them, will be swept away by the changes that are likely in the next century and those that follow, changes we have thought would take centuries, millennia, or even millions of years to come to pass."
Dr. Gregory J. E. Rawlins, of the Indiana University Computer Science Department, has published two books relevant to the Singularity. Moths to the Flame: The Seductions of Computer Technology (1996) can even be read on-line. Rawlins says bluntly: "We seem headed for our own starbirth, drawn to it just as inexorably as [interstellar] grains merge [to form stars], accelerating toward it just as surely as the merging accelerates. The attraction is massive, relentless, unstoppable. When our starbirth comes, some of us will no longer be truly human; and things we now call machines will no longer truly be machines." Rawlins is no undiscriminating booster of technology, warning: "What's special today is that because of the computer, an undifferentiated intelligence amplifier, our technology has nearly reached critical mass and is now juggernauting us around the dance floor at such a pace that we may never again be able to stop and catch our breath. Now prisoners of the dance, we're moths irresistibly attracted to the flame of technology. Prometheus, disguised as a scientist, has given us that flame. But fire also burns." Neither, though, is he a pessimist in this extraordinary moment of transition:
We, all of us, are part of the most thrilling adventure ever unleashed on planet earth. Instead of looking backward in anger and fear, let's look forward to the next dance step in the adventure we're crafting for ourselves. A century or so from now, the earth may simply be the home world of a species rich and strange, a fiercely new and amazingly interesting speciestranshumanity. The human adventure is just beginning.
Postcyberpunk writers such as Neal Stephenson and Bruce Sterling are developing this prospect in fiction. There is a vast, ongoing discussion among special interest groups on the Internet, extropian and other transhumanists (doughty foes of entropy; we'll get back to them), spearheaded by people like Oxford-trained philosopher Dr. Max More who, as a token of dynamic optimism, moved to California and changed his name from the less iconic Max O'Connor. They expect to be around during the Spike and genuinely hope to partake in an extraordinary upheaval: the transition, ultimately, to new forms of sentience and life, in the company of the post-Spike superintelligent machines. Dr. Ray Kurzweil, himself an extropian, has recently spread this presumption with remarkable success, in his book The Age of Spiritual Machines and frequent appearances in every conceivable medium, from articles and interviews in Wired magazine to stories in business journals such as Technology Review and Business 2.0, the Discovery Channel, and such popular American television shows as CBS's 48 Hours.
Nor is the idea altogether new. The important mathematician Stanislaw Ulam mentioned it in his "Tribute to John von Neumann," the founding genius of the computer age, in Bulletin of the American Mathematical Society in 1958. Another notable scientific gadfly, Dr. I. J. Good, advanced "Speculations Concerning the First Ultraintelligent Machine," in Advances in Computers, in 1965. Vinge himself hinted at it in a short story, "Bookworm, Run!," in 1966, as had sf writer Poul Anderson in a 1962 tale, "Kings Who Die." And in 1970, Polish polymath Stanislaw Lem, in a striking argument, put his finger directly on this almost inevitable prospect of immense discontinuity. Discussing Olaf Stapledon's magisterial 1930 novel Last and First Men, in which civilizations repeatedly crash and revive for two billion years before humanity is finally snuffed out in the death of the sun, he notes:
But let us keep in mind…another vision, in which the species' cataclysmic degeneration is not so profound…the ascent that follows exponentially from this premise would surpass the capacities of any artist's imagination. This means that even if the fate of humanity is not at all tragic, we are incapable of plausibly foreseeing, in the very distant future, different qualities of being, other than the tragic… But the existence of future generations totally transformed from ours would remain an incomprehensible puzzle for us, even if we could express it.
This is exactly Vinge's insight: that such exponentially cumulative change puts the future quite literally beyond our capacity to foresee it. The difference is that Vinge realizes how swift this change will be. It won't require humanity to await "the very distant future." But I suspect Lem, too, knew this, for he added:
It is a law of civilizational dynamics that instrumental phenomena grow at an exponential rate. Stapledon's vision owes its particular form and evenness to the fact that its author ignores this law… Technological development is an independent variable primarily because its pace is a correlative of the amount of information already acquired, and the phenomenon of exponential growth issues from the cross-breeding of the elements of the mass of information.
What's more, Lem understood the key factor that makes it so hard to extrapolate from what we know to what will actually come about the day after tomorrow. These innovations interact. You can't just alter one element of the world and leave the rest unchanged. Lem commented: "[T]he moment of the chromosome structure's discovery cannot be separated by 'long millennia' from an increase in knowledge that would permit, for example, the species to direct its development." This is a truth that stung us even before the close of the twentieth century, and now promises to utterly remake the twenty-first. From the moment the genome is finally mapped and its recipe understood, we shall begin to reshape ourselves, at first bit by bit but eventually, perhaps, entirely. What then? Lem even preempts Vinge's metaphor of an event horizon of prediction, noting "the real factors of exponential growth, which obstruct all long-range predictions; we can't see anything from the present moment beyond the horizon of the 21st century." Meanwhile, in the West, postindustrial sociologist F. M. Esfandiary, also known as FM-2030, asked Are You a Transhuman? (1987). And surely the most charming, preposterous pitch ever published is Professor Robert C. W. Ettinger's opening line to his 1972 book Man into Superman: "By working hard and saving my money, I intend to become an immortal superman."
But is it really going to happen? And should it?
Is all this enthusiasm for accelerating change, as routine and doom-savoring journalism often proclaims, just science run mad?
Is it nothing more than what Ed Regis, in his very funny book Great Mambo Chicken and the Transhuman Condition (1990), sardonically dubbed "fin-de-siècle hubristic mania?"
That would be a comforting and dismissive diagnosis, of course. Mockery is one way to close your eyes against that opaque, looming, onrushing future. But mockery or outrage is often the flip side of anxiety, if we're candid about it.
Plenty of doubters still confidently warn us that scientists are "playing God" or "interfering with Nature" when, let's say, a sheep is cloned, or molecular biologists manipulate somatic genes to cure cystic fibrosis (CF), or just map and file sequences for the Human Genome Project. Similar fear, one recalls, used to be directed at wild-eyed locomotive experimenters hurtling along at a breakneck twenty kilometers an hour.
Today, granted, there's far more reason to be nervous. CF remedies and cloned monkeys today, perfect teeth and enhanced IQ tomorrow, genes purchased and installed in the womb, or, trash television's favorite, the serried ranks of semihuman military clones. Still, it's quite impressive how some people have no trouble knowing precisely what God had in mind for every eventuality, including some just thought up yesterday morning in the lab.
Aside from this perpetual cry of frightened outrage from the excessively pious, many of us go into a self-inflicted cringe at the drop of a quark, or the sight of a diagram showing the coiled DNA helix like some ornate decoration from an altar in St. Peter's in Rome. Probably you'll resent me spelling this out, but look deep inside your heart and swear you've never said resentfully to yourself: "There Are Some Things the Average Man (and Woman) Are Too Dumb to Know."
All those hideous, bristling equations. Those ferocious laws. The cascade of principles that nonscientists (I'm one too) will never start to understand in any depth, or even in any shallowness: relativity and quantum mechanics, chaos and thermodynamics, energy and entropy, theories of games and probabilities, recombinant genetics, oncogenes, T suppressor cells, melting nuclear reactors, the poisoned food chain, the crisping ozone hole, and that increasingly scary greenhouse effect.
But while we don't understand what the scientists are talking about, we certainly do know in our bones that their secret language encodes the future. That's why I've written this book: to explain how the Spike really is likely to happen, to you or your children, without going into equations, or the mysteries of the gene, or probability theory, or the mathematics of black holes…
Still, should we, here and now, care about the Spike? Plenty of wise men beg to doubt it. (I haven't heard any negative comments from wise women yet, perhaps because they're accustomed to thinking in generational time.) One of those wise men is Barry Owen Jones, a former minister for science in the Australian Parliament, and all-around guru of the future, or at least his version of the future. Jones is a world-class polymath with honorary doctorates in science and letters, in command of vast amounts of knowledge in biography (author of The Dictionary of World Biography, 1994), the arts, the sciences, and industry. His book Sleepers, Wake! (1982, revised 1995) was among the earliest to warn of the impact of the new global economy and the technological changes underpinning it. He was a UNESCO Executive Board member in Paris and founding father of a national Commission for the Future (now defunct, alas), among the world's first official institutions to take the greenhouse effect seriously and try to work out what might be done to mitigate its impact.
His estimate of the likely immediate future is measured. In the 1970s, he notes, he "took an optimistic view of a future in which, as [Nobel physics laureate] Dennis Gabor wrote, Mozartian man (and woman) can evolve. The ending of the Cold War ended the threat of world war and nuclear confrontation, and there has been a political transformation…Nevertheless, population explosion, the rise of tribalism and religious fundamentalism and excessive resource use by the West suggest that we have not yet learnt to solve global problems in a rational way."14 So Jones's response to the impending Spike, and the devastating, enthralling social impact it'll have upon us all, our children and grandchildren, is very odd. He is just not interested. "The long range future is 'unimaginable' because of the impossibility of establishing psychological engagement."
We're being reminded, in effect, that nobody in Leonardo da Vinci's day needed to get in a lather about military helicopters and submarines, because they'd remain nothing more than drawings in the old genius's sketch pad for another few centuries. Why should we care about nanotechnology, when right now it's no more than a few hundred equations, a few hundred designs on a CAD screen for gears and motors at the atomic scale, a few early-model submicro devices such as microlithography pens, sensors? Mr. Jones's question is not just "why should we care?" but "how could we care?"
He spells this out with a sporty analogy, the kind politicians love to use when they instruct us in matters too hard for our poor little heads to take in: we can speculate about the winner of next year's football final, Mr. Jones declared near the close of the twentieth century, "or the U.S. presidential election in the year 2000, but discussing the football or politics of 2100 is too remote for serious consideration."
Is that persuasive? So much, after all, for those bothersome concerns about the greenhouse effect, which certainly won't be impacting critically upon any U.S. presidential elections in the immediate future. Even in the most dire forecasts, hothouse carbon warming isn't likely to upset the football in the developed world for at least fifty more years.
The state of the world in the year 2100 is "too remote for serious consideration"? Suppose it is. Let's ignore the fact that if medical and geriatric improvements continue to multiply at the current rate, some of us might be alive and kicking and in good health in another century's time. Put aside the possibility that even if the research miracles arrive too late to repair and sustain the youngest of us, our children will certainly stand a good chance of surviving into the year 2100.
Leave all that to one side. The oddest aspect of any easy dismissal of the looming curve of the Spike is simply this: we probably won't need to wait until the start of the next century to be swept up by its escalator. The date proposed by most of the scientists who advance the notion of an impending technological Singularity is around 2035. Less than the distance into tomorrow that's elapsed since my last day at high school, back at the dawn of the 1960s.
The Spike starts its upward curve
The Spike was first glimpsed in 1953, not by wild prophets in the desert but by American armed forces personnel working for the Air Force Office of Scientific Research. They wanted to map the path of likely change in their aircraft and missiles.
After the Second World War, all bets on possible developments in the air were off. Radar had altered warfare on the ground and in the sky, and jet propulsion was obviously going to replace the propeller-driven aircraft flown since the Wright brothers. Rockets had fallen sickeningly on London, and now their designer, Wernher von Braun, was shooting for the stars under the auspices of his former enemies.
In fact, while reaching the stars was his long-term goal, his immediate project as technical director and then chief of the U.S. Army ballistics weapon program was the delivery of heavy nuclear devices, if push came to Cold War shove, into the Soviet Union and its reluctant allies. Everything was getting faster and more powerful and more deadly; it was a clear trend. You could map it on graph paper.
Curves and trends
In fact, that's what trends are: imaginary connections drawn on suitable charts, smooth lines linking the points of inflexion of a series of S-shaped, or sigmoid, curves. Sigmoids start slow, go into a phase of rapid acceleration, then twist over and go flat again when the thing they're mapping reaches saturation. The lower half of the S climbs ever more steeply, the top half bends away toward the flat, and the place midway up, where the curve stops speeding up and starts slowing down, is called "the point of inflexion." These curves might register rates of achievement in some enterprise. Speed, say. You can sketch the data points into a curve.
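For readers who like to see the machinery: the sigmoid just described is, in its textbook form, the logistic curve, and the behavior of its inflexion point can be checked in a few lines. A minimal sketch in Python (the function names and parameters are mine, purely illustrative):

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Classic S-curve: slow start, rapid middle, flat saturation."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def growth_rate(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Derivative of the logistic: r * y * (1 - y / ceiling)."""
    y = logistic(t, ceiling, rate, midpoint)
    return rate * y * (1.0 - y / ceiling)

# The growth rate is largest at the midpoint (the point of inflexion)
# and falls away symmetrically on either side of it.
rates = {t: growth_rate(t) for t in (-4, -2, 0, 2, 4)}
```

Plot `rates` against time and the familiar S emerges: the curve accelerates up to the midpoint, then decelerates toward its ceiling.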
While the earliest steam car was built in Peking in 1681 by a Jesuit missionary, it wasn't until 1804 that Richard Trevithick hauled ten tons of cargo, plus threescore men and ten, at a dizzying eight kilometers an hour with his engine. A true railway train was built by George Stephenson in 1814, and his locomotive was first opened to the public in 1825. It was not, I think it's fair to say, astonishingly swift.
Things improved. By the 1840s (that is, in less than a generation) an express rail trip from London to Exeter took less than a third of the time you'd require to jolt there by stagecoach. Within another two generations, trains ran between England and Scotland at 100 kilometers per hour. In 1955, a French electric locomotive reached 330 kilometers per hour, pretty nifty, and apart from a few special "bullet trains" this kind of mad dash is seldom equaled in ordinary commercial traffic nowadays.
In effect, the curve representing travel by rail rises quite dramatically after the invention of the steam train, more or less peaks within a century, and then chugs along at a pace set by the ever more mordant costs of improving and maintaining smooth tracks, installing reliable signals, then being beaten about the head and shoulders for market share by those automobiles and the vast corporations who chose to turn from rail to road and air for their haulage. It's an S-shaped curve.
One can chart the similar rise in speed of the gasoline-powered car, from Tin Lizzies on corrugated muddy country roads to Hyundais and Porsches purring down autobahns, or, more likely, stuck for infuriating minutes at a time in gridlock. Your car might be capable of 300 kilometers an hour but you'll never see it on your speedometer except by hiring time on a specialist racetrack. The curve for aircraft starts later and rises more sharply, but its rate of climb, too, falls off with the decades. The curve always flattens again.
Flying into the future
What's happening in all these cases is simple: technical solutions are tried for the many and unsuspected problems that need to be dealt with in a new medium, using mechanisms at first balky and makeshift but quickly brought to heel and greatly improved, if never quite perfected. So your chart of modes of transport and their success stories reveals a series of graceful rising trajectories, each of them bending inevitably toward the horizontal as the costs outrun the benefits, flattening when factors such as weight and available power and air resistance and load and safety and environmental impact and a hundred other criteria and parameters dominate the mix, the equation. Within a century, or even more rapidly these days, engineering wizardry turns scientific or technical breakthroughs from working machines to accomplished elements of daily life, and everything more or less…stabilizes.
If you wished to get from Europe to the United States with least time wasted in an aluminium tube, you paid premium prices and flew the Concorde faster than sound, but there's no way anyone other than a fighter pilot will get there faster than that, unless hundreds of billions of dollars get pumped into a project to build a ballistic aircraft that burns all the way free of the atmosphere on rocket engines and comes down like a flying brick, as the Space Shuttle does.
But note: this sequence of independent curves makes an intriguing pattern of its own. Actually, the curves are not strictly independent, since they derive from the same engineering technologies that feed back into each other to some extent. Still, a train is not a car, and a car certainly isn't a plane, and you can't get to orbit or the Moon in a jet. So perhaps it is genuinely startling to find that a second-order line can be drawn through the points of inflexion of these several only loosely linked histories of transport.
That higher-order curve, then, is a trend, for increase of speed as a whole, speed available for human use, in human transportation. It starts, as we noted earlier, at walking pace and stays there for what seems like eternity. It kicks up with the taming, and husbandry, of horses and their cousins. Wind and steam and gasoline goose it again and again into overdrive, until with the arrival of spacecraft the curve is headed almost straight up to the top of the page. It's a trend, a metatrend, that looks as if the hand of God is pushing us along, with one invention peaking just as another comes along to take over the baton.
Is it a true trend? Well, yes and no. We have been moving faster and faster, just as the trend curve shows. But it can also be argued that it is, sorry, nothing more than an artifact, an illusion, a pen stroke run through a series of histories that could in principle have been connected quite differently if they'd been made in a different culture from ours. One thinks of the Chinese invention of gunpowder, used for centuries as fireworks instead of firearms.
Still, once you've drawn that curve, once you've sketched it on logarithmic paper in which each vertical interval stands for ten times the speed of the one below it…why, you get a shivery feeling. Perhaps the line truly is telling you something. Perhaps it's a kind of…well, a hunchy predictive device.
Successful trend predictions
At any rate, that's what the U.S. Air Force guys figured back in 1953 as they charted the curves and metacurves of speed. They kept the curve running, let it press forward. It told them something preposterous. They could not believe their eyes. The curve said they could have machines that attained orbital speed…within four years. And they could get their payload right out of Earth's immediate gravity well just a little later. They could have satellites almost at once, the curve insinuated, and if they wished (if they wanted to spend the money, and do the research and the engineering) they could go to the Moon quite soon after that.
So the trend curve said. But, of course, trend curves are just optical illusions, created and warped by the partial, selected information you care to put into them.
Everyone in 1953 knew we could never get into space that quickly. Even the wildest optimists hoped for a lunar landing no sooner than the year 2000 (that fabled signifier of the impossibly remote future).
The curve, however, as you know, was right on the money. Russia sent Sputnik into orbit in October 1957, and Armstrong said his little sentence on the Moon less than twelve years later. It was close to a third of a century sooner than loony space travel buffs like Arthur C. Clarke (or so conservatives had painted them, and now had to look away abashed) had expected it to occur.
Was the trend for speed actually exponential? It was starting to look like it.
Tracking the trend
Forty years ago, I learned about the trend curves that seemed to be dragging us inexorably into some kind of Spike from a popular science article by an engineer, the late G. Harry Stine. Under the pen name Lee Correy, Stine also wrote rather stolid science fiction, but he made his living as an innovations promoter, managing scientific research. No laboratory drone, he saw himself as a synthesist of cutting-edge ideas and practices. He subsequently published books promoting the concept of Solar Power Satellites to beam us down cheap electricity in microwave form. His 1961 article was a deliberately provocative slap at his fellow speculative writers, usually regarded by sober citizens as lunatic technophiles. Stine denounced these specialist dreamers and extrapolators for their stick-in-the-mud conservatism.
Look at the curves! Stine cried in effect. What's wrong with you? Are you all blind? A year later, in the wonderful nonfiction book Profiles of the Future, Arthur C. Clarke diagnosed this same defect as Failure of Nerve, and coupled it with another crime, Failure of Imagination. Stine was determined to fall victim to neither failing. The trends were going asymptotic, he pointed out.
When you're tracking the patterns formed in the twentieth and twenty-first centuries by data on available energy, or transport speed, or numbers of people in a world afflicted with unchecked overpopulation, you can get some hair-raising and very weird results. Too weird to be true. In 1973, sociologist FM-2030 predicted faxes, satellite cell phones, and something like the Internet (good going!) together with the feasible-but-impossibly-expensive "hypersonic planes projected for the late 1980s" that would "zip you anywhere on the planet in less than forty minutes," not to mention the wildly extravagant hope "that by 1985 we will be able to postpone aging in a dramatic way" and on to the truly fatuous: "The use of artificial moons or satellites to control tides and floods." Curves are tricky things to interpret realistically.
"If you really understand trend curves," Stine wrote with a perfect poker face, in 1961, "you can extrapolate them into the future and discover some baffling things. The speed trend curve alone predicts that manned vehicles will be able to achieve near-infinite speeds by 1982." Perhaps that seemed safe enough back then. Two decades away. Anything could happen in twenty years. Perhaps the horse would talk. To tell the truth, Stine was concerned that this prediction might be too conservative. "It may be sooner. But the curve becomes asymptotic by 1982."
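How can a trend curve "become asymptotic" at a real calendar date? If growth is faster than exponential, say, each doubling arrives in half the time of the one before, then infinitely many doublings squeeze into a finite span of years, and the fitted curve goes vertical at a definite date. A toy sketch (the numbers are invented for illustration; they are not Stine's data):

```python
def singularity_date(start_year=1900.0, first_doubling=40.0, shrink=0.5):
    """Year at which a faster-than-exponential trend goes vertical.

    If the first doubling takes `first_doubling` years and each later
    doubling takes `shrink` times as long as the one before, the doubling
    intervals form a geometric series summing to
    first_doubling / (1 - shrink) years.
    """
    return start_year + first_doubling / (1.0 - shrink)

# With these made-up parameters the curve becomes asymptotic in 1980:
# an infinity of doublings has been squeezed in before that date.
year = singularity_date()
```

Change the assumed parameters slightly and the vertical date slides by decades, which is one reason such extrapolations are simultaneously seductive and treacherous.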
You have probably noticed that this did not happen, except on the television and movie screens when starships routinely travel at Warp Speed or burn through wormholes from one side of the galaxy to the other. It's quite disappointing. So where was the flaw in Stine's case? Surely it wasn't simply that we didn't yet know how to do such things. Science keeps learning new and astonishing things about the world. We bump into discontinuities, and have to reformulate our theories, or the theories lead us into novel facts. Sociologists of science, as everyone now knows due to the term's misappropriation by New Agers, dub these major shifts "paradigm changes."
Stine was expecting some very big paradigm changes.
Led astray by his transport-speed trend, he had noted: "If this is really the case, a true scientific breakthrough of major importance must be in the offing in the next twenty years." But how could such an infinitely fast vehicle be propelled? Look, say the trend curve had got a little confused, mistaking Newtonian physics for the more up-to-date Einsteinian variety. Perhaps we would settle for close to the speed of light, the best one can hope for in a universe where nothing material can go faster than light? But that costs a lot; it takes plenty of propellant. Pushing a hundred-ton starship up to 99.99 percent of the speed of light, as close to infinite speed as we're likely to reach any time soon, you have to pump in so much energy that the ship is a kinetic bomb carrying more than 220 million megatons locked up in its inertial mass. Energy equals mass, remember, and that's how much brute energy it takes to get a hundred tons moving that fast. Luckily, explained Stine, that's okay! "The trend curve for controllable energy is rising rapidly…By 1981, this trend curve shows that a single man will have available under his control the amount of energy equivalent to that generated by the entire sun" (his italics).
Oh dear. Oh dear oh dear. I don't think so. It would have taken more than a genuine "cold fusion" breakthrough to bring that one off. It would have taken a tame black hole in your tank.
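The starship arithmetic above, at least, roughly checks out. A back-of-envelope verification using the relativistic kinetic energy formula, (gamma - 1)mc², and assuming a hundred metric tons:

```python
import math

C = 299_792_458.0      # speed of light, m/s
MEGATON = 4.184e15     # joules per megaton of TNT

def relativistic_ke_megatons(mass_kg, beta):
    """Kinetic energy (gamma - 1) * m * c**2, in megatons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2 / MEGATON

# A 100-metric-ton starship at 99.99 percent of lightspeed:
ke = relativistic_ke_megatons(100_000.0, 0.9999)
# gamma is about 70.7, so ke comes out near 1.5e8 megatons.
```

That is the same hundred-million-megaton order of magnitude as the figure quoted in the text; the precise number depends on assumptions about the ship's mass and exact final speed.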
If this goes on…
Stine's trend curves, in other words, were misleading. They were just what they seem to be: idle curves linking four or five quite disparate historical trajectories. Athletes and swimmers have surpassed previous records every year on the dot, as they eat better and train more cunningly and indeed come into the world as bouncing world-beaters, having enjoyed optimum medically guided conditions in the womb. It doesn't mean that some day an Olympic-class runner will smash the tenth-of-a-second mile record set an instant earlier by her trainer-sponsored rival.
Does this splash of cold water mean that the Spike, the technological Singularity curving up there in the mid-twenty-first century, is nothing better than a mirage? Not at all. It wasn't sheer fantasy Stine was retailing, after all, just a rather trusting (or wickedly goading) application of the principle "If this goes on."
Plenty of things are going on and will not stop before humankind and our world are changed forever.
In a tiptoeing sequel to his original article, published in 1986, Stine avoided any mention of his most preposterous trends and settled for merely utopian expectations:
We can't close the Pandora's Box of technology. Technology is never forgotten; it's only replaced by better technology. Because we can't put the thermonuclear bomb, recombinant DNA, and a host of other technological wonders back into Pandora's Box and forget them, we must deal with them. It's not easy.
He added: "A hundred years from now, barring an incredible combination of bad luck and poor management, people everywhere will be many, rich, and largely in control of the forces of nature. I've got faith in the capabilities of human beings. We'll make it. Therefore, we must learn how to be rich and handle abundance because we've never had to do it before."
Probably this less extravagant forecast would raise no eyebrows among comparatively conservative soothsayers. In 1996, a team from British Telecommunications made its futurological report for the period to 2020. The average Western life span by that date would be a century. Self-programming computers were expected as early as 2005, as was full voice interaction with machines. AIs emulating the human brain might exist by 2016, and by the same date genetic links to all diseases will have been mapped from the decoded DNA template so everyone will carry an individual genome record wired into a personal health card. These projections were modest, befitting a vast corporate institution like BT.
Among more adventurous observers, it has been forecast that the multibillion-dollar international Human Genome Project might turn up a cure for death itself, if mortality turns out to be governed by a tractable number of genes. How likely is that? Death, after all, results from many converging factors: genetic trade-offs, oxidative stress from metabolism itself, accumulated physical damage from an abrasive outside world. Still, University of Michigan gerontologist Richard Miller was cited in 2000 as declaring that senescence, or aging, is "a single, fairly tightly controlled process that has a relatively small number of genes timing it." In June 2000, the private corporation Celera and the public Human Genome Project consortium jointly announced the completed initial map of the sequence of human DNA, along with the genetic instruction sets of several lesser creatures, for purposes of comparison. Reading the cookbook is not the same as knowing how to make the cake, granted, but it is the crucial first step to changing the recipe.
Cynthia Kenyon, Herbert Boyer Distinguished Professor of Biochemistry and Biophysics at the University of California, San Francisco, has reported that the life span of a kind of nematode worm, Caenorhabditis elegans, had been increased manifold just by mutating several genes that control the creatures' rate of metabolism. This is a version of a trick they can induce themselves during lean times, putting themselves into a sluggish state of arrest known as dauer. The standard maturation signal gene daf-2 is switched off during the emergency, allowing the expression of another gene, daf-16, that extends sleepy life span. Kenyon modified daf-2, creating a longer-lived, vigorous worm. C. elegans are simple critters, with just 959 body cells compared to the hundred trillion (10^14) we are made from (although those hundred trillion comprise only 254 different cell types), but it's an extraordinary proof of what is possible. Early in the twenty-first century we will know how to locate similar genes in humans (daf-16 resembles mammalian HNF3 genes), how to edit them, perhaps how to switch them on and off, or defer and moderate their influence. The worms can do it themselves just by ignoring their environment; in 1999, Kenyon and a colleague announced in Nature that simply depriving the creatures of genes for the senses that provide feedback from their surroundings yielded a 50 percent increase in life span. Even more remarkably, in September 2000, Science announced that researchers at Emory University and biopharmaceutical company Eukarion, Inc. had boosted C. elegans life span by the same amount just by adding synthetic versions of two enzymes, superoxide dismutase and catalase, which combat oxidative stress.
Is immortality around the corner?
Defeating death. Endless youth. Have we taken leave of our senses? Just so, according to former counterculture rebel Richard Neville, now an alternative futures spokesperson for the New Age. In a millennial alphabet published on January 1, 2000, he commented sarcastically (articulating the qualms of many, I suspect): "D is for Death, whose abolition by natural causes is now considered achievable, even by experts not known to be mad. Such could not be said of those actually seeking to inhabit their body forever." Are we back, after all, to Stine's ridiculous straight-up-the-page curves? Or are they, too, properly decoded as a hint of the Spike, not totally ridiculous after all?
Technology will certainly remedy many intractable medical conditions (as it has begun to do), by allowing damaged DNA to be repaired or bypassed.
Cancer, for instance, paradoxically afflicts those tissues that constantly repair themselves: colon, uterus, milk ducts, the skin on both the outside and inside of our bodies. Tumors can live forever, unlike regular tissue. It is, one might say, the downside of immortality. Mastering the cellular repair system that goes haywire in carcinoma might be only a few short steps from drugs for longevity. Dr. Robert Weinberg, a notable and productive oncology researcher, argues that evolution and Darwinism (at the gene level) turn out to be the key, perhaps surprisingly.
Is cancer caused by nasty things we eat or breathe (like cigarette smoke), or by viruses, or do our tissues just lose their grip, as they do when we age? All of the above. The secret of cancer is identical with the secret of life: we are fallible but brilliantly maintained organic machines controlled by a library of genes. These are both a hoard of recipes for building proteins, and part of the factory that compiles them. If the recipes get scrambled, our cells cook up the wrong materials. To use Weinberg's own metaphor, we have genetic brakes and accelerators. Malignant cancer (uncontrolled, disordered growth) happens when the brakes (tumor suppressors) are disabled and the accelerator (growth promoters) is jammed on. Proofreading systems, and standard "suicide genes" that usually destroy corrupted cells, must also fail. Cancerous cells are so rare (given the vast number of proliferating cells in the body) because each fatal tumor requires a series of perhaps five or six random, independent proof-copying or mutational errors. The odds of all these errors striking a single cell are very small; but over a lifetime we copy cells ten thousand trillion times. The few multiple mutants to escape correction become immortal and uncontrolled, even growing their own blood supplies.
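The rarity argument can be put into rough numbers. With purely illustrative rates (none of these figures are Weinberg's), a crude multi-hit model looks like this:

```python
# Purely illustrative numbers: why needing several independent mutations
# makes cancer rare, yet not impossible over a whole body and lifetime.
def expected_transformed_cells(hits=6, p_per_division=1e-6,
                               divisions_per_lineage=5000, lineages=1e13):
    """Crude multi-hit estimate: the chance one cell lineage collects a
    specific mutation over its lifetime of divisions is roughly
    divisions * p; for all `hits` independent mutations, raise that to
    the power `hits`; then multiply by the number of lineages."""
    per_lineage = (divisions_per_lineage * p_per_division) ** hits
    return per_lineage * lineages

# With these assumed rates, six required hits give an expectation of a
# small fraction of one transformed cell; require only five hits and the
# expectation rises by a factor of 200.
six_hits = expected_transformed_cells()
five_hits = expected_transformed_cells(hits=5)
```

The point of the toy model is the steepness: each additional required error slashes the odds by orders of magnitude, which is why most of our ten thousand trillion cell copyings end harmlessly.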
Of course, one error might make the occurrence of others a thousand times more likely (just as having a tire blow out at high speed can make your windshield more likely to shatter), but this is not part of a cancer blueprint. Nor is environmental pollution as big a problem as you'd expect. In the last seventy years, adjusting for age and cigarette use, cancer rates have declined. If we can persuade people to give up smokinghard to do, at a time when the young have decided it's sexy againcancer rates really will plummet.
What of aging itself? Can we stop senescence? The same fast-turnover tissues prone to tumors also produce large amounts of the enzyme collagenase, which destroys proteins that help to protect skin from wrinkling. Medicos might block the action of collagenase, or modify it. Even normal eating ages us more rapidly. Cutting the dietary intake of rats, mice, and monkeys by 30 percent lowers their metabolism and extends life spans by up to 40 percent. Late in 1999, as we saw above, it was announced that the life expectancy of one kind of lab mouse has been extended 30 percent without near-starvation, by deleting the gene p66shc from its genetic recipe. It is suspected that, once we know just how this works, the same trick might work with people too. If so, we already verge on knowing what we need to do in order to live much longer, healthier lives, and if lower-calorie regimes work for people (as some medical enthusiasts claim) we can start right away.
From this mix of theory and practice, of new ideas and new facts, will emerge the altered bodies and minds (human, trans-human, and otherwise) of the twenty-first century.
Editing the code
Science can already literally replace the human heart, with a baboon's, say, or an artificial pump. In coming decades, it will gain increasing power literally to rewrite the genetic code that builds each heart from protein. The fundamentals of this enigmatic new scientific and political reality are best grasped by looking at precise examples like the inherited disease cystic fibrosis. One Caucasian in 25 carries a defective CF gene, so one child in 2500 will get this awful illness. Genetic mapping already provides prospective parents a simple, inexpensive test allowing them to assess their chances of creating a damaged child. Similar techniques permit prenatal screening for various crippling disorders, so only afflicted fetuses need be terminated, which can actually reduce the overall number of abortions.
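The one-in-2500 figure follows from simple recessive arithmetic. Here is a sketch, assuming (as the text does) the one-in-25 carrier rate and classical Mendelian inheritance:

```python
# Back-of-envelope check of the cystic fibrosis figures quoted above.
# Assumptions: carrier frequency of 1 in 25, and classic recessive
# inheritance (an affected child needs a defective copy from each parent).

carrier = 1 / 25          # chance a given parent carries one defective CF gene

both_carriers = carrier * carrier      # 1/625: both parents are carriers
child_affected = both_carriers * 0.25  # 1/4 of such couples' children get two copies

print(f"1 in {1 / child_affected:.0f}")  # 1 in 2500
```

Both parents must carry the gene (about one couple in 625), and one in four of their children inherits both defective copies, hence one child in 2500.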
More menacing, or exhilarating, are prospects of splicing new genetic instructions into either somatic cells (bone marrow, say, in the body of someone born with a defect, an intervention that dies with the recipient and in any case needs to be topped up regularly) or germ-line cells (where the new instruction is passed on to the recipient's children). The former are now being tested in humans, while the latter are forbidden, though they are commonplace in animal experiments, where human-mouse cell hybrids have long been a useful lab tool.
Although the Genome Project will accelerate the knowledge base for such interventions, there is very much more in an organism (especially a person) than is to be found even in a total DNA map. The ethical consequences are formidable, even before the slope of the Spike turns up into its headlong overhead ascent. We need to get our thinking under way well and truly in advance. It's often said that the DNA of our cells comprises a message, written in a "genetic code" or "language of the genes." Via a dizzying chemical virtuosity, its four-letter alphabet and three-letter words construct our tissues, so each of us is "written" into existence, within a specific, rich cultural environment, from that single recipe of 100,000 genes.
One of the most intriguing and hopeful of recent discoveries concerned the role of telomeres, nucleotide structures that cap the ends of chromosomes and help protect their stability. Since chromosomes are strings of recipe genes and control codons (the design manuals, as it were, in the core of every cell), this suggested that the repair and integrity of the telomeres was itself a key to the reliable operation and preservation of tissues. That possibility is now in question, but it makes a fascinating story of how science can test new theories of longevity.
Some two decades ago, it was found that these crucial features are built out of short nucleotide sequences (TTAGGG) that are repeated again and again. In humans, these are strings two thousand chunks long, or should be. Electrifyingly, it appeared that normal aging was related to a design feature in standard cellular replication, apparently evolved as a precaution against runaway cancer formation. Each time a cell in your body divides to replace itself (and not all of them do so), the telomere tips tend to shorten. That doesn't happen to germ-line cells (those producing ova and spermatozoa). Apparently, special machinery was in place to guard the crucial sex cells from deterioration.
Take a human cell derived from a newborn baby, place it in a nutrient culture on a petri dish, and it will divide up to 90 times. A cell from an old person of 70 has far less kick left in it, or so it seems; it will stop replicating after 20 or 30 divisions. It has lapsed into senescence, reaching the celebrated Hayflick limit described some 30 years ago by Leonard Hayflick and associates at the Wistar Institute. Might that limit be a by-product of telomere shortening? Perhaps cells gauge their permitted longevity by checking how much of the cap remains.
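That countdown hypothesis can be caricatured in a few lines of code. The two-thousand-repeat cap comes from the text; the repeats lost per division and the senescence floor are illustrative numbers of mine, chosen so that a fresh cell manages the 90 divisions quoted above:

```python
# Caricature of the telomere-countdown explanation of the Hayflick limit.
# The loss-per-division and senescence floor are illustrative assumptions
# chosen to match the ~90-division figure for a newborn's cell.

CAP_REPEATS = 2000       # "strings two thousand chunks long" of TTAGGG
FLOOR = 200              # assumed length below which the cell goes senescent
LOSS_PER_DIVISION = 20   # assumed repeats shaved off at each division

def divisions_left(repeats: int) -> int:
    """Count divisions until the telomere cap falls below the floor."""
    count = 0
    while repeats - LOSS_PER_DIVISION >= FLOOR:
        repeats -= LOSS_PER_DIVISION
        count += 1
    return count

print(divisions_left(CAP_REPEATS))  # a newborn's cell: 90 divisions
print(divisions_left(700))          # a shorter cap: 25 divisions, an "old" cell
```

On this caricature, a cell "knows" its remaining lifespan only through the length of its cap, which is exactly why the idea was so electrifying, and why the later complications described below mattered so much.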
It turned out to be more complicated than this, actually, because the tips can also grow by adding on newly synthesized units. Cancer cells, as we know to our cost, regain the vivacity of youth, and it can hardly be a chance coincidence that their telomeres tend to be maintained in tip-top condition (but this, too, is not always so). Still, it seemed that if we learned how to turn off their telomere repair system, selectively, maybe we could defeat the tumors. And by enhancing telomeric repair in ordinary cells, maybe we could make them immortal.
The device that cells use to repair their chromosomal end caps is a specialized enzyme, telomerase. So we would face an agonizing choice: fight tumors by denying them telomerase (by introducing a tailored antagonist), and thereby, perhaps, hasten the body's general decline into senescence. Or enhance the longevity of all cells, while raising the risk that an opportunistic cancer will then burst into frantic, gobbling life.
The case for telomerase as a key to longevity is still uncertain. In March 1999, scientists at the Dana-Farber Cancer Institute of Harvard Medical School and Johns Hopkins School of Medicine announced in Cell that mice engineered not to make telomerase lost telomeres as they aged and suffered progressive defects in organs with rapid turnover of cells. Hair grayed and fell out earlier, stress hit harder, and wounds healed more slowly. What's more, the cells that build blood, immunity, and reproductive function deteriorated significantly in subsequent generations. Dr. Calvin B. Harley, the chief scientific officer at Geron Corporation, said: "This is a landmark study in telomere and telomerase biology. It underscores the potential of this field to lead to new medicines for treating various chronic, debilitating age-related diseases including cancer."
Geron's work with telomeres and telomerase, and similar pioneering studies by Drs. Woodring Wright and Jerry Shay at the University of Texas Southwestern Medical Center in Dallas, have at least partially overturned the Hayflick limit in healthy cells. Ordinary skin cells with telomerase deliberately activated have now been growing in research labs since 1997, multiplying in their sterile glass containers without error or cancer about once a day. These many hundreds of extra undamaged divisions do not mean telomerase is the key to eternal life; other factors are also involved in promoting cellular longevity, and brain cells, for example, do not renew themselves. Indeed, Cambridge University molecular biology researcher Aubrey de Grey, an authority on mitochondria and aging, has stated bluntly that "cell division is too infrequent in the body to give telomere shortening any chance of playing a role in aging." It remains to be seen whether Geron's work with the enzyme will help lengthen life span. It is known that Dolly, the cloned sheep, has telomeres 20 percent shorter than normal, so she and other clones might be doomed to an abbreviated existence.
One of the most tragic of all medical disorders is Werner's syndrome, which causes its young victims to "age" with shocking speed. By their late twenties they resemble shrunken geriatrics. Their arteries and heart muscles are a mess, but, oddly enough, their brains are unaffected. Werner's is caused by a defect in just a single recessive gene, of which the luckless victims possess two copies (as cystic fibrosis patients have two copies of the recessive CF gene). It codes for a helicase enzyme, which controls the unwinding of the DNA helix during replication. Loss of this key gadget stops other repair enzymes getting to the coded sequences inside each cell and "proofreading" and repairing them. The implication is that this molecular repair system might be boosted, although not simply by increasing the dosage of helicase (which, paradoxically, can be lethal). Such disorders show that senescence is not a simple "natural" curse that we all must endure. Science is opening paths to improved cell maintenance and replication, simultaneously protecting against cancer and increasing longevity.
No doubt, much more detailed understanding will emerge in laboratories during the next decade or two. It is still not impossible that subtle control of telomere maintenance will help extend human life spans. After all, we know that the cells giving rise to viable sperm and eggs are effectively immortal. There seems no reason why the machinery that protects them should not be craftily adapted and extended to the rest of our cells. In 1996, neuroscientist and physician Dr. Michael Fossel published Reversing Human Aging, preaching the gospel of telomerase therapy, which he claimed would be generally available "before 2015."
The book was dismissed by a commentator in New Scientist magazine as "feverishly optimistic," which seems just. In view of recent papers on telomere terminals in Science and Nature, and findings by such authorities as Blackburn, Dr. Titia de Lange, a cell biologist at Rockefeller University, and Nobelist Thomas R. Cech, at the University of Colorado, Fossel has jumped the gun. Certainly, natural selection has not tumbled to this trick or, if it has ever done so briefly, the genomes of those individuals failed to pass through the evolutionary sieve. In fact, there are evolutionary pressures causing many species to age and die. Genotypes that emphasize efficiency in maintaining the body they build tend to leave, in the long run, fewer offspring than those making little or no provision for correcting cellular errors once the prime breeding season is done. As well, some proteins that help an infant swiftly reach healthy breeding age can have terrible, even lethal consequences later on. That doesn't matter, though, to the blind mechanism of evolution. Once your genes are replicated into offspring that will be fertile in their turn, that's the end of evolution's accounting. Yet Fossel's telomeric theory may still be correct in principle (although it will only be part of the aging story). Knowing that, we can fix it, from the outside, once we know how.
It seems very hard for people to accept this prospect, despite much media excitement over recent breakthroughs in stem-cell research and other longevity-related advances. Conservative New York Times commentator William Safire, in his first column for the year 2000, edged up to the possibility. "If the next-to-last year of the second millennium is remembered for anything, it will be for the discovery of the human body's ability to regenerate itself," Safire stated with surprising boldness. Citing a conversation with Dr. Guy McKhann, head of the Harvard Mahoney Neuroscience Institute, he added: "These wild-card cells may be found not just in embryos but in adult bodies, and could, in effect, reset the clock, time and again, doubling and redoubling the life span." Safire went so far as to imagine "readers of a distant tomorrow" who millennia hence will say, " 'You know, this fellow was incredibly prescient.' " Almost inevitably, Safire adds the moralistic rider for the majority of his readers who cannot readily face this outcome: "And another will respond, with all human skepticism, 'Sure, he was right, but do you really want to live forever?' "
Why not, though? Well, because of an antiquated, superstitious fear that we would thereby "break Nature's law" or "interfere with evolution's plan."
Evolution is not a planner
Despite the beautiful patterns of life, evolution has no plan. It is a gigantic, stupid lottery. Natural diversity, Stephen Jay Gould tells us, is usually attributable to nothing more interesting than a drunkard's walk away from a wall. Wherever she goes, the drunk will end up either smashing into the wall or toppling, after a meandering course along the pavement, into the gutter. This is a dangerous metaphor, but Gould does not intend to denigrate humanity, just to rebuke our pride.
Emergent life starts simple (against the "left wall" of the complexity chart) because it can't start any other way. Mostly it stays simple. Even now, arguably, most of the earth's biomass is elegant, uncomplicated bacteria. Recently, archaic forms of life have been found dwelling happily deep under the crust. As much living material, simple but persistent, might be spread under our feet as floats and gallops and soars in all the familiar habitats of the globe.
Life, of course, never stays still. Mutation gnaws at each DNA message, and ruthless environmental selection winnows the alternatives. Over time, some variants grow more complex, wandering off toward the open-ended or right-hand edge of the chart. Others wander back again. Humans and other large animals exist way off on the rightmost tail of the curve. But this definitely does not imply that some imaginary "surging life force" has been struggling to create us!
Before Darwin, people supposed that God had done the design work. Evolution hinted that we could replace God-the-designer with some kind of drive toward complexity. Sadly, that notion is probably just as erroneous and self-preening. It's true that some forms have grown more complex in the billions of years since life's emergence on Earth. But this is not, Gould asserts, because there is any "complexification drive." It's a side effect of the wall over there on the left, and the vast eons of life's drunken stumbles.
In such a universe, we are freed from fears of impiety. Since evolution does not have a plan for us, we may choose one for ourselves. In fact, that is what we have always done, whether we knew it or not. So defeating death need be no more absurd a goal than finding remedies for nearsightedness, or asthma (I have been on daily drugs for asthma for more than a quarter of a century, and it has improved my life beyond recognition), or, say, our inability at birth to read and write, or to fly a jet by instinct.
Ultimately, we might expect to resolve the medical components of what is called "aging" (the damage and, at last, senescence that now accumulate with the passage of time) and find ways to outwit them. In the longest term of the history of intelligent life in the universe, it will surely prove to be the case (tragic, but blessedly brief in comparative duration) that the routine and inevitable death of conscious beings was a temporary error, quickly corrected.
Recall William Safire's assumption that human wisdom speaks against the lure of living forever. Since we have never had this option, I rather think that traditional wisdom is recommending the abandonment of impossible and bruising dreams. Once those dreams approach realization, however, the situation is reversed. Yet that attainment will not be without its inevitable strange consequences. Vinge has declared:
Radical optimism has apocalyptic endpoints, even if there are no hidden "monkey's paw" gotchas. It is interesting that the prospect of immortality leads to many of the same problems as increased intelligence. I could imagine living a thousand years, even ten thousand. But to live a million, a billion? For that, I would have to become something greater, ultimately something much, much greater.
A clone in sheep's clothing
Perhaps the most startling biological breakthrough of the end of the twentieth century was Dolly, the cloned sheep. Oddly enough, everyone had been awaiting the arrival of cloning, and yet nobody expected to see it, especially the scientists, who are usually the most conservative when it comes to predicting the near future.
Dolly was already seven months old by the time Nature published details of her unorthodox conception. Dr. Ian Wilmut's team, at Roslin Institute in Scotland, took the nucleus of an adult udder cell, tweaked it in various ways, and transplanted it into an unfertilized ovum purged of its own DNA, then implanted the viable embryo in a surrogate mother sheep. The task was not easy. Many hundreds of attempts were made before the successful pregnancy. Still, now that the method has been proved, the technology of cloning is swiftly maturing. We already know that clones made by nuclear transfer into eggs from donors (the method used to build Dolly) work quite well with a variety of species, but individual cloned "twins" can turn out quite different from each other! In December 1999, Roslin Institute scientist Keith Campbell, who worked with Wilmut in creating Dolly, announced that ram clones had diverged with age. "You would not know they are clones," he said, since they now vary in size, appearance, and temperament. Why so? Perhaps mitochondrial DNA and other factors in the donor eggs' cytoplasm trigger the nuclear DNA in different ways.
By the time any experiments are made in human cloning (banned in many countries, but sure to be attempted sooner or later), the procedures will certainly need to be fully understood and reliable. That, after all, is the way it worked with in vitro human fertilization, commonplace today despite initial skepticism and furious ethical debate. For now, ethicists remain deeply concerned by the cloning prospect. Brave New World! A series of inevitable, but dubious, horror stories has been presented in the press and on television:
Saddam Hussein or Adolf Hitler cloned into ranks of storm troopers. Hardly likely. Even if character is genetically ordained, which is doubtful, why would they obey their sarge? Besides, armies, like any organization, need diversity and variation (however much they appear to strive to stamp it out).
Rich old men purchasing identical heirs, bypassing nature's plan. This is possible, but interfering with an imaginary plan ("playing God," as it's called) is not what's wrong with the idea. As we've seen, nature doesn't have a "plan." Nature, as Darwin showed us a century and a half back, is a blind, heedless machine that eats its children and kills the "unfit." Indeed, it kills the "fit" as well, soon enough. We can do better than nature. Let's hope so, anyway.
It goes against God's express prohibition. Oh? Which chapter and verse in the Bible or the Koran forbids cloning by the transplantation of adult DNA into an enucleated, unfertilized egg? It is true that the Pope very quickly denounced the practice, declaring it sinful. He said the same about in vitro procedures. It was the same Pope, one recalls, who only recently apologized to the memory of Galileo and Darwin, admitting that they had been treated badly by the Church, which for decades if not centuries had denounced their teachings as wicked. Perhaps in a few more decades or centuries, cloning will be pardoned as well.
Cloning steals women's sacred reproductive powers. Well, not just yet. If you wish to clone yourself (illegally), a human womb will still be needed, and a woman's ovum, which calls for cooperation rather than theft. In the medium term, it's true, there might be gene-engineered animal wombs. In the long term, a synthetic uterus. But it's not necessarily a matter of sexism. Many women will wish to use these services.
Actually, much of the uproar has been absurd from the start, based on a faulty understanding of how cloning operates. We do not go into metaphysical hysteria when twins or triplets are born. Yet any group of genetically identical humans, whether created by design or "natural" accident, is essentially just that. Are we terrified that quintuplets "share a soul," or must tussle for one?
As for fascistic breeding programs: do we see them right now? No. Are there well-funded orphanages filled with teams of children produced from the in vitro fusing of spermatozoa and ova from the powerful, the brilliant, the athletic? None that I've heard of. It could be done. It could have been going on for centuries, millennia. Why should cloning alter our reluctance to breed babies like sheep?
Actually, cloning makes less sense than a harem, if you're planning to make the world over in your own image. After all, your cloned copy isn't exactly the same as your twin, even aside from the fact that we hardly ever find one identical twin thirty years younger than the other.
As we've seen, somatic cells get old and tired. When they are copied in the course of life, errors creep in. So cloning yourself from a bit of your own tissue means the new baby begins with damaged DNA: not a terrific start in life. Not to mention the possible impact of telomere degradation in adult cells. Poor little Dolly is some years closer to the Hayflick limit than her woolly playmates. It is now known that her telomeres are a fifth shorter than usual, so perhaps she will keel over earlier. Not that this matters with lamb chops on the hoof, but it certainly does with human babies. So if anyone wished to adopt their own healthy cloned copies (bearing in mind that they might not turn out identical anyway), it might be advisable to retain frozen samples of their own embryonic tissue, not something that hospitals and labs do just yet. At earliest, this would be an option of the children born into the twenty-first century. On the other hand, it is now known that stem cells from your own body can be provoked into growing any kind of tissue required, and so you might imagine growing an entire backup body (perhaps without a functioning, aware brain) from your own stem cells. As we shall see, this is surely not the way to go; it is a misunderstanding of the technological possibilities, let alone the moral issues. But we need to think this through.
It is usually said flatly that these suggestions are simply morally outrageous, not to be considered for a moment. Mightn't rich dictators have duplicate, younger bodies grown and exercised, in cloning farms, for use as hosts in brain transplant rejuvenation schemes (a standard horror scenario, despite the horrendous technical obstacles to brain transplants)? When their bodies are worn out, just call in the neurosurgeons and have their brains popped into a fresh new body. This scenario is remotely possible, but it misses two fairly obvious objections (aside from the fact that we already have laws against slavery and mutilation).
First, your brain is as old as the rest of you. If your heart and liver and eyes are wearing out, your brain probably hasn't got much longer to go either. In fact, many people deteriorate and die precisely because that incredibly complex and vulnerable organ, the human brain, has failed ahead of the rest of the organs. Besides, I wouldn't wish my brain to be grafted into a pre-aged clone.
Second, it is extraordinarily tricky to repair severed nerves, even in fingers or accidentally amputated limbs. Can you imagine what's involved in the wildly difficult task of disconnecting a brain from its sensory organs, and the rest of its body, and rewiring it into another body? The excruciating pain as the nerves learn their new connections? The tedium and frustration of learning every basic skill again like a baby, but a baby with an adult mind trapped inside the skull.
In the long run, using some of the future technologies to be explored in later chapters, even this might be feasible without discomfort. Tiny machines no larger than molecules might pour into the brain through the bloodstream, nipping and tucking and tagging and rewiring. But the point to keep in mind is that once we have that level of technology, with the artificial intelligence support systems needed to run it, we won't need anything as coarse and morally offensive as transplants into cloned duplicates.
Besides, to repeat: a cloned double is your twin, and a young, defenseless child at that. If you had the chance today, would you treat your own twin as nothing more than a convenient assemblage of spare parts? Injured in a terrible accident, would you happily order your twin's brain removed so your own could be implanted in the healthy body? I didn't think so.
Could one attempt to sidestep the implications of atrocity by growing a replica body with the genetic pathway for brain development switched off? Thus, a brainless cranium, presumably "soulless," might await its new tenant.
Not a good idea. Aside from ethical repugnance, natural genetic or developmental errors such as microcephaly (tiny brain) or, worse, anencephaly (no brain) result in deformity of the body. In the anencephalic case, such damaged babies die shortly after birth. The complete brain isn't just the organ of thought and feeling; it's the control center for the entire body's development.
So the impact of the cloning breakthrough will be more modest than alarmists fear. We will see benefits from the science it yields, now that specialists can test their theories using experimental and control animals that are literally identical (aside from inherited factors from those parts of the cell not in the nucleus, such as maternal mitochondria, which might not be insignificant). One pathway sure to yield benefits is the selective growth of compatible organs from your own stem cells. In 2000, Japanese researchers led by Tokyo University biologist Makoto Asashima announced that they had grown frog eyes and ears simply by cultivating embryo cells in different concentrations of retinoic acid, which somehow triggers the expression of genes needed for the different organs. Kidneys developed by a similar process had been transplanted into frogs, which survived longer than a month. In the long run, such methods will teach us many of the answers we need to know to protect both body and brain against damage, deterioration, and perhaps death itself.
Waiting in the freezer
Yet even if it turns out that nothing will stay the Grim Reaper this side of the Spike, all is not lost. An answer has been suggested (cryopreservation, or very deep freezing) that could harbor you into the borderlands and beyond. The hope is that the medical science of the next few decades (or even centuries), surely bound to be more advanced than ours, will finally gain the know-how to revive you from temporary death.
Forty-odd years ago, that method was nothing better than a narrative device in fanciful stories. When Dr. Robert C. W. Ettinger proposed it in earnest in his 1964 book The Prospect of Immortality, few took him seriously. Today a number of private companies exist that will accept your money or insurance policy and contract to preserve your corpse (as your temporarily defunct person will be crassly regarded by many, including the authorities) in a chilly storage medium for as long as it takes.
Meanwhile, cryonicists are pressing ahead with revival research, within the sadly restricted limits of their budgets. Ultimately, the cryonics supporters hope, your preserved, undeteriorated, but inanimate body and brain will be thawed, its damaged condition made good (which calls for repairs first to whatever killed you, and then to any further injuries inflicted postmortem by the cooling and thawing protocols), and you will awaken. Ralph Merkle, who expects the repairs to be made by nanomachines, put it like this in a review paper:
Cryonic suspension is a method of stabilizing the condition of someone who is terminally ill so that they can be transported to the medical care facilities that will be available in the late 21st or 22nd century…While there is no particular reason to believe that a cure for freezing damage would violate any laws of physics (or is otherwise obviously infeasible), it is likely that the damage done by freezing is beyond the self-repair and recovery capabilities of the tissue itself. This does not imply that the damage cannot be repaired, only that significant elements of the repair process would have to be provided from an external source.
The danger that the freezing process itself damages the tissues to such an extent they cannot be repaired in the future has always been recognized. Originally, in the 1960s and 1970s, whole bodies were drained of blood and perfused with a chemical cocktail designed to prevent ice crystals forming inside the cooling tissues, crystals that would lacerate cells hideously from within during rewarming. Today, superior perfusants are under development, and many clients choose the much cheaper method of having just their heads frozen ("neurological suspension," as it's politely called). The reasoning, however macabre, is that by the time successful thawing is in place, superior technology should have no trouble cloning a new, youthful torso to replace the sacrificed portions. How much fun you'd have acclimatizing to a world at least several generations sundered from your own, and perhaps already transformed by the convulsions of the Spike, is less often canvassed.
Two obvious answers spring to mind. We may hope that counseling and psychological practices will improve in line with the requisite medical advances. Indeed, by the time cryonics clients are ready to be awakened, perhaps technology will provide neural chips and enhancers to bring the revived dead swiftly up to speed. The second answer is grimmer: if you don't like the brave new world, you can always…well…kill yourself. Permanently, this time.
In the meantime, not too many are opting for cryo services, sometimes not even those who have signed up. The celebrated guru and transhumanist Timothy Leary, who died in 1996, declined at the last minute to confirm the procedure, although a crew was on standby. However, Ettinger's late wife Mae was cryosuspended in March 2000, as was futurist FM-2030 in July 2000, and psychology professor Dr. James Bedford has been in suspension, moved from one cryonics support organization to another, for more than three decades. Bedford died of renal cancer aged 73 on January 12, 1967. Volunteers injected his corpse with protective fluids and slowly lowered his temperature in a liquid nitrogen bath to minus-196 degrees Celsius. If the Spike is on schedule, he might have to wait as long again, or more, for resurrection.
Some 84 cryonics patients are now preserved in liquid nitrogen at four different cryonics companies in the United States.
As Ettinger put it many years ago, with a certain bitter whimsy: many are cold but few are frozen.
Personally, I'd prefer to avoid cryonic methods by outliving the need for them, so I'm also eager to see metabolic and genetically engineered repair processes retrofitted when those techniques come on line. I want my damaged teeth fixed, and my hair back, and my back back, for that matter. And, yes, vigorous, indefinite longevity would be a useful bonus. I wouldn't sneer at physical immortality.
As we shall now see, such hopes for Promethean technology are very far from being just a wistful fantasy. The Spike could change everything utterly, in ways too ruinous and horrifying to regard with merely human gaze. But then again, it might bring a kind of…transhuman redemption.
Copyright © 2001 by Damien Broderick