We sometimes forget that our information technologies and our desire and ability to communicate have long and complicated histories. Gleick, best-selling author of Chaos: Making a New Science and Genius: The Life and Science of Richard Feynman, acknowledges this by exploring the history of information, in all its modern ambiguity, and covering ideas, people, and technologies involved in its development. Gleick discusses African talking drums, information theory and its contributions to physics and biology, cryptography, alphabets and scripts, dictionaries, telegraphs, and more. Although accessible and enjoyable, from chapter to chapter the book sometimes feels loose and unstructured, as the author jumps too quickly from one subject to another. Too often he emphasizes biographical information about historical individuals at the expense of a coherent historical account. VERDICT Despite the disjointed narrative, this is recommended for general readers interested in the history of information technology and enamored of the information age who wonder what it means to characterize our era as such.—Jonathan Bodnar, Georgia Inst. of Technology Lib. & Information Ctr., Atlanta
The Information is so ambitious, illuminating and sexily theoretical that it will amount to aspirational reading for many of those who have the mettle to tackle it. Don't make the mistake of reading it quickly. Imagine luxuriating on a Wi-Fi-equipped desert island with Mr. Gleick's book, a search engine and no distractions. The Information is to the nature, history and significance of data what the beach is to sand.
The New York Times
Gleick ranges over the scientific landscape in a looping itinerary that takes the reader from Maxwell's demon to Gödel's theorem, from black holes to selfish genes. Some of the concepts are challenging, but as in previous books…Gleick provides lucid expositions for readers who are up to following the science and suggestive analogies for those who are just reading for the plot. And there are anecdotes that every reader can enjoy…
The New York Times
…rich and fascinating…The book explains more fully and more systematically than any other how the foundations of our information order were laid.
The Washington Post
Publishers Weekly - Publishers Weekly Audio
Overwhelmed as we are with today's unceasing gush of information—some essential, some useless, and much falling into the broad middle of the spectrum—a study of how we got here and the innovators who played a part in creating the dazzling web of contemporary communications could not be more timely. Gleick's survey of pioneers of information, from Alan Turing to Claude Shannon, follows the many-layered strands forming the information superhighway. Rob Shapiro, slightly nasal, reads in measured fashion, pausing luxuriously between sentences and paragraphs to allow Gleick's own gush of information to sink in. Shapiro's stateliness makes for an artful contrast with Gleick's study of go-go modernity; listening to the audiobook manages not to add to the feeling of being overwhelmed. A Pantheon hardcover. (Mar.)
In 1948, Bell Laboratories announced the invention of the transistor, a semiconductor device with the revolutionary ability to do anything a vacuum tube could do, but more efficiently. While the revolution in communications was taking these steps, Bell Labs scientist Claude Shannon wrote a monograph for them, A Mathematical Theory of Communication, in which he introduced the word bit to name a fundamental unit of computer information. As bestselling author Gleick (Chaos) astutely argues, Shannon's neologism profoundly changed our view of the world; his brilliant work introduced us to the notion that a tiny piece of hardware could transmit messages that contained meaning and that a physical unit, a bit, could measure a quality as elusive as information. Shannon's story is only one of many in this sprawling history of information. With his brilliant ability to synthesize mounds of details and to tell rich stories, Gleick leads us on a journey from one form of communicating information to another, beginning with African tribes' use of drums and including along the way scientists like Samuel F. B. Morse, who invented the telegraph; Norbert Wiener, who developed cybernetics; and Ada Byron, the great Romantic poet's daughter, who collaborated with Charles Babbage in developing the first mechanical computer. Gleick's exceptional history of culture concludes that information is indeed the blood, the fuel, and the vital principle on which our world runs. (Apr.)
From the Publisher
“Magnificent…this elegant, insightful study reminds us that we have always been adrift in an incomprehensible universe.” —Los Angeles Times, Best Books of 2011
“Grand, lucid and awe-inspiring…information is about a lot more than what human beings have to say to each other. It’s the very stuff of reality, and never have its mysteries been offered up with more elegance or aplomb.” —Salon, Best of 2011
“With his ability to synthesize mounds of details and to tell rich stories, Gleick ably leads us on a journey from one form of communicating information to another.” —Publishers Weekly, Top 100 Books of 2011
“Ambitious, illuminating and sexily theoretical.” —New York Times
“Gleick does what only the best science writers can do: take a subject of which most of us are only peripherally aware and put it at the center of the universe.” —Time
“The Information isn’t just a natural history of a powerful idea; it embodies and transmits that idea, it is a vector for its memes . . . and it is a toolkit for disassembling the world. It is a book that vibrates with excitement.” —Cory Doctorow, Boing Boing
“No author is better equipped for such a wide-ranging tour than Mr. Gleick. Some writers excel at crafting a historical narrative, others at elucidating esoteric theories, still others at humanizing scientists. Mr. Gleick is a master of all these skills.” —The Wall Street Journal
“Extraordinary in its sweep . . . Gleick’s story is beautifully told, extensively sourced, and continually surprising.” —The Boston Globe
“Audacious. . . . Like the best college courses: challenging but rewarding.” —USA Today
“Challenging and important. . . . This intellectual history is intoxicating—thanks to Gleick’s clear mind, magpie-styled research and explanatory verve.” —The Plain Dealer
“Gleick’s skill as an explicator of counterintuitive concepts makes the chapters on logic . . . brim with tension.” —The Oregonian
“The Information puts our modern ‘information revolution’ in context, helping us appreciate the many information revolutions that preceded and enable it. The internet certainly has changed things, but Gleick shows that it has changed only what has already changed many times before. . . . His enthusiasm is contagious.” —New Scientist
“Impressively, reassuringly, Gleick’s substantial, dense book comes as close as anything of late to satiating [the] twin demand for knowledge and clarity.” —The Irish Times
“This is a work of rare penetration, a true history of ideas whose witty and determined treatment of its material brings clarity to a complex subject.” —The Daily Telegraph (London)
“The page-turner you never knew you desperately wanted to read.” —The Stranger
“To grasp what information truly means—to explain why it is shaping up as a unifying principle of science—Gleick has to embrace linguistics, logic, telecommunications, codes, computing, mathematics, philosophy, cosmology, quantum theory and genetics. . . . There are few writers who could accomplish this with such panache and authority. Gleick, whose 1987 work Chaos helped to kickstart the era of modern popular science, is one.” —The Observer (London)
“Enlightening. . . . Engagingly assembled.” —Nature
“Mesmerizing. . . . As a celebration of human ingenuity, The Information is a deeply hopeful book.” —Nicholas Carr, The Daily Beast
“An amazingly erudite and yet highly readable account of why and how information plays such a central role in all our lives, Gleick’s The Information is amongst the most profound books written about technology over the last few years.” —TechCrunch TV
“The web Gleick has woven is a rare one, a whole that envelops and exceeds its many parts, which certainly suits his topic. His contribution—too easily underrated in a work that synthesizes the ideas of others—lies in linking fields of science that aren’t connected in a formal sense. By the close of the book you cannot think of information as you might have before.” —Tim Wu, Slate
“[Gleick] is wrestling with truly profound material, and so will the reader. This is not a book you will race through on a single plane trip. It is a slow, satisfying meal.” —David Shenk, Columbia Journalism Review
“Gleick connects the dots that connect information to us, and there are many dots. . . . Here in one volume is the great story of the most important element at work in the world, and its story is well told. I had forgotten what a fantastic stylist Gleick is. It’s a joy to read him talking about anything.” —Kevin Kelly, The Technium
“Packed with the rich history of human thought and communication through the ages.” —PopMatters
Think your inbox is jammed now, your attention span overtaxed? It's only the beginning, writes pop-science writer Gleick (Isaac Newton, 2003, etc.) in this tour of information and the theory that goes along with it.
It has been a long progression toward the infoglut of today. The author chooses as a logical if unanticipated starting point the talking drums of Africa, an information technology that delivers a satisfying amount of signal in all the noise. From those drums to Morse code, and indeed to binary signaling, is a pretty short hop—and one that Gleick takes, writing along the way about such things as how Samuel Morse and his partner decided which letters were the most used in English, and therefore merited the shortest sequences of dot and dash. The author tours through the earliest information technologies—the intaglio scratches of stone and bone on prehistoric caves, the emergent ideographs of the first Chinese scripts and so on—before getting into the meatier mathematics of more recent times, which led Charles Babbage, say, to ponder the workings of the first oh-so-clunky computers. As Gleick writes, Babbage surrounded himself with fellow science nerds who agreed to write and send scientific papers to one another every six months, though if a member were delayed by a year, "it shall be taken for granted that his relatives had shut him up as insane." The discussion becomes more complex with the intersection of modern physics. In the emergence of Claude Shannon and Alan Turing's first stirrings of modern information theory, the author's skills as an interpreter of science shine. None of his discussion will be news to readers of Tim Wu's exemplary The Master Switch (2010) or of the old CoEvolution Quarterly, but Gleick covers the ground in a way that no other book quite manages to do.
Gleick loves the layered detail, which might cause some to sigh, "TMI." But for completist cybergeeks and infojunkies, the book delivers a solid summary of a dense, complex subject.
Read an Excerpt
Into the Meme Pool
(You Parasitize My Brain)
When I muse about memes, I often ﬁnd myself picturing an ephemeral ﬂickering pattern of sparks leaping from brain to brain, screaming “Me, me!”
—Douglas Hofstadter (1983)
“Now through the very universality of its structures, starting with the code, the biosphere looks like the product of a unique event,” Jacques Monod wrote in 1970. “The universe was not pregnant with life, nor the biosphere with man. Our number came up in the Monte Carlo game. Is it any wonder if, like a person who has just made a million at the casino, we feel a little strange and a little unreal?”
Monod, the Parisian biologist who shared the Nobel Prize for working out the role of messenger RNA in the transfer of genetic information, was not alone in thinking of the biosphere as more than a notional place: an entity, composed of all the earth’s life-forms, simple and complex, teeming with information, replicating and evolving, coding from one level of abstraction to the next. This view of life was more abstract—more mathematical—than anything Darwin had imagined, but he would have recognized its basic principles. Natural selection directs the whole show. Now biologists, having absorbed the methods and vocabulary of communications science, went further to make their own contributions to the understanding of information itself. Monod proposed an analogy: Just as the biosphere stands above the world of nonliving matter, so an “abstract kingdom” rises above the biosphere. The denizens of this kingdom? Ideas.
Ideas have retained some of the properties of organisms. Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.
Ideas have “spreading power,” he noted—“infectivity, as it were”—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are “just as real” as the neurons they inhabit. Ideas have power, he said.
Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet. . . .
I shall not hazard a theory of the selection of ideas.
No need. Others were willing.
Richard Dawkins made his own connection between the evolution of genes and the evolution of ideas. His essential actor was the replicator, and it scarcely mattered whether replicators were made of nucleic acid. His rule is “All life evolves by the differential survival of replicating entities.” Wherever there is life, there must be replicators. Perhaps on other worlds replicators could arise in a silicon-based chemistry—or in no chemistry at all.
What would it mean for a replicator to exist without chemistry? “I think that a new kind of replicator has recently emerged on this planet,” he proclaimed at the end of his ﬁrst book, in 1976. “It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.” That “soup” is human culture; the vector of transmission is language; and the spawning ground is the brain.
For this bodiless replicator itself, Dawkins proposed a name. He called it the meme, and it became his most memorable invention, far more inﬂuential than his selﬁsh genes or his later proselytizing against religiosity. “Memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation,” he wrote. They compete with one another for limited resources: brain time or bandwidth. They compete most of all for attention. For example:
Ideas. Whether an idea arises uniquely or reappears many times, it may thrive in the meme pool or it may dwindle and vanish. The belief in God is an example Dawkins offers—an ancient idea, replicating itself not just in words but in music and art. The belief that the earth orbits the sun is no less a meme, competing with others for survival. (Truth may be a helpful quality for a meme, but it is only one among many.)
Tunes. This tune [musical notation not reproduced here] has spread for centuries across several continents. This one [musical notation not reproduced here], a notorious though shorter-lived invader of brains, overran an immense population many times faster.
Catchphrases. One text snippet, “What hath God wrought?” appeared early and spread rapidly in more than one medium. Another, “Read my lips,” charted a peculiar path through late twentieth-century America. “Survival of the ﬁttest” is a meme that, like other memes, mutates wildly (“survival of the fattest”; “survival of the sickest”; “survival of the fakest”; “survival of the twittest”; . . . ).
Images. In Isaac Newton’s lifetime, no more than a few thousand people had any idea what he looked like, though he was one of England’s most famous men, yet now millions of people have quite a clear idea— based on replicas of copies of rather poorly painted portraits. Even more pervasive and indelible are the smile of Mona Lisa, The Scream of Edvard Munch, and the silhouettes of various ﬁctional extraterrestrials. These are memes, living a life of their own, independent of any physical reality. “This may not be what George Washington looked like then,” a tour guide was overheard saying of the Gilbert Stuart painting at the Metropolitan Museum of Art, “but this is what he looks like now.” Exactly.
Memes emerge in brains and travel outward, establishing beachheads on paper and celluloid and silicon and anywhere else information can go. They are not to be thought of as elementary particles but as organisms. The number three is not a meme; nor is the color blue, nor any simple thought, any more than a single nucleotide can be a gene. Memes are complex units, distinct and memorable—units with staying power. Also, an object is not a meme. The hula hoop is not a meme; it is made of plastic, not of bits. When this species of toy spread worldwide in a mad epidemic in 1958, it was the product, the physical manifestation of a meme, or memes: the craving for hula hoops; the swaying, swinging, twirling skill set of hula-hooping. The hula hoop itself is a meme vehicle. So, for that matter, is each human hula hooper—a strikingly effective meme vehicle, in the sense neatly explained by the philosopher Daniel Dennett: “A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.” Hula hoopers did that for the hula hoop’s memes—and in 1958 they found a new transmission vector, broadcast television, sending its messages immeasurably faster and farther than any wagon. The moving image of the hula hooper seduced new minds by hundreds, and then by thousands, and then by millions. The meme is not the dancer but the dance.
We are their vehicles and their enablers. For most of our biological history they existed ﬂeetingly; their main mode of transmission was the one called “word of mouth.” Lately, however, they have managed to adhere in solid substance: clay tablets, cave walls, paper sheets. They achieve longevity through our pens and printing presses, magnetic tapes and optical disks. They spread via broadcast towers and digital networks. Memes may be stories, recipes, skills, legends, and fashions. We copy them, one person at a time. Alternatively, in Dawkins’s meme-centered perspective, they copy themselves. At ﬁrst some of Dawkins’s readers wondered how literally to take that. Did he mean to give memes anthropomorphic desires, intentions, and goals? It was the selﬁsh gene all over again. (Typical salvo: “Genes cannot be selﬁsh or unselﬁsh, any more than atoms can be jealous, elephants abstract or biscuits teleological.” Typical rebuttal: a reminder that selﬁshness is deﬁned by the geneticist as the tendency to increase one’s chances of survival relative to its competitors.)
Dawkins’s way of speaking was not meant to suggest that memes are conscious actors, only that they are entities with interests that can be furthered by natural selection. Their interests are not our interests. “A meme,” Dennett says, “is an information packet with attitude.” When we speak of fighting for a principle or dying for an idea, we may be more literal than we know. “To die for an idea; it is unquestionably noble,” H. L. Mencken wrote. “But how much nobler it would be if men died for ideas that were true!”
Tinker, tailor, soldier, sailor . . . Rhyme and rhythm help people remember bits of text. Or: rhyme and rhythm help bits of text get remembered. Rhyme and rhythm are qualities that aid a meme’s survival, just as strength and speed aid an animal’s. Patterned language has an evolutionary advantage. Rhyme, rhythm, and reason—for reason, too, is a form of pattern. I was promised on a time to have reason for my rhyme; from that time unto this season, I received nor rhyme nor reason.
Like genes, memes have effects on the wide world beyond themselves: phenotypic effects. In some cases (the meme for making ﬁre; for wearing clothes; for the resurrection of Jesus) the effects can be powerful indeed. As they broadcast their inﬂuence on the world, memes thus inﬂuence the conditions affecting their own chances of survival. The meme or memes composing Morse code had strong positive feedback effects. “I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favour their continued replication,” wrote Dawkins. Some memes have evident beneﬁts for their human hosts (“look before you leap,” knowledge of CPR, belief in hand washing before cooking), but memetic success and genetic success are not the same. Memes can replicate with impressive virulence while leaving swaths of collateral damage—patent medicines and psychic surgery, astrology and satanism, racist myths, superstitions, and (a special case) computer viruses. In a way, these are the most interesting—the memes that thrive to their hosts’ detriment, such as the idea that suicide bombers will ﬁnd their reward in heaven.
When Dawkins ﬁrst ﬂoated the meme meme, Nicholas Humphrey, an evolutionary psychologist, said immediately that these entities should be considered “living structures, not just metaphorically but technically”:
When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell. And this isn’t just a way of talking—the meme for, say, “belief in life after death” is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.
Most early readers of The Selﬁsh Gene passed over memes as a fanciful afterthought, but the pioneering ethologist W. D. Hamilton, reviewing the book for Science, ventured this prediction:
Hard as this term may be to delimit—it surely must be harder than gene, which is bad enough—I suspect that it will soon be in common use by biologists and, one hopes, by philosophers, linguists, and others as well and that it may become absorbed as far as the word “gene” has been into everyday speech.
Memes could travel wordlessly even before language was born. Plain mimicry is enough to replicate knowledge—how to chip an arrowhead or start a ﬁre. Among animals, chimpanzees and gorillas are known to acquire behaviors by imitation. Some species of songbirds learn their songs, or at least song variants, after hearing them from neighboring birds (or, more recently, from ornithologists with audio players). Birds develop song repertoires and song dialects—in short, they exhibit a bird-song culture that predates human culture by eons. These special cases notwithstanding, for most of human history memes and language have gone hand in glove. (Clichés are memes.) Language serves as culture’s ﬁrst catalyst. It supersedes mere imitation, spreading knowledge by abstraction and encoding.
Perhaps the analogy with disease was inevitable. Before anyone understood anything of epidemiology, its language was applied to species of information. An emotion can be infectious, a tune catchy, a habit contagious. “From look to look, contagious through the crowd / The panic runs,” wrote the poet James Thomson in 1730. Lust, likewise, according to Milton: “Eve, whose eye darted contagious ﬁre.” But only in the new millennium, in the time of global electronic transmission, has the identiﬁcation become second nature. Ours is the age of virality: viral education, viral marketing, viral e-mail and video and networking. Researchers studying the Internet itself as a medium—crowdsourcing, collective attention, social networking, and resource allocation—employ not only the language but also the mathematical principles of epidemiology.
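The "mathematical principles of epidemiology" mentioned here can be made concrete with a toy model. The sketch below is purely illustrative (nothing in it comes from the book or from the research it describes): a discrete-time SIR model in which minds are susceptible to an idea, actively spreading it, or recovered (bored with it). The function name and every parameter value are assumptions chosen for the example.

```python
# Toy SIR ("susceptible / infected / recovered") dynamics, the kind of
# epidemic model researchers borrow when studying spreading memes.
def meme_sir(s, i, r, beta, gamma, steps, dt=0.1):
    """beta: transmission rate per contact; gamma: rate at which
    active spreaders lose interest and stop transmitting."""
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# One "patient zero" in a population of 10,000 minds.
hist = meme_sir(s=9999, i=1, r=0, beta=0.5, gamma=0.1, steps=1000)
peak_spreaders = max(i for _, i, _ in hist)
```

With these made-up numbers the idea's basic reproduction number is beta/gamma = 5, so a single carrier touches off a large outbreak that eventually burns itself out, the familiar rise-and-fall of a viral catchphrase.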
One of the ﬁrst to use the terms viral text and viral sentences seems to have been a reader of Dawkins named Stephen Walton of New York City, corresponding in 1981 with Douglas Hofstadter. Thinking logically—perhaps in the mode of a computer—Walton proposed simple self-replicating sentences along the lines of “Say me!” “Copy me!” and “If you copy me, I’ll grant you three wishes!” Hofstadter, then a columnist for Scientiﬁc American, found the term viral text itself to be even catchier.
Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!
(In the early 1980s, a magazine with a print circulation of 700,000 still seemed like a powerful communications platform.) Hofstadter gaily declared himself infected by the meme meme.
One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora. . . . Who’s in charge, according to this vision—we or our memes?”
He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought):
In the night when I cannot sleep, thoughts crowd into my mind. . . . Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them.
Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.
For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.” By 1989, the time of the second edition of The Selﬁsh Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.” Information was passing from one computer to another “when their owners pass ﬂoppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange. . . . It is a perfect milieu for self-replicating programs to ﬂourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium; it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread.
A notorious example of a meme that could not have emerged in pre-Internet culture was the phrase “jumped the shark.” Loopy self-reference characterized every phase of its existence. To jump the shark means to pass a peak of quality or popularity and begin an irreversible decline. The phrase was thought to have been used ﬁrst in 1985 by a college student named Sean J. Connolly, in reference to a certain television series. The origin of the phrase requires a certain amount of explanation without which it could not have been initially understood. Perhaps for that reason, there is no recorded usage until 1997, when Connolly’s roommate, Jon Hein, registered the domain name jumptheshark.com and created a web site devoted to its promotion. The web site soon featured a list of frequently asked questions:
Q. Did “jump the shark” originate from this web site, or did you create the site to capitalize on the phrase?
A. This site went up December 24, 1997 and gave birth to the phrase “jump the shark.” As the site continues to grow in popularity, the term has become more commonplace. The site is the chicken, the egg, and now a Catch-22.
It spread to more traditional media in the next year; Maureen Dowd devoted a column to explaining it in The New York Times in 2001; in 2003 the same newspaper’s “On Language” columnist, William Saﬁre, called it “the popular culture’s phrase of the year”; soon after that, people were using the phrase in speech and in print without self-consciousness— no quotation marks or explanation—and eventually, inevitably, various cultural observers asked, “Has ‘jump the shark’ jumped the shark?” (“Granted, Jump the Shark is a brilliant cultural concept. . . . But now the damn thing is everywhere.”) Like any good meme, it spawned mutations. The “jumping the shark” entry in Wikipedia advised in 2009, “See also: jumping the couch; nuking the fridge.”
Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written”; “Copy this in full and send to nine friends”; “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” [Revelation 22:19]. Chain letters flourished with the help of a new nineteenth-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early twentieth century.
“An unusual chain-letter reached Quincy during the latter part of 1933,” wrote a local Illinois historian. “So rapidly did the chain-letter fad develop symptoms of mass hysteria and spread throughout the United States, that by 1935–1936 the Post Ofﬁce Department, as well as agencies of public opinion, had to take a hand in suppressing the movement.” He provided a sample—a meme motivating its human carriers with promises and threats:
We trust in God. He supplies our needs.
Mrs. F. Streuzel . . . . . . . .Mich.
Mrs. A. Ford . . . . . . . . . .Chicago, Ill.
Mrs. K. Adkins . . . . . . . . Chicago, Ill. etc.
Copy the above names, omitting the ﬁrst. Add your name last. Mail it to ﬁve persons who you wish prosperity to. The chain was started by an American Colonel and must be mailed 24 hours after receiving it. This will bring prosperity within 9 days after mailing it.
Mrs. Sanford won $3,000. Mrs. Andres won $1,000.
Mrs. Howe who broke the chain lost everything she possessed.
The chain grows a deﬁnite power over the expected word.
DO NOT BREAK THE CHAIN.
Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995). One team of information scientists—Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada—inspired by a chance conversation on a hike in the Hong Kong mountains, began an analysis of a set of chain letters collected during the photocopier era. They had thirty-three, all variants of a single letter, with mutations in the form of misspellings, omissions, and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,” they reported.
Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”
Reaching beyond these appealing metaphors, they set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
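The intuition behind that test bed can be sketched in a few lines. What follows is an illustrative toy, not the researchers' actual algorithm: it treats chain-letter variants as "genomes," measures their kinship by Levenshtein edit distance (a standard proxy for mutational divergence in distance-based phylogeny methods), and ranks the pairs. The sample letters and their labels are invented for the demonstration.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

# Hypothetical variants mimicking copying mutations (misspellings, omissions).
letters = {
    "A": "copy this letter and send it to five friends within nine days",
    "B": "copy this leter and send it to five friends within nine days",   # misspelling of A
    "C": "copy this letter and send to five friends in nine days",         # omissions from A
    "D": "do not break the chain or you will lose everything you possess", # unrelated lineage
}

# Pairwise distances: texts separated by fewer mutations are, by
# hypothesis, closer kin on the evolutionary tree.
pairs = {(x, y): edit_distance(letters[x], letters[y])
         for x in letters for y in letters if x < y}

for (x, y), d in sorted(pairs.items(), key=lambda kv: kv[1]):
    print(f"{x}-{y}: distance {d}")
```

A real analysis like Bennett, Li, and Ma's would feed such a distance matrix into a tree-building algorithm (neighbor-joining, for instance) to recover the letters' inferred lineage; this sketch stops at the matrix itself.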
Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission, and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.
Genes at least have a grounding in physical substance. Memes are abstract, intangible, and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term meme could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism, and writing. “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”
Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades, and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatroepidemics”—epidemics caused by fashions in treatment—like the iatroepidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-twentieth century, with no more medical benefit than ritual circumcision. Memes were seen through car windows when yellow diamond-shaped baby on board signs appeared as if in an instant of mass panic in 1984, in the United States and then Europe and Japan, followed an instant later by a spawn of ironic mutations (baby i’m board, ex in trunk). Memes were felt when global discourse was dominated in the last year of the millennium by the belief that the world’s computers would stammer or choke when their internal clocks reached a special round number.
In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,” writes David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.” Nearing death, John Updike reflects on
A life poured into words—apparent waste
intended to preserve the thing consumed.
Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.” He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.
Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?