WHAT IS THINKING?
By the time the rainy days of the tropical autumn of 1984 arrived, most Brazilians had had enough. For twenty years, their beloved country had been ruled by a vicious dictatorship, brought to power by a military coup d'état that triumphed, emblematically, on April Fools' Day 1964. For the next two decades the military regime built an infamous legacy, marked primarily by its rampant incompetence, widespread corruption, and shameful political violence against its own people.
By 1979, thanks to the growing popular opposition to the regime, the latest four-star general installed in the presidential palace had no alternative but to grant amnesty to the political leaders, scientists, and intellectuals who had fled into exile abroad. A gradual, controlled return to civilian rule had been mapped out by the generals, beginning with popular elections for state governorships in the fall of 1982.
That November, the opposition parties won by a landslide. By the next year, however, that small token of democracy had been all but forgotten. Brazilians realized they had the right and, more importantly, the power to demand more than a dictator's political bread crumbs. They wanted to oust the military government, but not through another coup d'état. Instead, they wanted to vote it into retirement through a direct election for president. That is how, seemingly out of nowhere, a nationwide movement demanding immediate direct elections for president (diretas já in Portuguese) broke loose. The first rally took place in the tiny northeastern city of Abreu e Lima on March 31, 1983. By November, a somewhat shy crowd of ten thousand people had gathered to protest in Brazil's most populous and wealthy city, São Paulo. From that point, the movement grew exponentially. Two months later, on January 25, 1984, the day São Paulo celebrated its 430th anniversary, more than two hundred thousand people were chanting their collective demand for immediate presidential elections. In a matter of days, gigantic crowds started to converge on the main squares of Rio de Janeiro, Brasilia, and other major cities.
On the evening of April 16, 1984, more than one million people congregated in the heart of downtown São Paulo to participate in the largest political rally ever staged in the country's history. In a matter of hours, a river of people, most dressed in the Brazilian national colors, green and yellow, inundated the valley where the city was originally founded. Every new group of people that arrived immediately joined into an already familiar, two-word rhythmic chant that erupted briskly somewhere in the crowd, every minute or so, and spread like thunder through space: "Diretas já, diretas já" (Elections now, elections now). If you have never taken part in a chorale formed by one million people, I recommend the experience. Nothing can prepare you for its penetrating sound, and nothing this side of the Milky Way will allow you to forget it. It is the sort of sound that carves memories for a lifetime.
Pressed by the ever-increasing flow of people, I climbed to the roof of a newsstand and, for the first time that night, gained a panoramic view of the entire citizenry that was conquering São Paulo's Anhangabaú Valley with its two-word song. For the long-vanished Tupi-Guarani, the native Indian tribe that inhabited that land before the Portuguese arrived in the sixteenth century, the stream that had run through the valley was known as the "river of the bad spirits." Not anymore. That night, the only river visible was a mighty Amazon of people. No bad spirit would have dared to exert itself in such a purposeful human ocean.
"What do we want?" part of the crowd spontaneously asked.
"Diretas" (elections), the rest of us answered.
"Quando?" (when?) another group provoked.
"Já, já, já!" (now, now, now!) the whole crowd screamed back.
When that million-people choir began to sing the Brazilian anthem, not even the sky could hold its tears anymore. As the traditional São Paulo drizzle descended, I absorbed this resounding demonstration of what a population of individuals can do when they collaborate in harmony to achieve a common goal. Even though the message transmitted by the crowd (Diretas já!) was always the same, at any moment in time, a different combination of many voices was recruited to produce the crowd's roar. People weren't necessarily able to scream every time. Some were talking to their neighbors; others became temporarily hoarse, or were distracted while waving their flags; others simply dropped out of the chorale due to sheer emotion. Moreover, even as handfuls of people began to leave the rally later on, the crowd continued to thunder. For any observer, the loss of those few protesters did not make any difference at all—the overall potential population was so huge that the loss of a few people did not meaningfully alter the result.
Ultimately, the voices of those millions of Brazilians were heard. A few days later, I met with my mentor, Dr. César Timo-Iaria, to discuss a paper by David Hubel and Torsten Wiesel, who had shared the Nobel Prize in Physiology or Medicine in 1981 for their groundbreaking research on the visual cortex. Hubel and Wiesel had recorded the electrical activity of single neurons in the visual cortex, using the classical reductionist approach that was the norm in labs around the world at the time. I innocently asked Timo-Iaria why we did not do the same. His reply was as forceful as the roar that I had experienced as a member of the crowd in São Paulo: "We do not record from a single neuron, my son, for the same reason that the rally you attended a few days ago would be a disaster if, instead of one million people, only one person had showed up to protest," he said. "Do you think that anyone would pay attention to the plea of a single person screaming at a political rally? The same is true for the brain: it does not pay attention to the electrical screaming of a single noisy neuron. It needs many more of its cells singing together to decide what to do next."
Had I been more observant on that historic night in 1984, I may have understood that the dynamic social behavior of that thundering crowd had set before me most of the neurophysiological principles that I would obsessively investigate over the next quarter of a century. But instead of listening to a chorus of political protesters, I would be listening for the virtually unheard electrical symphonies created by large populations of neurons.
These neural ensembles would eventually provide the means for liberating a primate brain from its biological body. But in the mid-1980s, very few neuroscientists saw any reason to relinquish the reductionist experimental paradigm and its focus on single neurons. Perhaps this was because other scientific fields, including particle physics and molecular biology, had experienced extraordinary success with reductionism; in particle physics, for instance, the theory and ultimate discovery of smaller and smaller particles, such as quarks, proved to be a linchpin in the definition of the so-called standard model, which continues to be the basis of our understanding of the physical universe.
Roughly speaking, in mainstream twentieth-century neuroscience, the reductionist approach meant breaking the brain into individual regions that contained a high density of neurons, known as brain areas or nuclei, and then studying the individual neurons and their connections within and between each of these structures, one at a time and in great detail. It was hoped that once a large enough number of these neurons and their connections had been analyzed exhaustively, the accumulated information would explain how the central nervous system works as a whole. Allegiance to reductionism led most neuroscientists to dedicate their entire careers to describing the anatomical, physiological, biochemical, pharmacological, and molecular organization of individual neurons and their structural components. This painstaking and wonderful collective effort generated a tremendous wealth of data from which many outstanding discoveries and breakthroughs resulted. With the unfair benefit of hindsight, today one could argue that neuroscientists were trying to decipher the workings of the brain in the same way as an ecologist would attempt to study the physiology of a single tree at a time in order to understand the rain forest ecosystem, or an economist would monitor a single stock to predict the stock market, or a military dictator would try to arrest a single protester at a time to reduce the effectiveness of a million-strong Brazilian chorus chanting diretas já in 1984.
For an observer who benefits today from the century-old work of the true giants of brain research, it seems that what much of neuroscience still lacks is an experimental paradigm for dealing with the complexity of brain circuits. Today, systems formed by large numbers of interacting elements—things like a political movement, the global financial market, the Internet, the immune system, the planet's climate, and an ant colony—are known as complex systems whose most fundamental properties tend to emerge through the collective interactions of many elements. Typically, such complex systems do not reveal their intimate collective secrets when approached by the reductionist method. With its billions of interconnected neurons, whose interactions change from millisecond to millisecond, the human brain is an archetypal complex system.
Part of the neglect toward exploring the complexity of the brain could be justified by the tremendous experimental challenges involved in "listening" simultaneously to the electrical signals produced by large numbers of individual neurons, distributed across multiple brain areas, in a behaving animal. For example, at the time Brazilian crowds were fighting for presidential elections, no one in the neuroscience community was sure what type of sensor could be implanted in the brains of animals so that many of these minute neuronal electrical signals could be sampled simultaneously for many days or weeks, while subjects performed a variety of behavioral tasks. Moreover, there was no electronic hardware or sufficiently powerful computer available that neurophysiologists could readily utilize to filter, amplify, display, and store the electrical activity generated by tens of individual neurons simultaneously. Neurophysiologists wondered, almost in despair, how they might choose which neurons to record from in each brain structure. Worst of all, nobody had any idea how to analyze the huge mountain of neurophysiological data that would be generated if these technical bottlenecks could somehow be solved.
Paradoxically, few neuroscientists ever doubted that the astonishing feats accomplished by the human mind—from the production of artificial tools to the generation of self-awareness and consciousness—arise from the brain's huge number of neurons combined with their intricate pattern of massive parallel connectivity. But for decades, any attempt to tackle the technical hurdles to listen to brain symphonies was dismissed as a chimera, a high-tech experimental utopia that might only be realized through an effort on the scale of the Manhattan Project.
Essentially, all expressions of human nature ever produced, from a caveman's paintings to Mozart's symphonies and Einstein's view of the universe, emerge from the same source: the relentless dynamic toil of large populations of interconnected neurons. Not one of the numerous complex behaviors that are vital for the survival and prosperity of our species—or, for that matter, of our close and distant cousins, primates and mammals—can be enacted by the action of a single neuron, no matter how special this individual cell may be. Thus, despite the great deal we have learned about how single neurons look and function, and despite the innumerable scientific achievements of brain research over the past century, the straightforward application of reductionism to brain research has proven to be an insufficient and ill-suited strategy for delivering the field's most cherished promise, a comprehensive theory of thinking.
All this means that the traditional and well-disseminated view of the brain, the one espoused in artful prose and beautiful illustrations in most of the neuroscience textbooks, can no longer stand. In much the same way that Einstein's theory of relativity revolutionized the classic view of the universe, the traditional single neuron–based theory of brain function needs to be categorically replaced by what amounts to a relativistic view of the mind.
The first step in proposing any new scientific theory is to define a proper level of analysis for investigating phenomena and testing one's hypothesis about them. This allows for validating or falsifying the proposed theory—the essence of the scientific method. I contend that the most appropriate approach to understanding thinking is to investigate the physiological principles that underlie the dynamic interactions of large distributed populations of neurons that define a brain circuit (see Fig. 1.1). Neurons transmit information to one another through long, projecting structures—their axons—which make discrete, noncontinuous contact (the synapse) with nerve cell bodies and their protoplasmic, treelike structures, called dendrites. In my view, while the single neuron is the basic anatomical and information-processing unit of the brain, it is not capable of generating behaviors and, ultimately, thinking. Instead, the true functional unit of the central nervous system is a population of neurons, also known as a neural ensemble or cell assembly. Such a functional arrangement, in which populations of neurons rather than single cells account for the information needed for the generation of behaviors, is also commonly referred to as distributed neuronal coding.
FIGURE 1.1 The architecture of a neural circuit. Reproduction of a Ramón y Cajal original drawing showing a neural circuit formed by many neurons. One single neuron and its cellular specializations are highlighted. In general, dendrites serve as the main neuronal specialization receiving synapses from other neurons. Axon terminals establish the neuron's synapses with other brain cells. (Cajal's drawing from "Histology of the Nervous System" was reproduced with permission of the Cajal Legacy, Instituto Cajal [CSIC], Madrid, Spain.)
Thinking with populations of neurons! Even two of humanity's most intimate possessions—a sense of self and a body image—are fluid, highly modifiable creations of the brain's mischievous deployment of electricity and a handful of chemicals. They both can change or be changed on less than a second's notice. And, as we will see, they do.
During the first half of the twentieth century, so-called single-neuron neurophysiologists argued, with seemingly incontrovertible evidence, that after sensory information was sampled from the external world through specialized receptors—the skin, retina, inner ear, nose, and tongue—it ascended through specific sensory nerve pathways that terminated in specific cortical areas. These areas were identified as the primary sites for processing sensory information in the cortex, with the somatosensory (tactile), visual, and auditory areas gaining particular prominence. During the same period, however, an American psychologist, Karl Lashley, emerged as the poster boy for the opposition: the distributionist camp. Lashley's main obsession was to identify the location in which the brain stores a memory, which he called the engram. In his experiments, he would surgically remove cortical tissue from various areas of the brains of rats, monkeys, and apes, both before and after the animals had been taught to perform particular behaviors, which ranged from simple tasks (learning how to identify a particular object visually and then jump or reach for it) to complex problem solving (learning how to navigate an elaborate maze). After an animal was trained, he measured the impact of the cortical lesions he had created on the animal's capacity to acquire or retain the behavioral skill, or habit, it had learned. With this experimental process, he aimed to understand how associations were built between sensory information and motor behavior.
According to Lashley, after animals had been trained in a simple task, much of the remaining cortex could be removed without significantly affecting the animal's behavioral performance—provided that some volume of the primary sensory cortex involved in the task was left intact. In fact, if just one-sixtieth of the primary visual cortex remained, the animal would retain a visual-motor habit it had learned. Faced with simple tasks, the brain was amazingly resilient in handling sensory information. In his classic article, "In Search of the Engram," Lashley summarized his results as the "principle of equipotentiality," whereby memory traces were distributed throughout the sensory area, not stored in a specific neuron or small group of neurons.
Yet Lashley also found that the brain was less able to recover from damage when faced with more complex behavioral tasks. An animal would begin to make task errors with even a small number of lesions, and the number of errors produced was proportional to the cortical mass removed surgically. Once 50 percent or more of the neocortex had been removed, the animal began to lose the learned habit altogether, requiring extensive retraining. Based on these findings, Lashley proposed a second principle of memory, the "mass action effect," which stated that "some physiological mode of organization or integrating activity is affected rather than specific associative bonds." Complex problem solving became "disordered" when parts of the cortex were taken out of commission.
Many neuroscientists criticized Lashley's conclusions. Even today, simply mentioning his name in a scientific talk invariably triggers all-knowing chortles of derision. Most of the scientific blowback was leveled at his experimental approach, particularly in trying to create brain lesions and then correlating them with too simplistic or too complex tasks. Still, Lashley was able to show that there was more going on in the primary sensory cortices than most neuroscientists were willing to acknowledge.
Usually, academic battles turn out to be so bloody because their stakes are so miserably low. Not in this case. Defining the functional unit of the brain is a solemn endeavor. After all, this quest aims at pinpointing exactly which piece of organic matter decides, on your behalf, where your body starts and ends, what it feels like to be human, what deeper beliefs you hold, and how your children and the children of your children will one day remember who you were and what became of your legacy as a human being. Few human enterprises come close in relevance and drama to the ongoing search for the true reasons that make each of us feel so irrevocably different and unique, and yet so strikingly similar to our kin.
A simple analogy helps to clarify the distinction between the two competing views of brain function I am presenting here. Consider the role played by musicians in a symphony orchestra. If you had tickets to hear a concert and arrived to find that just one bassoonist had shown up, you would be rather disappointed at the end of the night: no matter how proficient the musician had been, and how much effort he or she put into the performance, you would not be able to imagine the symphony's full score—not even if, instead of a bassoonist, the glorious violinist Anne-Sophie Mutter or the electrifying pianist Maria João Pires were onstage. You would only get an appreciation of the entire musical tapestry of the symphony if a significant number of musicians performed together, simultaneously. For the distributionists, when the brain creates a complex message or task using a large number of neurons, it is composing a type of symphony.
A neuronal concerto.
Coding a complex neuronal message or task into a large number of small, individual fragments or actions is similar to the work of an orchestra—each fragment helps to create the meaningful whole, like the million human voices singing "diretas já" that, with its sheer power, dethroned a dictator. This sort of distributed message strategy is often found in nature.
Distributed strategies are present in many aspects of our daily lives. For instance, the production of complex phenotypical traits—how our genetic makeup is expressed in, say, our physical appearance—often relies on the concurrent expression of many genes distributed across an array of chromosomes. Another natural distributed strategy involves multiprotein complexes, which operate within individual cells to perform a variety of functions ranging from DNA translation and repair to the release of chemicals, known as neurotransmitters, by neuronal synapses. Each protein is responsible for one specific subtask and many proteins may interact together to achieve a rather complex operation. For instance, different protein complexes, embedded in the width of the lipid membrane of a single neuron, form a variety of membrane ion channels. Each ion channel works like a tunnel through the membrane. When this tunnel opens, a particular ion (sodium, potassium, chloride, or calcium) can enter or exit the cell. Multiple ion channels cooperate to maintain or alter the electrical membrane potential of a single neuron. A single ion channel cannot regulate this process, just like a single neuron cannot produce a meaningful behavior. Instead, a population of diverse ion channels is needed for neuronal cell membranes to work properly.
Distributed strategies work at higher levels, too. For example, African lions usually hunt in packs, particularly when they want to capture large prey, such as a seemingly vulnerable elephant drinking at a water hole. This pack approach ensures that if one of the lions is killed by the elephant, the rest of the pack still has a chance to get that valuable elephant steak tartare by the end of the night. Conversely, some of the most preyed-upon species usually defend themselves from potential predators by gathering into dense groups when they roam in search of food. Thus, flocks of migratory birds crossing the thin air of the Himalayas, schools of fish navigating the glassy green shallows of the Caribbean, and herds of capybaras, South American rodents weighing more than one hundred pounds with menacing front teeth but little else to defend them, each rely on distributed strategies for protection against predators. By increasing the density of the group of individuals traveling together, they divide the attention of their nemesis and significantly reduce the probability that a given individual will be caught. By doing so, the chance of perpetuating the group as a whole increases—a distributed strategy of risk management.
Does this approach to handling risk sound familiar? When financial managers advise you to diversify your portfolio, spreading your investments across a large number of companies representing multiple sectors of the economy, they are proposing exactly this sort of distributed strategy, without the capybara's menacing teeth. Even the most influential technology of our time, the Internet, relies on distributed computer grids to fulfill our apparently limitless thirst for information. No single computer controls the flow of bits and bytes across the whole system, and there's no single cable connecting your computer to Google's headquarters when you type in a request to find a Web page on a particular subject. Rather, huge numbers of interconnected machines very quickly route your Google search to one of the company's many computer servers in Mountain View, California. If one of these machines goes bonkers, no problem; the remaining computer network ensures that your query is not lost.
But why do distributed strategies work so well? Why, from proteins to packs of capybaras, does it make sense to rely on large and distributed populations of individual elements? To answer this fundamental question, let's return to the brain and examine the advantages of such a population-coding scheme for thinking.
By distributing thought across a large population of neurons, evolution has designed an insurance policy for the brain. In most cases, people do not lose an important brain function when a single neuron or a small chunk of brain tissue is damaged due to a localized trauma or a minor stroke. Indeed, because of distributed coding, a great deal of brain damage has to occur before patients exhibit any clinical signs and symptoms of neurological dysfunction. Conversely, imagine the risks you would incur if only one of the neurons in your entire brain was in charge of conveying a key aspect of your life, say, the name of your favorite Brazilian soccer team. Lose that neuron, and that information would be forever lost. Yet, throughout our adult lives, individual neurons continuously die without any major side effects. The fact that we almost never notice functional or behavioral effects, though these minuscule neuronal tragedies take place every day, speaks volumes in favor of distributed coding in the brain. Neuron populations are highly adaptive, or plastic; when damaged or dead neurons need to be bypassed, the remaining ones can self-reorganize, changing their physiological, morphological, and connectivity makeup when repetitively exposed to tasks and environments. As my friend Rodney Douglas, of the University of Zurich, recently noted, the brain truly works like an orchestra, but a unique one, in which the music it produces can almost instantaneously modify the configuration of its players and instruments and self-compose a whole new melody from this process.
Evolution may also have favored distributed population coding because it is far more efficient at delivering many complex messages than single-neuron coding. Let's take a simple example. Suppose that a single neuron can convey, or in neuroscience jargon, represent, two distinct messages by flipping between two frequencies of electrical firing, either very rapid firing or very slow firing. If just one neuron was devoted to detecting images in the visual field of an animal, the animal's brain would only be able to respond to two distinct images—firing at a rapid rate when one of the images was detected and at a slow rate for the other. Any other images would not be discernible by the single neuron at that same moment. Now suppose that one hundred different neurons were allocated to perform the same job. The number of distinct images that could be detected with the same two firing states would jump to 2¹⁰⁰.
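For readers who want to verify the arithmetic, the counting argument above can be sketched in a few lines of Python. This is a toy illustration of the combinatorics, not a biological model, and the function name is mine, purely for illustration:

```python
def distinct_patterns(n_neurons: int, states_per_neuron: int = 2) -> int:
    """Number of distinct joint firing patterns a population can adopt,
    assuming each neuron independently occupies one of a fixed number
    of firing states ("very rapid" or "very slow" in the example)."""
    return states_per_neuron ** n_neurons

# A single binary neuron can represent only two messages.
print(distinct_patterns(1))    # → 2

# A population of one hundred binary neurons can, in principle,
# represent 2**100 distinct patterns, about 1.27 x 10**30.
print(distinct_patterns(100))
```

Each neuron contributes only one binary choice, yet the representational capacity of the ensemble grows exponentially with population size, which is the heart of the efficiency argument.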
In addition to this dramatic increase in computational power and memory, distributed coding in the brain relies on massive parallel information processing. Single neurons are capable of establishing an incredible number of connections by giving rise to axon processes that branch and reach many other different neurons simultaneously. This intricate mesh of neuronal connections can achieve wondrous things. For example, as part of my doctoral thesis, I created a simple computer program that could store, in a square matrix format, the direct connections shared between the pairs of brain areas and nuclei that form the circuit responsible for controlling cardiovascular functions. I then selected the most important forty brain structures that defined this circuit and identified which of the related forty nuclei were directly connected by a bundle of axons or nerves that uses only one synapse, called a monosynaptic pathway. In my computer program's forty-by-forty matrix, rows indicated the brain structures from which neurons gave rise to such monosynaptic pathways; the columns indicated the structures that received them. If structure number 4 had neurons that sent direct axonal projections to structure number 38, I noted "1" in the respective matrix position (intersection of row 4 and column 38). If neurons belonging to structure 38 reciprocated this connection and sent axons back to structure 4, another "1" was added to the intersection of row 38 and column 4. If there was no direct connection between a given pair (for instance, between nuclei 5 and 24), a "0" was added to the respective matrix position (see a reduced example in Fig. 1.2). Having gone through the trouble of building such a detailed matrix of direct, monosynaptic connections, I decided to ask a very simple question: given all the known pairwise connectivity of the circuit, how many neural pathways existed that could connect any pair in this circuit that did not have a direct monosynaptic connection? 
In other words, was there any way for information to flow between two unconnected pairs in the circuit? With that question in mind, I set a series of twenty IBM-PC XT microcomputers to run my program, hoping to get an answer. Each of these computers was supposed to seek potential multisynaptic pathways linking one of twenty distinct pairs of brain structures that did not share a direct monosynaptic pathway. At the end of this search, each computer would then print out the potential pathways in a list and a summary graph. I then headed out for a five-day holiday, to celebrate that most sacred of Brazilian religious events, Carnival.
Imagine my shock when, upon returning to the lab, I found that piles and piles of printouts had been generated by half of the computers. The programs running on those ten computers had identified thousands of potential multisynaptic pathways for connecting pairs of structures that did not talk directly to each other (Fig. 1.2). More surprising, of the other ten computers, some had not yet finished printing the potential pathways, while others had simply run out of paper. Even with just a handful of direct pairwise nuclei connections, there were hundreds of thousands or even millions of potential pathways for exchanging information between pairs of brain structures that did not share a monosynaptic connection.
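The spirit of that Carnival-week computation can be sketched in a few lines of modern Python. The five-structure circuit below is invented for illustration; it is not the forty-structure cardiovascular circuit described in the text, and the pathway search is a simple depth-first enumeration rather than a reconstruction of the original program:

```python
# adjacency[i][j] == 1 means structure i sends a direct (monosynaptic)
# projection to structure j, exactly as in the row/column convention above.
adjacency = [
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0],
]

def count_pathways(adj, source, target):
    """Count every pathway from source to target that visits no
    structure twice, by depth-first enumeration of the graph."""
    n = len(adj)

    def dfs(node, visited):
        if node == target:
            return 1
        total = 0
        for nxt in range(n):
            if adj[node][nxt] and nxt not in visited:
                total += dfs(nxt, visited | {nxt})
        return total

    return dfs(source, {source})

# Structures 0 and 3 share no direct link (adjacency[0][3] == 0),
# yet several multisynaptic pathways connect them.
print(count_pathways(adjacency, 0, 3))  # → 3
```

Even in this toy circuit, a pair of structures with no direct link is joined by several indirect routes; with forty structures and far denser pairwise connectivity, the count explodes into the hundreds of thousands or millions of pathways reported above.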
By relying on large populations of interconnected neurons and massive parallel processing to encode information, advanced brains like ours become dynamic systems in which the whole becomes larger than the sum of its individual parts. That happens because the overall dynamic interactions of the network can generate complex global patterns of activity, known as emergent properties, which cannot be predicted from the outset by the linear sum of the individual features of individual elements. Such extreme nonlinear behavior enhances dramatically the physiological and behavioral outcomes that can emerge from the neural networks of the brain. Distributed networks formed by millions or even billions of neurons generate emergent properties such as brain oscillations, complex rhythmic firing patterns that underlie a variety of normal and pathological functions including certain states of sleep and epileptic seizures. Emergent brain properties also generate highly elaborate and complex brain functions, such as perception, motor control, dreaming, and a person's sense of self. Our very consciousness, arguably the most awesome endowment known to us, likely arises as an emergent property of a multitude of dynamically interacting neuronal circuits of the human brain.
FIGURE 1.2 Use of graph theory to study the distribution of pathways linking pairs of neurons. On top, a square matrix is used to represent the direct, monosynaptic connectivity of a small brain circuit. In this matrix, 1 represents the existence of a direct connection between a pair of brain structures, while 0 depicts its absence. Next to the matrix, a graph is used to represent the circuit. Circles with numbers represent the structures and directional arrows represent the direct connectivity information contained in the square matrix. The histogram below depicts the total number of pathways linking two structures (carotid baroreceptor and cerebellum) that did not share a direct, monosynaptic connection. The X axis represents the number of synapses of the pathways and the Y axis depicts the number of pathways found. Notice that millions of pathways were found for this particular example. (Courtesy of Dr. Miguel Nicolelis; redrawn by Dr. Nathan Fitzsimmons, Duke University.)
But the new view of the brain I propose involves much more than a simple shift in emphasis from a single neuron to populations of connected brain cells. Up to now, most neurophysiological theories have consistently ignored the fact that highly elaborate brains do not sit tight and wait for things to happen. Instead, these brains take the initiative and actively gather information about the body in which they are embedded and its surrounding world, tirelessly and diligently sewing the cloth of reality, opinions, loves, and, I am afraid, even prejudices that we proudly, and sometimes blindly, wear every millisecond of our lives, blissfully unaware of where it all comes from. This active information-seeking maintains what I call the "brain's own point of view": the combination of the brain's accumulated evolutionary and individual life history, its global dynamic state at a given moment in time, and its internal representation of the body and the external world. All these components, which comprise our most intimate mental existence, merge into a comprehensive and exquisitely detailed rendition of reality.
The "brain's own point of view" influences decisively the way we perceive not only the complex world around us, but also our body image and our sense of being. The Cartesian assumption, which posits that our brains passively interpret or decode signals coming from the outside world, without any preconceived viewpoint attached to them, can no longer stand up to the experimental evidence. In fact, to fulfill its enormous scientific potential—from unveiling the intricate physiological principles that govern the operation of the human brain to developing brain-machine interfaces capable of both rehabilitating patients devastated by neurological diseases and greatly augmenting human reach—mainstream neuroscience must divest itself of its twentieth-century dogma and wholeheartedly embrace this new view.
In his masterpiece, The Organization of Behavior, published in 1949, the Canadian psychologist Donald O. Hebb promoted the concept that cell assemblies are the true functional unit of the nervous system. A student of Lashley's, Hebb also postulated that no "single nerve cell or pathway [is] essential to any habit or perception." He also pointed out that "electrophysiology of the central nervous system indicates … that the brain is continuously active, in all its parts. An afferent [incoming] excitation [from the body's periphery] must be superimposed on an already existent excitation [inside the brain]. It is therefore impossible that the consequence of a sensory event should often be uninfluenced by the pre-existent [brain] activity."
I propose that the brain's work results from the dynamic interplay of billions of individual neurons that create a continuum in which neuronal space and time seamlessly combine. In a fully behaving animal, as Hebb proposed, no incoming sensory stimulus is processed without first being compared against the brain's internal predispositions and expectations, arduously built through the collection of signals and memories from previous encounters with similar, and even not so similar, stimuli earlier in life. The diffuse electrical response evoked in the brain of a conscious subject when a novel message arrives from the periphery of the body seems to depend heavily on the internal state of the brain at that particular moment in time. Thus, while the constancy of the velocity of light determines why space and time have to be relativized in relation to the state of motion of a pair of observers in the universe, I contend that evolutionary and individual history, the fixed maximum amount of energy a brain can consume, and the maximum rate of neuronal firing offer the constraints that require an equivalent relativization of space and time within our heads.
Most information about the world and our body comes to the brain as a result of exploratory actions initiated by the brain itself. Perception is an active process that starts inside our heads and not at a peripheral site on the body with which the outside world happens to come in contact. Through a variety of exploratory behaviors, the brain continuously tests its own point of view against the new information it encounters. Even though we routinely experience the "feeling" on our fingertips of such tactile attributes as texture, shape, and temperature, in reality these sensations are skillful illusions crafted by the brain—"felt" during the split second in which our fingertips make contact with an object and collect and transmit the sensory data via the nerves back to the brain. If the feeling doesn't match the brain's expectations, it will correct the mismatch by creating a moment of surprise and discomfort, like the one that emerges when you reach into a package of bread and drop the slice when you find it is wet and slippery rather than dry and crumbly. The same process takes place during our elaborate, simultaneous visual, auditory, olfactory, and gustatory "experiences" of the world. All these indisputable human traits are borne by massive electrical brainstorms, which we usually refer to, more colloquially, as the act of thinking.
But could we push the definition of thinking any further? I believe so. I propose that the brain is actually the most awesome simulator to evolve in the known universe, at least as far as we can independently verify. Like faithful and patient modelers of reality, our brains are mainly in the business of producing a large variety of behaviors that are vital for our existence as human beings. In essence, this business boils down to:
(a) maintaining our bodies in working order through the global physiological process called homeostasis;
(b) building and storing very detailed models of the external world, of our lives, and of the continuous encounters between the two; and
(c) actively and continuously exploring the surrounding environment in search of new information to test and update these internal models. This includes learning from experience and predicting future events and their payoffs by generating potential expectations for their outcomes, costs, and benefits.
This short list covers most of the basic functions of the central nervous system.
By definition, a good simulation or model allows its user to continuously analyze and monitor all sorts of events in order to predict future outcomes. Neurophysiologists have spent a great deal of time investigating how the brain maintains the body's homeostasis, and in recent decades there has been an explosion of research into how the brain encodes sensory, motor, and cognitive information. But for the most part, because of the difficulty of studying these phenomena experimentally, they have avoided the highly complex behaviors involved in building and nurturing a model of the world: the pervasive and primordial human longing to create a detailed account, no matter how mystical and abstract, of how the universe was created, how humanity emerged, and why we have the gift of life in this otherwise mundane solar system. Often, these longings are left to the realm of religious inquiry. But these same complex behaviors also endow humans with an ardent curiosity, a key and unique trait of our species, which has led to the emergence of art as well as scientific thinking. Complex behaviors also encompass the elaborate social and courtship strategies employed by humans to achieve the evolutionary goal of transmitting genes to future generations, as well as our continuous attempts to imprint our ideas, dreams, beliefs, fears, and passions into the memories of our loved ones, friends, and other members of our species.
By now you may be thinking that the theoretical shift I am proposing is no big deal. Yet this issue has played a central role in the ongoing theoretical battle over the brain's soul that has engulfed neuroscience for the past two hundred years. And as it happens, the notion of the brain as a model builder has found significant support outside the neuroscience community. In his classic book, The Selfish Gene, the British evolutionary biologist Richard Dawkins espouses the view that the brain, particularly that of humans, has evolved the enormously advantageous capacity of creating very elaborate simulations of reality. The physicist David Deutsch goes even further by proposing, in his book The Fabric of Reality, that all "we experience directly is a virtual-reality rendering, conveniently generated for us by our unconscious mind from sensory data plus complex inborn and acquired theories (i.e. programs) about how to interpret them."
In the first paragraph of his masterpiece book, Cosmos, Carl Sagan muses, "The Cosmos is all that is or ever was or ever will be. Our feeblest contemplations of the Cosmos stir us—there is a tingling in the spine, a catch in the voice, a faint sensation, as if a distant memory, of falling from a height. We know we are approaching the greatest of mysteries."
As far as we know, there is only one offspring of this awesome cosmos capable of deciphering its majestic language while producing a reel of luxurious sensations that our true parents, faraway, long-dead supernovas, never had the privilege to enjoy. While these progenitors burned away, unaware that their stardust would one day blow the breath of life on a small bluish planet revolving around an average star located in a remote corner of a distant galaxy, our brains enable us to lustily consume every bit of our conscious existence while silently carving in our minds the many intimate tales of a lifetime.
Thus, if ever there has been a scientific battle worth fighting, it is the one in which neuroscientists have embroiled themselves for the past two centuries. And if you asked me to take a side, I would not hesitate a millisecond to say that, at the end of this intellectual brawl, as Brazilians proved twenty-five years ago, those siding with the inebriant plea generated by another huge crowd, one formed by billions of interconnected neurons, shall prevail.
Excerpted from Beyond Boundaries by Miguel Nicolelis
Copyright 2011 by Miguel Nicolelis
Published in 2011 by Times Books, an imprint of Henry Holt and Company
All rights reserved. This work is protected under copyright laws and reproduction is strictly prohibited. Permission to reproduce the material in any manner or medium must be secured from the Publisher.