The New York Times
Mind Wide Open: Your Brain and the Neuroscience of Everyday Life by Steven Johnson, Alan Sklar
In this nationally bestselling, compulsively readable account of what makes brain science a vital component of people's quest to know themselves, acclaimed science writer Steven Johnson subjects his own brain to a battery of tests to find out what's really going on inside. He asks:
• How do we "read" other people?
• What is the neurochemistry behind love and sex?
• What does it mean that the brain is teeming with powerful chemicals closely related to recreational drugs?
• Why does music move us to tears?
• Where do breakthrough ideas come from?
Johnson answers these and many more questions arising from the events of our everyday lives. You do not have to be a neuroscientist to wonder, for example: Why do you smile? And why do you sometimes smile inappropriately, even when you don't want to? How do others read your inappropriate smile? How does such interplay occur neurochemically, and what, if anything, can you do about it?
Fascinating and rewarding, Mind Wide Open speaks to brain buffs, self-obsessed neurotics, barstool psychologists, mystified parents, grumpy spouses, exasperated managers, and anyone who enjoys speculating and gossiping about the motivations and behaviors of other human beings. Steven Johnson shows us the transformative power of understanding brain science and offers new modes of introspection and tools for better parenting, better relationships, and better living.
"Celebrates the brain's complexity and wonder even as it demonstrates that you can get to know your mind better than you ever thought."
- Steven Pinker, author of The Blank Slate and How the Mind Works
- Tantor Media, Inc.
- Publication date:
- Edition description: Unabridged, 7 CDs, 8 hours
- Product dimensions: 6.30(w) x 5.56(h) x 0.67(d)
Read an Excerpt
"He that has eyes to see and ears to hear may convince himself that no mortal can keep a secret. If his lips are silent, he chatters with his fingertips; betrayal oozes out of him at every pore."
I'm gazing into a pair of eyes, scanning the arch of the brow, the hooded lids, trying to gauge whether they're signaling defiance or panic. Just a pair of eyes -- no mouth or torso, no hand gestures or vocal inflections. All I have to go on is a rectangular photo of two eyes staring at me from a computer screen. When I've made my judgment -- it's defiance, after all -- another set pops up on the screen, and I start my examination all over again.
This reverse eye exam is part of an ingenious psychological test devised by the British psychologist Simon Baron-Cohen. The test presents you with thirty-six different sets of eyes, some crinkled in mirth, others gazing off to the horizon deep in thought. Below each image are four adjectives to choose from.
It's your job to choose the adjective that best fits the image. Is that raised eyebrow a sign of doubt? Or is it rebuke? The eyes themselves are a demographic mix: some weathered and ancient, others accented with mascara and eyeliner. The subtlety of the expressions is astonishing; as I scroll from image to image, I'm seeing the human eye with a fresh perspective, feeling a newfound amazement at its communicative range.
This test, though, is not ultimately about the eye's capacity to signal emotion. It's about something just as impressive, and just as easily overlooked: the brain's ability to read those signals, to peer into the inner landscape of another mind, while relying only on the most transient of cues. You won't find exam questions like these on the IQ test, or the SATs, but the mental skills being measured here are as essential as any in our cognitive toolbox. It turns out that one of the human brain's greatest evolutionary achievements is its ability to model the mental events occurring in other brains.
Chances are you've had an experience roughly like this: you're at a social gathering with colleagues or peers -- say it's an office holiday party -- and you run into a coworker with whom you have an unspoken rivalry. It's one of those relationships that is chummy on the surface, but right beneath there's a competitive energy that neither side acknowledges. When you first encounter your colleague, there's the usual pleasant banter, but before long he's confessed to you that something has gone wrong with his career trajectory: either he's lost a big account at work or the fellowship didn't come through or the last batch of short stories got rejected. Whatever it is, it's bad news. It's the sort of news that a friend should perhaps greet with a concerned, doleful expression, which is exactly the expression that you deliberately contort your face into as he delivers the news.
The trouble is, you're only a friend on the surface. Below the surface, you're a rival, and a rival wants to grin at this news, wants to relish the schadenfreude. And so for a split second, as you're hearing the fateful syllables roll off his tongue, his tone foreshadowing his disappointment before the sentence is even complete, you let out the slightest hint of a grin.
And then an intricate dance begins. As your face wraps itself up in dutiful concern, you detect a flash of something in his face, a momentary startle that says, "Were you just smiling right there?" Perhaps his eyes suddenly lock on to your pupils, or he pauses in midsentence as though something has distracted him. In your mind, an interior closed-captioning emerges: "Did he see that grin?" As you offer your condolences, you can't help wondering if your words sound cruel rather than comforting. "Is he thinking that I'm faking all this sympathy? Maybe I should tone it down a notch just in case."
The silent duet of those two internal monologues should be familiar to you, even if you're the sort of person who never, ever gloats at another's downfall. (Henry James made a literary career out of documenting these subtle interactions.) It needn't be a Cheshire cat grin that provokes the interior monologues: imagine a conversation between two potential lovers, in which one worries that a facial expression has betrayed his love before he has summoned the courage to make a formal declaration. Sometimes the closed-captioning can overshadow the main dialogue, which can make for stilted conversation, with each participant second-guessing the other's thoughts.
This silent conversation -- a passing grin, a sudden look of recognition, a lurking question about another's motivation -- comes so naturally to us that most of the time we're not even aware that we are locked into such a complex exchange. The internal duet comes naturally because it relies on parts of the brain that specialize in precisely this kind of social interaction. Neuroscientists refer to this phenomenon as "mindreading" -- not in the ESP sense, but rather in the more prosaic, but no less impressive, sense of building an educated guess about what someone else is thinking. Mindreading is literally part of our nature. We do it more effortlessly, and with more nuance, than any other species on the planet. We construct working hypotheses about what's going on in other people's heads almost as readily as we convert oxygen into carbon dioxide.
Because mindreading is part of our nature, we don't bother to teach it in schools or test our aptitude for it in placement exams. But it is a skill like any other, a skill that is unevenly distributed throughout the general population. Some people are deft mindreaders, picking up subtle intonational shifts and adjusting their response with imperceptible ease. Others mindread with the subtlety of a Mack truck, constantly second-guessing themselves or interrogating their conversational partners. Some are simply "mindblind," shut off entirely from other people's internal monologues.
Even though we don't teach this particular skill in school, and we barely have a vocabulary to describe it, our mindreading abilities play a key role in our work and relationship successes, our sense of humor, our social ease. But to understand these consequences, you have to stop taking the internal duet for granted. You have to slow it down, explore its underlying processes, recognize the duet for the marvel that it is.
Our growing appreciation for the art of mindreading was accelerated in the late 1990s by the discovery of "mirror neurons" in the brains of monkeys, neurons that fire both when a monkey does a particular task -- grabbing a branch, for example -- and when the monkey sees another monkey do that same task, suggesting that the brain is designed to draw analogies between our own mental and physical states and those of other individuals. At the same time, researchers explored the premise that autistic people suffer from a kind of mindblindness, preventing them from building hypotheses about others' internal monologues. In related studies, evolutionary psychologists began to think about the Darwinian rewards of mindreading in a social species, examining chimp populations for signs of comparable internal duets. Yet other scientists speculated on the connection between mirror neurons and the origins of language, since all forms of communication presuppose a working model of the object you're attempting to communicate with. For language to evolve, humans needed a viable theory about the minds of other people -- otherwise, they'd just be talking to themselves.
Let's now go back to that silent duet at the office party, to the moment that half-concealed grin leaks out of the side of your mouth before you can replace it with the look of sympathy. What's happening here? Most of the time you walk around with the assumption that you're the boss of you, that you have a unified self that controls your actions in a relatively straightforward way. But your telltale grin challenges most of our assumptions about this selfhood, because at that moment at the office party, you are trying your hardest to do the exact opposite of smiling; you're trying to look concerned and upset, full of compassion. But your mouth wants to smile. Whose mouth is it anyway?
The answer is that your mouth has several masters, and some of them are brain subsystems that regulate emotional states. Smiling at times of genuine pleasure is not a learned behavior; every recorded culture on the planet represents the internal mental state of happiness with a smile. Deaf-blind children start smiling on the exact same developmental timetable as children who can see and hear. Cultures certainly differ in their assumptions about what makes people happy, as the popularity of frog's legs and Steven Seagal movies in France will attest. And cultures also differ in their production of fake smiles, as in the beaming "bye-bye nows" of American flight attendants. But genuine happiness -- whatever the details of its origin -- expresses itself as a smile in all normal Homo sapiens.
Ironically, the forced smile of the flight attendant demonstrates just how innate the smiling reflex really is. A century and a half ago, the French neurologist Duchenne de Boulogne began studying the muscular underpinnings of people's facial expressions, using the then-state-of-the-art technologies of photography and electricity. Duchenne photographed his subjects in various emotional states, and tried to simulate their expressions artificially by activating specific muscles with a small jolt of electric current. (The images from Duchenne's experiments look like something from a Nine Inch Nails video.) In 1862, he published his findings in a volume titled Mechanism of the Human Physiognomy, which Darwin drew upon extensively ten years later in his best-selling The Expression of the Emotions in Man and Animals. But Duchenne's research soon fell into oblivion, only to be rediscovered more than a century later by the University of California at San Francisco psychologist Paul Ekman, now generally considered to be the world's leading expert on facial expressions.
The most widely cited discovery in Duchenne's work involved smiling. Using his crude tools, Duchenne established that genuine smiles and fake smiles utilize completely distinct ensembles of facial muscles -- most visible in the eyes, which crinkle in real smiles but remain unchanged in the faux ones. (As a tribute to his long-neglected forebear, Ekman began referring to the genuine article as a "Duchenne smile.") The muscle that controls eye-smiling is called the orbicularis oculi, and its activation has proved to be a reliable indicator of internal happiness or mirth. Modern brain scans show that pleasure centers in the brain light up in sync with the orbicularis oculi, but show no activity during fake smiles created with the mouth alone. The next time you want to know if your beaming waiter truly wants you to have a nice day, check out the outer edges of his eyebrows; if they don't dip slightly when he smiles, he's faking it.
Duchenne's insights into the muscular underpinnings of the smile make it easier to detect counterfeit good cheer, but they also teach us a more important lesson about selfhood and the emotions. Duchenne smiles are not willed deliberately into existence. You can consciously paint a fake smile on your face, but a real one erupts through a process that your conscious mind controls only in part. This is demonstrated most vividly in studies of stroke victims who suffer from a disturbing condition known as central facial paralysis, which prevents them from voluntarily moving either the left or right side of their face, depending on the location of the neurological damage. When these individuals are asked to smile or laugh on command, they produce lopsided grins: one side of the mouth curls up, the other remains frozen. But when they're told a joke or they're tickled, full smiles animate their face.
This is why the smile has more than one master: sometimes it is triggered by the emotional systems, other times by areas that control voluntary facial movement. (Of course, depending on the brain region, the smile will differ slightly in its expression.) So that inadvertent grin that slips out at the news of your rival's misfortune? It's the result of two brain systems vying for control of the same face. The part of the brain that controls voluntary muscle movement -- called the motor cortex -- sends a command instructing the face to appear sympathetic. But your emotional system is requesting a toothy grin. Your face can't satisfy both requests at the same time, so what results is a little bit of both: a grin that swiftly morphs into an expression of worried sincerity.
And herein lies lesson one of that office party encounter: your brain is not a general-purpose computer with one unified central processor. It is an assemblage of competing subsystems -- sometimes called "modules" -- specialized for particular tasks. Most of the time, we only notice these modules when their goals are out of sync. When they work together, they coalesce into a unified sense of self. The idea of multiple selfhood is not, strictly speaking, a discovery of the brain sciences. There's a long tradition of artists and philosophers documenting how fragmented we are below the surface, most notably in the modernist writers who pried open the psyche a century ago. Here's Virginia Woolf describing the struggle between the two models of self in Mrs. Dalloway:
How many million times she had seen her face, and always with the same imperceptible contraction! She pursed her lips when she looked in the glass. It was to give her face point. That was her self -- pointed; dartlike; definite. That was her self when some effort, some call on her to be her self, drew the parts together, she alone knew how different, how incompatible and composed so for the world only into one centre, one diamond, one woman who sat in her drawing-room and made a meeting-point...
Freud famously envisioned the psyche as a battleground among three competing forces: id, superego, and ego. The modern understanding of the brain shatters that earlier vision into dozens of component parts, some specializing in core survival tasks, such as heartbeat regulation and the fight-or-flight instinct, others focused on more prosaic skills, such as face recognition. Your personality is, in a real sense, the aggregate of the differing strengths of each of these modules -- as they have been shaped both by nature and nurture, by your genes and by your lived experience. In other words: you are the sum of your modules.
If the modular nature of the mind is often hidden to us, how can we see behind the curtain of the unified self and catch a glimpse of those interacting components? Several avenues are available to us. There are the studies of pathological cases popularized by books such as Oliver Sacks's The Man Who Mistook His Wife for a Hat, in which we detect the existence of modules through patients who have suffered targeted brain damage that takes out one or two modules but leaves the rest of the brain functioning normally. Or we can experience the modularity of the brain more directly by taking drugs that throw a monkey wrench into its machinery, causing individual modules to take on a new autonomy (which is why people on drugs often feel as though they hear voices). Or you can gaze inside your brain directly, using today's brain-imaging technologies.
Another more entertaining way into the modular mind is through the back door of illusions and various tricks of the mind. Optical illusions help reveal modules by triggering conflicts between different submodules in the visual system: modules for distinguishing between background and foreground, recognizing borders between objects, or locating objects in 3-D space. Remember the childhood game of spinning in place and then stopping quickly to feel the spinning continue? In this game, as you turn, objects in the room pass by you in a counterclockwise direction. But when you stop, you feel a sense of vertigo, and the room seems to be spinning around you in the reverse direction, as though you were standing at the motionless center of a merry-go-round. Why does the room seem to spin after you've stopped moving? And why does it appear to spin in the other direction?
This staple of early childhood play reveals the brain's modular approach to detecting motion. The part of the brain that evaluates whether you're moving relies on two primary sources: information from the visual field and information from the fluid sloshing around in your inner ear. Most of the time, those two lieutenants concur in their assessments to their commander, but when you stop suddenly after spinning clockwise, the liquid in your inner ear continues to move around for a few seconds more, while your vision responds instantly to the cessation of movement. So the motion-detecting centers of the brain are taking in conflicting data: the inner ear reports you're still moving, while the eyes report that you're at rest. The only way the brain can resolve this conflict is to assume that both reports are correct: you are still spinning, but it doesn't seem that way because the world around you is spinning right along with you. The illusion of the world rotating is actually a brilliant on-the-fly interpretation that your brain makes to reconcile the conflicting data it receives. It's not the correct interpretation, of course, but it's a revealing one.
Module disagreement is not a bad way of describing the ultimate cause behind that inadvertent grin at the office party: part of your brain wants to smile, and part of it wants to show sympathy. The result is a kind of "slip of the face": the mouth and eyes betraying an emotion that the social self wants suppressed. The lesson here is that the control structures between modules often matter as much as the strength or weakness of each module itself. The brain is a network, and the way that each node in that network communicates with other nodes is an essential part of its higher-level properties. Even among the macrostructures of the brain, the connections made are as important as the individual structures themselves. One notable difference between male and female neuroanatomy is the communication channel that connects the left and right hemispheres, called the corpus callosum, which is much larger in women than in men. We now believe that this increased connectivity enables women to do a better job than men at reconciling the sometimes conflicting interpretations offered up by each hemisphere.
Some people are good at suppressing grins, while others are lousy at it. Some modules are better at overriding other modules; some are more submissive. Understood in the broadest sense, the process of growing up can be seen as the slow subjugation of emotional centers -- such as the amygdala, which plays an essential role in fear responses -- by the more recently evolved regions of the brain located in the prefrontal cortex that control voluntary actions, long-term planning, and other higher functions. Infants are born with relatively well-developed amygdalas, which is why they're so good at being frightened right out of the gate. But their prefrontal regions take most of childhood to mature.
So not only is the mind a network of distinct modules, but those modules sometimes compete with each other. The brain's modular system cannot be imagined as a neurological report card, with a B+ for face recognition and a failing grade for mindreading. This is because the modules interact with each other, sometimes inhibiting, sometimes amplifying, sometimes translating or interpreting in novel ways. The brain is much more like an ecosystem than a list of stable personality traits, with modules simultaneously competing and relying on each other. Hence lesson two: It's a jungle in there.
So if we now understand something about that renegade grin, what can we say about its detection? The silent duet of mindreading begins in your colleague's brain when he first thinks to himself, midsentence, that you might be quietly celebrating his bad news. It's fitting that the telltale sign is the crinkling of your eyes, as your orbicularis oculi betrays your inner state. Mindreading is in many ways a kind of eye-reading -- we learn a great deal about the content of other people's thoughts by watching their eyes. Eyes are essential to building what brain scientists call a "theory of other minds."
The connection between mindreading and eye-reading begins early in child development -- so early, in fact, that it is unlikely to be the product of learned behavior. In their first year, most children will become adept at something called "gaze monitoring": they see you looking off toward the corner of the room; they turn and look in that direction; then they check back to make sure the two of you are looking at the same thing. Because we do it so well, gaze monitoring doesn't seem like much of an accomplishment, but it requires an elaborate understanding of the human visual apparatus, too elaborate to be purely the product of cultural learning.
Think about what's implied in gaze monitoring. First, you have to understand that people have their own perceptions of the world, distinct from yours. Second, some of those perceptions flow into their mind through their eyes. Third, you can determine the objects people perceive by drawing a straight line from the black circles in the middle of their eyes outward. Fourth, when those black circles shift, that means the gaze has shifted to another object. Consequently, if you want to know what another person is perceiving, you follow the movement of those black circles, and then shift your own gaze toward the object they're focused on.
If the gaze-monitoring skill were purely a learned behavior, it would take a month of school and a four-year-old's brain to master it. Infants can barely be taught how to use a spoon, much less how to track retinal movements and deduce inner mental states. They can't learn gaze monitoring, but they do it nonetheless -- because their brains contain a cheat sheet of sorts that prepares them for the underlying principles of gaze monitoring, a kind of psychological physics: people have minds; people's minds perceive different things; part of that perception happens through the eyes; if you want to know what someone's thinking, look at his eyes. These biological cues start early in life: one study found that two-month-old infants were more likely to stare at the eyes than at any other part of the face.
As we grow older, we scrutinize people's eyes for subtler cues: not just what they're looking at, but what they're thinking and feeling. Because our emotional systems are wired directly to our facial muscles, à la the Duchenne smile, we often get accurate portraits of other people's moods just by scanning their eyes or the corners of their mouth. As our office party exchange shows, sometimes that portrait gives a more accurate testimony than people's verbal descriptions of their moods. Who are you going to believe -- me or my lying eyes?
Gaze monitoring and emotional expression recognition are two of the fundamental mindreading systems, but we also use other tricks. We monitor speech intonation carefully for emotional nuance. We put ourselves into other people's mental shoes -- what the cognitive scientists call the "simulation theory" of mindreading, according to which your brain is effectively running a mini-simulation of someone else's to anticipate how the other person might feel.
Your brain runs all these routines any time you interact with other people. It takes careful training, or massive distraction, to stop your mind from inferring other people's mental states as you talk to them. Mindreading is a background process that feeds into our foreground processes; we're aware of the insights it gives us but usually not aware of how we're actually getting that information, and how good we are at extracting it.
The sophistication of our mindreading skills is part of our heritage as social primates; our biology contains cheat sheets for building theories about other minds because our brains evolved -- and continue to evolve -- in complex social environments where being able to outfox or cooperate with your fellow humans was essential to survival. So just as some animals evolved nervous systems that were adapted for sudden movement or sonar, our brains grew increasingly sophisticated at modeling the behavior of other brains. An entire host of neurological systems revolves around the expectation that you will spend much of your life managing social relationships of one sort or another. Your brain is wired to expect an environment with oxygen, gravity, and light. It's also wired to expect an environment populated by other brains. Hence lesson three: Deep down, we're all extroverts.
We're all extroverts, except those of us whose brains have developed without the normal mindreading systems. There are dozens of neurological disorders that compromise social skills, but few are more common than the family of conditions that we generally call "autism."
Autistic people possess many skills lacking in the normal population: they often have nearly photographic memories and astonishing mathematical abilities. Their ease with mechanical systems, including computers, can be extraordinary. But autism impairs social skills dramatically. While autistic people can usually learn and communicate using language, there is something missing in their exchanges with other people, some strange distance in their social demeanor. They seem emotionally remote, disconnected.
Many experts now believe that this distance derives from a distinct neurological condition: autistics are mindreading-impaired. The social distance associated with autism is a vivid example of the brain's modular nature: autistics generally have above-average IQs, and their general logic skills are impeccable. But they lack social intelligence, particularly the ability to make on-the-fly assessments of other people's inner thoughts. Autistic people do have to go to school to read facial expressions -- learning to intuit another person's mood is at least as challenging for them as learning to read is for the rest of us. When you're engaged in conversation, you don't think to yourself, "Aha! His right eyebrow just crinkled up. He must be happy." You just sense that there's a happy expression on his face. But autistics have to perform precisely that kind of deliberate analysis, memorizing which expressions are associated with which emotions and then studying people's faces actively as they talk, looking for signs. One of the early predictors of autism in toddlers is an inability to perform gaze monitoring. It's as though autistics are born without the social physics that the rest of us possess innately, as though they were mindblind.
Simon Baron-Cohen believes that the symptoms of autism exist on a continuum: while some people clearly suffer from extreme cases, millions suffer only from minor cases of mindblindness. (Because autism is ten times more likely to develop in boys than girls, Baron-Cohen has argued that the disorder should be considered simply an extreme version of the male brain's tendencies, rather than a disconnected aberration.) The history of mathematics and physics is populated by borderline autistics: people with great number skills but limited social grace. We all know bright people who perform poorly in social situations, seem disengaged in conversation, or fail to pick up on our emotional cues. Even if you're a particularly astute mindreader, you probably have your own "autistic moments" in passing, when you're conducting a conversation on autopilot, lost in your own internal monologue. If you spend enough time with the literature, you can't help dividing up your friends and colleagues into the talented mindreaders and the mind-dyslexics. You start evaluating your own prowess as you engage with other people. Mindreading becomes a part of your basic vocabulary for evaluating yourself and others: some people have a sharp sense of humor, some are quick learners, some are good mindreaders.
If autism exists on a continuum, then it's possible to locate yourself on that continuum. You can take a simple test called the Autism Spectrum Quotient that Baron-Cohen and his colleagues created -- answer fifty questions about yourself on a Web page, and a simple program spits out a number between 1 and 32. The higher the number, the closer you are to autism. (The median result is 16.4.) It's not exactly hard science because it relies on self-evaluation and the questions themselves are relatively broad. But if you trust your ability to assess the general areas of your personality, the test provides a rough sketch of your autism quotient (otherwise known as "AQ").
The questions are phrased as statements with which you can "definitely agree," "slightly agree," "slightly disagree," or "definitely disagree."
- I frequently find that I don't know how to keep a conversation going.
- I find it easy to "read between the lines" when someone is talking to me.
- I usually concentrate more on the whole picture, rather than on the small details.
- I am not very good at remembering phone numbers.
- I don't usually notice small changes in a situation or a person's appearance.
If you've read something about autism, or the theory of other minds, these questions will seem predictable enough. When I took the test -- if you must know, I scored a 15, just slightly less autistic than average -- I flipped through the questions with a kind of jaded awareness: here's the facial expression question, here's the number memory question. It was only when I went back and reviewed the exam that I realized my familiarity with the topic had blinded me to something fascinating about the test itself.
Think about those last two statements: "I am not very good at remembering phone numbers" and "I don't usually notice small changes in a situation or a person's appearance." Now, if you come to the test knowing something about autism, you'll instantly deposit those two statements on opposite ends of the AQ spectrum. An autistic person, you'll think, will be good at remembering phone numbers and bad at noticing small changes in someone's appearance. But if you don't know anything about autism, if you're just coming to the test with a commonsense understanding of human psychology, then those two attributes will hardly seem like opposites. You'd probably think someone with a good memory for phone numbers would be more likely to notice small changes in appearance: she'd be detail-oriented, good at keeping track of small things. Certainly these don't seem like traits that would naturally be opposed to one another. But if you know something about the brain science behind autism, the fact that the two traits are inversely related makes perfect sense, because number skills and mindreading skills aren't simply the result of general intelligence; they're specialized modules, modules that for some as yet unknown reason have been yoked together in the brain's wiring.
This is one of the key insights that neuroscience brings to our sense of self: strengths or weaknesses in one area are often predictive of strengths or weaknesses in seemingly unrelated areas. It makes intuitive sense to us that people who are better at processing language might be worse at processing visual data, or that blind people might have sharper hearing than sighted people. But you're less likely to get a nod of agreement when you propose that people who are good at computing the digits of pi in their heads are usually bad at tracking eye movements. Yet that is the brain's reality. The more you understand the mind in the light of modern brain science, the more you recognize that isolated traits you possess aren't necessarily isolated -- the brain is full of zero-sum games, where one talent prospers at the expense of another. Sometimes those balancing acts involve related skills; sometimes the connection is more obscure. Thus our final principle: Your brain contains some strange bedfellows.
Is mindreading one of our long-decay ideas, an idea that transforms your own sense of self? I believe it is, but to grasp that importance you can't think of mindreading simply as another word for "empathy." We all know people who are more empathic than others, who are more sensitive to others' feelings. Empathy is a powerful human trait, and it would be wrong to underestimate its centrality in our social interactions. By the same token, empathy is nothing new. What is new, I think, is the notion of the second-by-second, instinctive dance of mindreading: the mental sparring at the office party. Empathy is something you're consciously aware of feeling; you think to yourself, "It breaks my heart to see her so sad." Mindreading is faster than that, more invisible. The data it relies on flies by at lightning speed: a momentary tonal shift, a pause that suggests hesitation, a brief, inquisitive twist of the head. You may consciously evaluate the data once it has been interpreted -- "Why did she seem startled by that news?" -- but the act of interpretation itself is closer to a reflex than to a deliberate act of contemplation or analysis. One way of describing mindreading is via an idiom that we often use for performers: having a feel for your audience. Having a feel for your audience is different from being sensitive to the feelings of your audience, which is what empathy is all about.
For weeks after I first started reading about the neuroscience of mindreading, I found myself in conversations with friends or new acquaintances with a second-level, meta-interior monologue running through my head. Instead of watching their facial expressions for subtle clues about their internal state, I was watching their reactions to my expressions and speculating on their mindreading skills.
At a dinner party, I'd be listening to a friend follow a dozen irrelevant detours in telling what should have been a thirty-second story and suddenly recognize something I'd felt intuitively about him for years but never really put into words: he's mind-dyslexic. With other friends (many of them women) I finally understood part of why I had enjoyed our conversations so much over the years -- our internal duets were as rich as the external ones. I put myself under the same microscope, noticing that in certain social situations I would be more "locked in" to my conversational partner, whereas in others my mindreading antenna appeared to get lousy reception. This resonance is the sign of a long-decay idea -- it's like a tune that gets stuck in your head, and you can't help humming it wherever you go.
The more I thought about mindreading, the more I wanted to quantify my skills at it. The autism quotient test had whetted my appetite, but it was too subjective, and the skills it assessed were as much about that broader category of empathy as they were about the local reflex of analyzing facial expressions. I wanted my mindreading skills analyzed the way you'd have your vision tested, and I figured if there was anyone who could help me in this quest, it was Simon Baron-Cohen. That's how I eventually found myself scrolling through those computer images of eyes, scanning for drooping eyelids and furrowed brows.
I'd read a little about the eye-reading test before I actually sat down to take it and had imagined it to be much simpler than it turned out to be. Emotion scholars tend to divide up the spectrum of human emotion into two camps: the "primary" emotions of happiness, sadness, fear, anger, surprise, and disgust; and the "secondary" social emotions of embarrassment, jealousy, guilt, and pride. I figured the test would involve mapping one or the other of those ten sentiments to a pair of eyes, which seemed easy enough.
But when I actually started to read through the instructions, I was shocked to find that the glossary of emotional states went on for several pages -- ninety-three emotions in all, everything from "aghast" to "tentative." I'd anticipated choosing between "happy" and "sad," but instead the test wanted me to distinguish between "flirtatious," "playful," and "friendly," or "upset," "worried," and "unfriendly." As I read through the list, one disturbing thought came abruptly into my head: I am going to flunk this test. There was no way I could detect emotions this subtle in static images of two eyes. Perhaps my autism quotient score wasn't accurate, after all. If nonautistic people could read eye expressions at this level of sophistication, then maybe I was closer to Rain Man than I thought.
The test began with a grainy black-and-white image of an elderly man's eyes that looked like a close-up from a Jean Cocteau film. The left eye was wide open, the right more hooded. The emotion options were "hateful," "panicked," "arrogant," and "jealous." My first impulse was to choose "panicked," but as I studied that right eye, I began to have second thoughts. Was there something angry there? Or something wounded, as of a jealous husband who has just stumbled across his wife in the arms of another man? The more I scrutinized the image, the harder it got to discern a clear emotion. I decided to go with my initial hunch.
I turned to the next image, and a set of younger eyes of indeterminate gender stared back at me: perfectly symmetrical, with the slightest suggestion of a squint. I thought to myself, This is what they mean when someone has a "gleam" in their eyes. The first emotion option was "playful" and I immediately said, That's the one. But then I read on: "comforting," "irritated," and "bored" were the other options. Definitely not bored, but maybe what I saw as playfulness was really being comforting, being sympathetic. What was a gleam anyway? When I tried to locate the specific gleaming quality, the effect seemed to dissipate. As I searched for that original playfulness, I thought I detected a hint of irritation in the eyes. This is madness, I thought: I'm overanalyzing these images. Better to go with the gut, since this is supposed to measure gut responses anyway. I marked down "playful" and moved on.
As the test progressed, I got a little better at sticking with my original hunches, but with each image, the clarity of the initial emotion grew less intense the longer I analyzed it. All but a few had an emotion that struck me at first glance, and while second thoughts caused me to doubt most of my first decisions, I went with my initial instincts throughout the test. By the end, I felt as though I would probably come out with half of the answers correct, which seemed like a pretty good ratio given the subtlety of emotions being presented.
But as it turned out, I was way off in my self-assessment. Instead of missing 18 of 36 questions, I had missed only 5. On the first seventeen images, the source of so much second-guessing, I'd been 100 percent right. It's an interesting test when you think you're failing, and you end up getting an A (or at least a solid B+). Particularly if you base all your answers on your gut reactions, and ignore all your attempts to outthink the exam. When I tried to interpret the images consciously, surveying each lid and crease for the semiotics of affect, the data became meaningless: folds of tissue, signifying nothing. But when I just let myself look -- look without thinking -- the underlying emotions came through with startling clarity. I couldn't explain what made a gleam gleam, but I knew one when I saw it.
If there was a connection to Rain Man's autism, it was here, in that instinctive "gut feeling," in the mental computation so fast and so transparent that it doesn't feel like thinking. Afterward, I was reminded of the classic stories of autistic people emptying a box of matchsticks and somehow just "seeing" exactly how many are scattered across the floor. The number just pops into their head, as vivid and unavoidable as a face. They have a gut feeling for numbers, the way most of us have a gut feeling for "playful" and "panicked."
Only neither feeling comes from the gut. After I finished the test, I asked Baron-Cohen what had been going on in my brain as I analyzed the images. "We've done fMRI scans of people taking the 'reading the eyes' test, and what we've found is that the amygdala lights up in trying to figure out people's thoughts and feelings. In people with autism, they show highly reduced amygdala activity," he explained. In many ways, the amygdala is the "gut feeling" center of the brain, implicated in all sorts of emotional processing. Recently, it has been shown to play a central role in our understanding of fear (which we will return to in the next chapter); when people have a "sinking feeling in their gut," or feel "gripped" by fear, the reaction has most likely been triggered by the amygdala. People with amygdala damage caused by strokes or head injuries often report that they are unable to detect fearful expressions in other people's faces. But as Baron-Cohen's test suggests, fear is only part of the amygdala story. "My hunch is that the amygdala is actually used to detect a much more varied range of emotions," he told me.
Inspired by the subtle emotional discrimination he found among his test subjects, Baron-Cohen has set out on a more ambitious quest: "We decided to figure out just how many emotions there are." He began with a survey of emotional descriptors taken from a collection of thesauri, which produced a list thousands of words long. Baron-Cohen and his team, aided by a lexicographer, then winnowed out the synonyms, creating a smaller collection of "discrete emotional concepts."
"We came up with a number," he said with a laugh. "Four hundred and twelve."
Four hundred and twelve unique emotions. The fact that our vocabularies include adjectives for so many emotional states, coupled with how well nonautistics score on the eye-reading exam, drives the point home: we are biologically equipped with incredibly sensitive antennae for emotional variation. Baron-Cohen's latest mission is to build a tool that will help people whose antennae are broken. "What we've done is asked actors and actresses to create facial expressions for each of the four hundred and twelve emotions, and then included them all on a DVD. It's like an encyclopedia for emotion," he said.
"It was designed for people who score poorly on the autism tests, who want to learn emotional recognition in a slightly artificial way." Because autistics often possess higher-than-average skills at what Baron-Cohen calls "systematizing" -- learning the rules of a given system, breaking it down into its component parts -- one option for them is to improve their emotional recognition skills by systematizing the human face.
Baron-Cohen continued: "It's not the intuitive way of approaching people, but you could do it. You could try to figure out the rules that allow you to read another's emotional expression. It's like trying to learn a second language, sitting there with a grammar book and rules of syntax trying to figure it out in a different way than you would if you were a native speaker." The two approaches originate in different regions of the brain: the intuitive recognition centered in the amygdala and the systematizing ability residing in the neocortex, the seat of higher logic and language.
The clash between the amygdala and the neocortex explains my indecision while taking the eye-reading test. My gut reactions would flash up instantly from the amygdala, after which the neocortex would start analyzing the image in a more systematic way. But I haven't trained my neocortex to recognize emotions; I haven't spent time with Baron-Cohen's encyclopedia -- precisely because my amygdala does such a good job on its own. And so the more I analyzed a given image logically, the less clear the answer became. The next time you're advised to trust your gut when you're meeting someone new, ignore the advice. Your gut has nothing to do with it. But by all means trust your amygdala.
There's a crucial scene near the beginning of Henry James's The Golden Bowl in which the recently married Maggie Verver walks in to find her beloved father, the long-widowed billionaire Adam Verver, engaged in what appears to be flirtatious conversation with a young woman. In a glance, Maggie suddenly grasps that her own marriage has created a new possibility: that her father might remarry after years of living as a bachelor with his only daughter. The rest of the book plays out, in a sense, the aftershock of this moment of recognition: the father does eventually marry another woman, with more or less disastrous consequences. But the originating scene itself unfolds without words spoken between father and daughter; it is as exacting, and as lyrical, an account of mindreading as you are likely to find in literature:
[Maggie's appearance] determined for Adam Verver, in the oddest way in the world, a new and sharp perception. It was really remarkable: this perception expanded, on the spot, as a flower, one of the strangest, might, at a breath, have suddenly opened. The breath, for that matter, was more than anything else, the look in his daughter's eyes -- the look with which he saw her take in exactly what had occurred in her absence.
The visual communication flows in both directions; as Mr. Verver contemplates the look in his daughter's eyes, she in turn recognizes his recognition:
He became aware himself, for that matter, during the minute Maggie stood there before speaking; and with the sense, moreover, of what he saw her see, he had the sense of what she saw him... Her face couldn't keep it from him; she had seen, on top of everything, in her quick way, what they both were seeing.
James spends ten pages plumbing the depths of what he calls this "mute communication" -- slowing the tape down to analyze its every twitch and unspoken innuendo. The passage gives us a wonderful instance of the human mind's powers of perception, on two levels. First, there is the silent duet between father and daughter, each of whom reads volumes into a simple pair of expressions glimpsed across a room. And then we have the observatory power of James himself, recognizing the depth of the exchange and drawing it out long enough for us to dissect its subtlety.
I bring up this scene because I think what James does here runs parallel to what the brain sciences can do for our own self-awareness. They can help us see our interactions with a new clarity, to detect long-term patterns or split-second instincts that might otherwise go unnoticed, sometimes because they operate below conscious awareness and sometimes because we're so familiar with them that they've become invisible to us. There are differences in approach between the discerning eyes of scientists and novelists: James doesn't offer a working theory to explain how Adam Verver manages to gather so much information out of a passing glance; and brain scientists don't usually weave their insights into gripping narratives. But both approaches can illuminate the life of the mind. To use a Jamesian term, they give us powers of discrimination.
In recent years, any time the brain sciences and the arts have intersected, the debate has generally been framed in terms of evolutionary psychology: does the Darwinian approach have something useful to teach us about the cultural achievements of art? The clashes that usually characterize these debates occur because on some level evolutionary explanations operate against the grain of art. Purely Darwinian models of the mind are about human universals, about what unites us as a species. Great novels or paintings or films are about the conflict between human universals and the local events of our personal and public histories. The narrative form that evolutionary psychology most closely resembles is myth: the enduring struggles and drives that define the human condition. The creative arts are about seeing what happens when individual lives intersect with these human drives, and often with the broader currents of history. This is why, more often than not, you get fireworks when the Darwinians and the art critics appear on the same panel. But when you widen the lens to see beyond evolutionary psychology, the conflict disappears. Brain science is as much about those chance events and individual personalities as it is about enduring truths and human universals. The last few decades of research have revealed, again and again, the way specific memories transform us as we grow and develop, the way life experience wires our brains as meticulously as our genes do. When we participate in mindreading's silent duet, we're drawing upon cognitive tools that are a part of our evolved human nature, but every mindreading exchange is also colored by the memories and associations unique to an individual life. We're wired to see smiles as a sign of internal happiness, but a smile can also remind us of a parent's grin from our childhood or a movie star's smile beaming down from the silver screen or a joke we told over breakfast this morning. 
Brain science has much to teach us about the way those individual memories are formed, and how they come to weigh on our subsequent behavior. The impact of past events on the present is so crucial to the modern understanding of the brain that this book doesn't include a single chapter on memory. This is because in many respects all the chapters are about memory.
Virginia Woolf described the compensation for growing old as gaining "the power of taking hold of experience, of turning it round, slowly, in the light." Memories transform our perception of the present, but the process is even more nuanced and layered than that: reactivating memories in a new context changes the trace of memory itself. For a long time, neuroscientists assumed that memories were like volumes stored in a library; when your brain remembered something, it was simply searching through the stacks and then reading aloud from whatever passage it discovered. But some scientists now believe that memories effectively get rewritten every time they're activated, thanks to a process called reconsolidation. (Freud sensed this process as well, though he gave it a different name: Nachträglichkeit, or "retroactivity.") To create a synaptic connection between two neurons -- the associative link at the heart of all neuronal learning -- you need protein synthesis. Studies on rats suggest that if you block protein synthesis during the execution of learned behavior -- the brain's memory of a reward cycle, for instance -- the learned behavior disappears. Instead of just recalling a memory that had been forged days or months before, the brain forges the memory all over again, in a new associative context. In a sense, when we remember something, we create a new memory, one shaped by the changes that have happened to our brain since the memory last occurred to us. So the science is telling us two things: our brains are designed to capture the idiosyncrasies of our lives, and those lives -- our memories of them -- are being rewritten with each passing day.
You need only read a few pages of Proust to know that artists have been exploring these properties for centuries, if not millennia, just as James grasped the transformative power of mindreading. Indeed, the world of culture -- particularly the poets and novelists and philosophers -- has historically led the way in widening our understanding of the brain's faculties, much as that flower opened under Adam Verver's gaze. This they continue to do. The only difference now is that they have some competition.
Copyright © 2004 by Steven Johnson
Meet the Author
Steven Johnson is the author of Emergence: The Connected Lives of Ants, Brains, Cities, and Software, which was named as a finalist for the 2002 Helen Bernstein Award for Excellence in Journalism and was a New York Times Notable Book of 2001, as well as a "best book of the year" in Discover, Esquire, The Washington Post, and The Village Voice. He writes the monthly "Emerging Technology" column for Discover and is a contributing editor at Wired. His work has also appeared in The New York Times, The Wall Street Journal, The Nation, The New Yorker, Harper's, and The Guardian. He is also the author of Interface Culture: How New Technology Transforms the Way We Create and Communicate. Johnson holds a B.A. in semiotics from Brown University and an M.A. in English from Columbia. He lives in New York City with his wife and two sons.