The Science of Consequences
HOW THEY AFFECT GENES, CHANGE THE BRAIN, AND IMPACT OUR WORLD
By SUSAN M. SCHNEIDER
Prometheus Books
Copyright © 2012 Susan M. Schneider
All rights reserved.
ISBN: 978-1-61614-662-7
Chapter One
CONSEQUENCES EVERYWHERE
"An elderly male chimpanzee [was] observed in the field by Dr. A. Kortlandt of Amsterdam's Zoological Laboratorium. It was the chimpanzee's custom to go to a certain place where he could see the sun go down. He went every evening and he would stay until the sun had set and the color was gone from the sky. Then he would turn away and find his place to sleep." —Sally Carrighar, Home to the Wilderness, 1973
Consequences provide the motivation that sends butterflies to flowers and people to the moon. The pursuit of happiness means the pursuit of consequences, large and small, sunsets included.
And consequences are everywhere. Some are immediate; others loom on the horizon to be anticipated or evaded. They're good, they're awful, they're everything in between. They work for tigers and for turtles—and for us. How ironic, then, that consequences and the science that focuses on them are so often overlooked.
Every day we work toward goals, goods, and incentives—consequences—for this minute, tomorrow, next year. Many rewards are obvious and immediate; others are subtle and easily missed. Some take a lifetime to achieve. Most are mundane, like paychecks and movies and smiles on friends' faces. Few remain unchanged in value, instead routinely transforming over time. Their variety seems infinite, far beyond biological drives like food, shelter, and sex. This chapter introduces that variety.
ORIGINS AND DEFINITIONS
Like most things, consequences started simple. We can't know what animal first learned from consequences, but flatworms, like the tiny planaria found in ponds, are reasonable candidates. Certainly both invertebrates and vertebrates are capable of this type of learning, and ancient planaria are considered common ancestors of both lines. On the planet for over 500 million years, they're the most primitive biological group to have "higher" neural features like brains (miniature ones, to be sure). Accordingly, their abilities have been researched extensively.
Despite looking like baby matchsticks that have taken to swimming, planaria can learn to work for consequences. In one study, some planaria had to move past an "electric eye" to shut off an unpleasant, intense light—which was a powerful reward. Others in a matched group were given the same intervals of light on and light off, but the intervals were independent of what these planaria did. Only the planaria with the light-off reward dependent on their behavior dramatically increased their interceptions of the electric eye—thus showing that this was truly learning from consequences and not just an effect of the light and dark alternation itself. These flatworms can even learn to work only when a signal is present: no signal, no reward.
This research illustrates what reinforcers are: By definition, reinforcers both depend on behaviors and sustain them. If a behavior gets going and keeps going because of a consequence, that consequence is a reinforcer. If a behavior declines because of a consequence, that consequence is a negative (a punisher).
Things that seem like rewards sometimes aren't: what matters is what actually happens, not the intention. Your Uncle Pete used to think that chucking you under the chin was rewarding. Wrong. Similarly, classroom reprimands sometimes function as reinforcers because of the attention that goes with them (from classmates as well as from the teacher). If a "reward" has no effect on a behavior, then it's not a reward: Again, it's what actually happens that matters. I will use reward and reinforcer interchangeably in this book.
Because planaria are very simple animals, the consequences that are effective for them are limited. More complex invertebrates like the octopus show more sophisticated learning, influenced by a greater variety of reinforcers. However, the range of effective consequences has mainly been explored in vertebrates, and it's large indeed.
WALTZING PIGEONS AND ROLLER-COASTER FISH: CONSEQUENCES ACROSS SPECIES
We love our pets when they behave like us—witness the dogs that enjoy watching TV and the owners who leave Spot's favorite program on. It may be harder to imagine common city pigeons doing something along these lines, but as a child, author and animal lover Gerald Durrell had a hand-raised pigeon that loved music and would snuggle close to the speaker of an old-fashioned record player. What's more, the bird performed distinctive dances to marches and waltzes.
Later, scientists found that pigeons not only distinguish between different types of music, they also categorize unfamiliar tunes in the same way that people do, down to different eras of classical music. And a number of species will work for music as a reward. The Java sparrow has been shown to prefer Bach over the twentieth-century dissonant composer Schoenberg, just like most of us. (Even rats feel the same way.)
Ducks, such as common eiders, have been observed bobbing down rapids and then flying back up to repeat the fun. Otters slide down mudbanks, and starlings down snowy rooftops. Naturalist Edwin Way Teale once observed six chimney swifts soaring in a natural wind upflow, diving back down, and repeating the ride again and again. They were not feeding, they were "sporting in the wind." What a different perspective on wild animals these episodes give us.
Even cold-blooded creatures get in on the act. I once laughed out loud at the sight of two puffer-type fish in an aquarium at an airport, of all places. They were taking turns deliberately swimming toward a strong circulation current that whirled them about and pushed them down, rather like a roller coaster. Other slimmer fish in the tank didn't appear to enjoy this and avoided the area. Perhaps the boxlike shape of the puffers changed their hydrodynamics. Or perhaps they were just well-fed youngsters with time on their hands.
What do these unusual animal reinforcers have in common with human rewards? A lot. The human love for roller coasters is based on reinforcers that are probably similar to those for the birds and the fish. Likewise, when it comes to musical harmony versus dissonance, the basic principles of hearing are similar in most vertebrates, even if the audible frequencies vary. Lacking ears, planaria are missing out.
GETTING STIMULATED: SENSORY CONSEQUENCES
But all creatures have sensory systems of some sort, even planaria, and they bring lots of rewarding possibilities.
As advertisers are well aware, in our constant-stimulation industrialized world, sheer sensory input is one of the most widespread reinforcers. Marketing experts outdo one another to hold our attention with rewarding visuals, sounds, and actions. Back in the nineteenth century, the first kaleidoscopes set sales records because of their novel combination of intrinsically reinforcing colors and shapes, which were perpetually complex, variable, and unpredictable—reinforcing on many levels (not unlike a computer screensaver). Even three-month-old babies will turn their heads so they can see complex patterns but not simple ones.
Humans aren't the only ones to appreciate these features: in laboratory settings, mice and baby chicks will work to view more intricate patterns. Chicks that had hatched just the day before, for example, already found looking at a design of stripes and semicircles more rewarding than eyeing plain gray.
The level of optimal stimulation varies. High levels are sometimes rewarding and emotionally energizing, other times just stressful. After a period of high stimulation, even thrill seekers may prefer simple, quiet surroundings. "Boredom" indicates a lack of reinforcers, not a lack of stimulation.
Any small reinforcers, such as the view outside a window, can be fountains in a desert of tedium. But even when we're doing something we enjoy—watching the Olympics or planting petunias—change is eventually welcome. Aristotle noted that "we cannot be moulded into only one habit, because each desire can operate only for short periods and must give way in turn to others."
THE SPICE OF LIFE: VARIETY AS A CONSEQUENCE
Variety is usually a reinforcer, as formal research confirms, and not just for humans. Pets and zoo animals get bored too. Monkeys will work for the opportunity simply to view other monkeys, or a toy train, or even the normal room activity of people, and these consequences remain effective for long periods. So monkeys like watching us as much as we like watching them—a lot. Animal behavior is so appealing that zoos and public aquariums enjoy big crowds, and millions of people have pets.
Variability is an important part of what is rewarding, in animal behavior and in plenty of other places. Sports are inherently variable; so are movie plots (well, good ones). Yes, we reread favorite books, but seldom immediately after we've finished them—and few would choose to watch A Hard Day's Night all day and all night, even though it's a classic. Some reinforcers can bear repetition, as when a child listens to the same song for hours (driving everyone else nuts!). But children eventually learn to ration that song, choosing variety instead.
Teachers do well to bear this in mind. A lack of variability in voice inflection can be the single best predictor of poor teaching evaluations. Relatedly, I once had an excellent student who got A's on all his quizzes but tired of seeing comments in the mode of "Great" and "Nice job." He made up a list of alternatives like "Publish it" and "Hey-ho!"
The desire for variety can be subtle. We may not realize why a particular writer strikes us as dull despite interesting content, but monotonous sentence length and rhythm might explain it. As famed writing instructor Gary Provost noted, "The ear must have variety or the mind will go out to lunch." In music, too, most successful composers across genres find ways to vary their melodies, harmonies, and rhythms to maintain interest. If that's not enough, music players offer a "shuffle play" feature that presents the tracks in random order, different each time.
The principle applies even to ordinary household tasks. From bestselling author and cook James Peterson: "I rarely prepare dishes according to an exact recipe because I never like to cook the same thing twice—I need to invent as I go along, or I get bored." After the same chow three days in a row, even rats prefer new foods, new flavors.
THE CREATIVE CONSEQUENCE
What could be more intriguing than the discovery that variability is not only reinforcing but is itself a reinforceable characteristic of our behavior? Creativity is not a zero-sum game, such that we each have only a limited amount. Instead, it's a "nurturable" that blossoms with encouragement: if you feed it, it will grow. And it's not limited to people.
Back in 1969, then dolphin trainer Karen Pryor and her colleagues knew better than most how inventive and adaptable dolphins are. They decided to see just how far dolphin creativity would go by rewarding only behaviors that they had never seen two particular dolphins do before. The dolphins met the challenge, coming up with quirky movements and stunts that they never would have had occasion for in the wild. If dolphins could do this, how about kids? During the same period, scientists rewarded a few toddlers for building-block constructions that differed from prior designs—and the youngsters' creativity took off. Two of the three children came up with four times as many new designs when they were praised for doing so, compared to a baseline period with no particular consequences.
Experimental psychologist Allen Neuringer's decades-long research program established the effect conclusively. His ordinary pigeons pecked two projecting buttons called keys. A reinforceable unit was eight key pecks that could be any combination of lefts and rights. To earn their grain, the pigeons had to produce eight-peck sequences that differed from their previous fifty sequences, and they managed it consistently. And they weren't accomplishing this feat by memorizing. Instead, they behaved like random number generators, which was an efficient solution.
Can people do the same? First we have to overcome misconceptions of what "random" means mathematically. Neuringer rewarded participants for producing what they thought were random sequences of digits, then ranked the sequences on a number of statistical criteria and found that his volunteers did rather poorly. He discovered, for example, that we tend to alternate digits more often than is the case in a true random series. But with practice and feedback, people, like pigeons, were reliably able to generate random-appearing sequences, a skill that used to be considered extremely difficult.
We may not often need to act randomly—quite the reverse. But if we're regularly reinforced—or punished—for variable behaviors like exploring or showing curiosity, guess what can happen? Just as with the kids' building-block designs, our creativity can soar—or plummet. Many Fortune 500 companies are quite aware of this, Google's innovation policies being one well-known example.
TAKING ADVANTAGE OF VARIETY
Some consequences for exploring, both positive and negative, are present naturally: variety, for example. We've seen how even little changes of scenery can be refreshing. Riding in a moving vehicle is reinforcing for those with good vision. Dogs get a kick out of the variety of scents to be sniffed on a short walk. On the other hand, familiarity is reinforcing under other circumstances, and exploring can mean leaving what is not only familiar but also safe. You don't find wild mice exploring in broad daylight but rather at night when they're at less risk; if their existing foraging locations fail them, they must seek new ones. They probably discovered their existing sources through exploring, so there's a history of reinforcement that helps.
The principle applies more broadly, of course. When do we challenge ourselves with the new instead of sticking to the old? Most of us like some sort of balance between the two, in the same way as optimal stimulation. Still, if we've had good luck (that is, been reinforced) seeing movies by particular directors, or reading books by particular authors, we're likely to stick with them. Unknowns, with their novelty and more questionable reward values, have a harder time vying for our attention.
Reinforcers like movies and books automatically offer variety. Even better, a "generalized reinforcer," like money, can be exchanged for a great variety of other reinforcers. Point systems take advantage of the power of these consequence choices. At one Michigan school, reward options included fun time with the principal for 75 points and free time in the computer lab for 100 points. Researchers who study people often have to provide varied rewards to keep their participants motivated. A common solution is to offer lottery tickets for a chance to win prizes, such as gift certificates or pizzas. What about animals? Orcas (killer whales) at one SeaWorld never knew which reinforcers would come next: stroking and scratching, social attention, toys, and so on. The result was that "the shows can be run almost entirely without the standard fish reinforcers; the animals get their food at the end of the day."
Animals with variety in their lives are healthier and happier, just like people. Environmental stimulants like toys are best varied, and those that produce unpredictable movements tend to be especially valuable and long-lasting. One big success at one zoo was a sturdy swinging bag, hardly naturalistic, but rubbed and butted for hours by a rhino (see chapter 13 for more examples).
THE POSITIVE SIDE OF PROBLEMS
Monkeys can so love solving mechanical and other "educational" puzzles that they will sometimes do so without any additional reward. The activity itself is intrinsically reinforcing. Karen Pryor reported on a challenge for a rough-toothed porpoise: it had to learn to pick, among several choices, the one that matched a sample. In the session where the porpoise first made notable progress, it continued working even when it was no longer eating the fish it had earned—as if it enjoyed something about the learning itself, just as we do. (Of course, social approval from the trainer certainly helped.) Along these lines, here's an anecdote about biopsychologist Donald Hebb: A chimpanzee working on challenging categorization problems hoarded its banana slices until Hebb ran out. But the chimp continued to work—and after solving the problems gave Hebb the banana slices!
(Continues...)
Excerpted from The Science of Consequences by SUSAN M. SCHNEIDER Copyright © 2012 by Susan M. Schneider. Excerpted by permission of Prometheus Books. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.