Despite their variety, consequences appear to follow a common set of scientific principles and share some similar effects in the brain, such as the "pleasure centers." Nature and nurture always work together, and scientists have demonstrated that learning from consequences predictably activates genes and restructures the brain. Applications are everywhere: at home, at work, and at school, and that's just for starters. Individually and societally, for example, self-control pits short-term against long-term consequences.
Ten years in the making, this award-winning book tells a tale ranging from genetics to neurotransmitters, from emotion to language, from parenting to politics, taking an inclusive interdisciplinary approach to show how something so deceptively simple can help make sense of so much.
About the Author
Susan M. Schneider, PhD (Stockton, CA), a biopsychologist and naturalist, has an international reputation in nature-nurture relations, mathematical modeling of animal behavior, and the principles of learning from consequences. She was a friend of B. F. Skinner, who mentored her at the start of her academic career. Schneider is currently a visiting scholar at the University of the Pacific. She has been a professor at St. Olaf College, Auburn University, and Florida International University, and a visiting research fellow at the University of Auckland.
Read an Excerpt
The Science of Consequences: How They Affect Genes, Change the Brain, and Impact Our World
By SUSAN M. SCHNEIDER
Prometheus Books. Copyright © 2012 Susan M. Schneider
All rights reserved.
Chapter One: Consequences Everywhere
"An elderly male chimpanzee [was] observed in the field by Dr. A. Kortlandt of Amsterdam's Zoological Laboratorium. It was the chimpanzee's custom to go to a certain place where he could see the sun go down. He went every evening and he would stay until the sun had set and the color was gone from the sky. Then he would turn away and find his place to sleep." —Sally Carrighar, Home to the Wilderness, 1973
Consequences provide the motivation that sends butterflies to flowers and people to the moon. The pursuit of happiness means the pursuit of consequences, large and small, sunsets included.
And consequences are everywhere. Some are immediate; others loom on the horizon to be anticipated or evaded. They're good, they're awful, they're everything in between. They work for tigers and for turtles—and for us. How ironic, then, that consequences and the science that focuses on them are so often overlooked.
Every day we work toward goals, goods, and incentives—consequences—for this minute, tomorrow, next year. Many rewards are obvious and immediate; others are subtle and easily missed. Some take a lifetime to achieve. Most are mundane, like paychecks and movies and smiles on friends' faces. Few remain unchanged in value, instead routinely transforming over time. Their variety seems infinite, far beyond biological drives like food, shelter, and sex. This chapter introduces that variety.
ORIGINS AND DEFINITIONS
Like most things, consequences started simple. We can't know what animal first learned from consequences, but flatworms, like the tiny planaria found in ponds, are reasonable candidates. Certainly both invertebrates and vertebrates are capable of this type of learning, and ancient planaria are considered common ancestors of both lines. On the planet for over 500 million years, they're the most primitive biological group to have "higher" neural features like brains (miniature ones, to be sure). Accordingly, their abilities have been researched extensively.
Despite looking like baby matchsticks that have taken to swimming, planaria can learn to work for consequences. In one study, some planaria had to move past an "electric eye" to shut off an unpleasant, intense light—which was a powerful reward. Others in a matched group were given the same intervals of light on and light off, but the intervals were independent of what these planaria did. Only the planaria with the light-off reward dependent on their behavior dramatically increased their interceptions of the electric eye—thus showing that this was truly learning from consequences and not just an effect of the light and dark alternation itself. These flatworms can even learn to work only when a signal is present: no signal, no reward.
This research illustrates what reinforcers are: By definition, reinforcers both depend on behaviors and sustain them. If a behavior gets going and keeps going because of a consequence, that consequence is a reinforcer. If a behavior declines because of a consequence, that consequence is a negative (a punisher).
Things that seem like rewards sometimes aren't: what matters is what actually happens, not the intention. Your Uncle Pete used to think that chucking you under the chin was rewarding. Wrong. Similarly, classroom reprimands sometimes function as reinforcers because of the attention that goes with them (from classmates as well as from the teacher). If a "reward" has no effect on a behavior, then it's not a reward: Again, it's what actually happens that matters. I will use reward and reinforcer interchangeably in this book.
Because planaria are very simple animals, the consequences that are effective for them are limited. More complex invertebrates like the octopus show more sophisticated learning, influenced by a greater variety of reinforcers. However, the range of effective consequences has mainly been explored in vertebrates, and it's large indeed.
WALTZING PIGEONS AND ROLLER-COASTER FISH: CONSEQUENCES ACROSS SPECIES
We love our pets when they behave like us—witness the dogs that enjoy watching TV and the owners who leave Spot's favorite program on. It may be harder to imagine common city pigeons doing something along these lines, but as a child, author and animal lover Gerald Durrell had a hand-raised pigeon that loved music and would snuggle close to the speaker of an old-fashioned record player. What's more, the bird performed distinctive dances to marches and waltzes.
Later, scientists found that pigeons not only distinguish between different types of music, they also categorize unfamiliar tunes in the same way that people do, down to different eras of classical music. And a number of species will work for music as a reward. The Java sparrow has been shown to prefer Bach over the twentieth-century dissonant composer Schoenberg, just like most of us. (Even rats feel the same way.)
Ducks, such as common eiders, have been observed bobbing down rapids and then flying back up to repeat the fun. Otters slide down mudbanks, and starlings down snowy rooftops. Naturalist Edwin Way Teale once observed six chimney swifts soaring in a natural wind upflow, diving back down, and repeating the ride again and again. They were not feeding, they were "sporting in the wind." What a different perspective on wild animals these episodes give us.
Even cold-blooded creatures get in on the act. I once laughed out loud at the sight of two puffer-type fish in an aquarium at an airport, of all places. They were taking turns deliberately swimming toward a strong circulation current that whirled them about and pushed them down, rather like a roller coaster. Other slimmer fish in the tank didn't appear to enjoy this and avoided the area. Perhaps the boxlike shape of the puffers changed their hydrodynamics. Or perhaps they were just well-fed youngsters with time on their hands.
What do these unusual animal reinforcers have in common with human rewards? A lot. The human love for roller coasters is based on reinforcers that are probably similar to those for the birds and the fish. Likewise, when it comes to musical harmony versus dissonance, the basic principles of hearing are similar in most vertebrates, even if the audible frequencies vary. Lacking ears, planaria are missing out.
GETTING STIMULATED: SENSORY CONSEQUENCES
But all creatures have sensory systems of some sort, even planaria, and they bring lots of rewarding possibilities.
As advertisers are well aware, in our constant-stimulation industrialized world, sheer sensory input is one of the most widespread reinforcers. Marketing experts outdo one another to hold our attention with rewarding visuals, sounds, and actions. Back in the nineteenth century, the first kaleidoscopes set sales records because of their novel combination of intrinsically reinforcing colors and shapes, which were perpetually complex, variable, and unpredictable—reinforcing on many levels (not unlike a computer screensaver). Even three-month-old babies will turn their heads so they can see complex patterns but not simple ones.
Humans aren't the only ones to appreciate these features: in laboratory settings, mice and baby chicks will work to view more intricate patterns. Chicks that had hatched just the day before, for example, already found looking at a design of stripes and semicircles more rewarding than eyeing plain gray.
The level of optimal stimulation varies. High levels are sometimes rewarding and emotionally energizing, other times just stressful. After a period of high stimulation, even thrill seekers may prefer simple, quiet surroundings. "Boredom" indicates a lack of reinforcers, not a lack of stimulation.
Any small reinforcers, such as the view outside a window, can be fountains in a desert of tedium. But even when we're doing something we enjoy—watching the Olympics or planting petunias—change is eventually welcome. Aristotle noted that "we cannot be moulded into only one habit, because each desire can operate only for short periods and must give way in turn to others."
THE SPICE OF LIFE: VARIETY AS A CONSEQUENCE
Variety is usually a reinforcer, as formal research confirms, and not just for humans. Pets and zoo animals get bored too. Monkeys will work for the opportunity simply to view other monkeys, or a toy train, or even the normal room activity of people, and these consequences remain effective for long periods. So monkeys like watching us as much as we like watching them—a lot. Animal behavior is so appealing that zoos and public aquariums enjoy big crowds, and millions of people have pets.
Variability is an important part of what is rewarding, in animal behavior and in plenty of other places. Sports are inherently variable; so are movie plots (well, good ones). Yes, we reread favorite books, but seldom immediately after we've finished them—and few would choose to watch A Hard Day's Night all day and all night, even though it's a classic. Some reinforcers can bear repetition, as when a child listens to the same song for hours (driving everyone else nuts!). But children eventually learn to ration that song, choosing variety instead.
Teachers do well to bear this in mind. A lack of variability in voice inflection can be the single best predictor of poor teaching evaluations. Relatedly, I once had an excellent student who got A's on all his quizzes but tired of seeing comments in the mode of "Great" and "Nice job." He made up a list of alternatives like "Publish it" and "Hey-ho!"
The desire for variety can be subtle. We may not realize why a particular writer strikes us as dull despite interesting content, but monotonous sentence length and rhythm might explain it. As famed writing instructor Gary Provost noted, "The ear must have variety or the mind will go out to lunch." In music, too, most successful composers across genres find ways to vary their melodies, harmonies, and rhythms to maintain interest. If that's not enough, music players offer a "shuffle play" feature that presents the tracks in random order, different each time.
The principle applies even to ordinary household tasks. From bestselling author and cook James Peterson: "I rarely prepare dishes according to an exact recipe because I never like to cook the same thing twice—I need to invent as I go along, or I get bored." After the same chow three days in a row, even rats prefer new foods, new flavors.
THE CREATIVE CONSEQUENCE
What could be more intriguing than the discovery that variability is not only reinforcing but is itself a reinforceable characteristic of our behavior? Creativity is not a zero-sum game, such that we each have only a limited amount. Instead, it's a "nurturable" that blossoms with encouragement: if you feed it, it will grow. And it's not limited to people.
Back in 1969, then dolphin trainer Karen Pryor and her colleagues knew better than most how inventive and adaptable dolphins are. They decided to see just how far dolphin creativity would go by rewarding only behaviors that they had never seen two particular dolphins do before. The dolphins met the challenge, coming up with quirky movements and stunts that they never would have had occasion for in the wild. If dolphins could do this, how about kids? During the same period, scientists rewarded a few toddlers for building-block constructions that differed from prior designs—and the youngsters' creativity took off. Two of the three children came up with four times as many new designs when they were praised for doing so, compared to a baseline period with no particular consequences.
Experimental psychologist Allen Neuringer's decades-long research program established the effect conclusively. His ordinary pigeons pecked two projecting buttons called keys, and a reinforceable unit was eight key pecks in any combination of lefts and rights. To get delicious grains, the pigeons had to produce eight-peck sequences that differed from each of their last fifty sequences, and they did. They weren't accomplishing this feat by memorizing; instead, they behaved like random number generators, an efficient solution.

Can people do the same? First we have to overcome misconceptions of what "random" means mathematically. Neuringer rewarded participants for producing what they thought were random sequences of digits, ranked the sequences on a number of statistical criteria, and found that his volunteers did rather poorly. He discovered, for example, that we tend to alternate digits more often than a true random series does. But with practice and feedback, people, like pigeons, became reliably able to generate random-appearing sequences, a skill that used to be considered extremely difficult.
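The "lag" contingency behind this experiment is simple to state precisely: a response sequence earns reinforcement only if it differs from each of the previous N sequences. Here is a minimal sketch in Python (my own illustration, with an invented helper name and a tiny lag of 3 so the behavior is easy to follow; the actual pigeon studies used a lag of 50 over eight-peck left/right sequences):

```python
from collections import deque

def make_lag_checker(lag=3):
    """Return a function implementing a 'lag N' variability contingency:
    a sequence is reinforced only if it differs from each of the last
    `lag` sequences emitted (reinforced or not)."""
    recent = deque(maxlen=lag)  # sliding window of recent sequences

    def check(sequence):
        sequence = tuple(sequence)          # make lists and strings comparable
        reinforced = sequence not in recent # novel relative to the window?
        recent.append(sequence)             # every emission enters the window
        return reinforced

    return check

check = make_lag_checker(lag=3)
check("LRLRLRLR")  # True: nothing in the window yet
check("LRLRLRLR")  # False: exact repeat of the previous sequence
check("LLRRLLRR")  # True: differs from everything in the window
```

With a lag of 50 and eight binary pecks (256 possible sequences), behaving like a random generator satisfies the contingency on most trials, which is why random-like responding is such an efficient solution.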
We may not often need to act randomly—quite the reverse. But if we're regularly reinforced—or punished—for variable behaviors like exploring or showing curiosity, guess what can happen? Just as with the kids' building-block designs, our creativity can soar—or plummet. Many Fortune 500 companies are quite aware of this, Google's innovation policies being one well-known example.
TAKING ADVANTAGE OF VARIETY
Some consequences for exploring, both positive and negative, are present naturally: variety, for example. We've seen how even little changes of scenery can be refreshing. Riding in a moving vehicle is reinforcing for those with good vision. Dogs get a kick out of the variety of scents to be sniffed on a short walk. On the other hand, familiarity is reinforcing under other circumstances, and exploring can mean leaving what is not only familiar but also safe. You don't find wild mice exploring in broad daylight but rather at night when they're at less risk; if their existing foraging locations fail them, they must seek new ones. They probably discovered their existing sources through exploring, so there's a history of reinforcement that helps.
The principle applies more broadly, of course. When do we challenge ourselves with the new instead of sticking to the old? Most of us like some sort of balance between the two, in the same way as optimal stimulation. Still, if we've had good luck (that is, been reinforced) seeing movies by particular directors, or reading books by particular authors, we're likely to stick with them. Unknowns, with their novelty and more questionable reward values, have a harder time vying for our attention.
Reinforcers like movies and books automatically offer variety. Even better, a "generalized reinforcer," like money, can be exchanged for a great variety of other reinforcers. Point systems take advantage of the power of these consequence choices. At one Michigan school, reward options included fun time with the principal for 75 points and free time in the computer lab for 100 points. Researchers who study people often have to provide varied rewards to keep their participants motivated. A common solution is to offer lottery tickets for a chance to win prizes, such as gift certificates or pizzas. What about animals? Orcas (killer whales) at one SeaWorld never knew which reinforcers would come next: stroking and scratching, social attention, toys, and so on. The result was that "the shows can be run almost entirely without the standard fish reinforcers; the animals get their food at the end of the day."
Animals with variety in their lives are healthier and happier, just like people. Environmental stimulants like toys are best varied, and those that produce unpredictable movements tend to be especially valuable and long-lasting. One big success at one zoo was a sturdy swinging bag, hardly naturalistic, but rubbed and butted for hours by a rhino (see chapter 13 for more examples).
THE POSITIVE SIDE OF PROBLEMS
Monkeys can so love solving mechanical and other "educational" puzzles that they will sometimes do so without any additional reward. The activity itself is intrinsically reinforcing. Karen Pryor reported on a challenge for a rough-toothed porpoise: it had to learn to pick, among several choices, the one that matched a sample. In the session where the porpoise first made notable progress, it continued working even when it was no longer eating the fish it had earned—as if it enjoyed something about the learning itself, just as we do. (Of course, social approval from the trainer certainly helped.) Along these lines, here's an anecdote about biopsychologist Donald Hebb: A chimpanzee working on challenging categorization problems hoarded its banana slices until Hebb ran out. But the chimp continued to work—and after solving the problems gave Hebb the banana slices!
Excerpted from The Science of Consequences by SUSAN M. SCHNEIDER Copyright © 2012 by Susan M. Schneider. Excerpted by permission of Prometheus Books. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Part 1 Consequences and How Nature-Nurture Really Works
Chapter 1 Consequences Everywhere
Origins and Definitions
Waltzing Pigeons and Roller-Coaster Fish: Consequences across Species
Getting Stimulated: Sensory Consequences
The Spice of Life: Variety as a Consequence
The Creative Consequence
Taking Advantage of Variety
The Positive Side of Problems
Taking Control
Chapter 2 Consequences and Evolution: The Cause That Works Backward
Dance of the Balloons
Flexible Instincts
Bugs That Learn
Which Came First?
The Evolution of Consequences
Bird Beaks Pointing the Way: How Consequences Lead Evolution
The Cause That Works Backward
Chapter 3 Genes and Consequences
Meet Your Genome
Getting Turned On
The Genetics of Consequences
Interactions Everywhere
What's Inherited-and What Isn't
Epigenetics: New Kid in the Neighborhood
Chapter 4 Neuroscience and Consequences
Enrichment on the Brain
Neurons and Connections
Rewarding Chemicals: Dopamine and Its Cousins
Pleasure Centers
The Sky's the Limit: Neuroplasticity and Real-Life Applications
Part 2 There's a Science of Consequences?
Chapter 5 Consequences on Schedule: Simple Principles with Surprising Outcomes
False Consequences
Consequences on Schedule
Work-Based Schedules and the Power of Unpredictability
Consequences on Time
Progress and Perseverance
Making the Most of Schedules
Schedules Everywhere
Chapter 6 The Dark Side of Consequences
Shades of Gray
Choosing Pain
Making Negatives Work-Positively
Chapter 7 Choices and Signals
The Matching Game
So What Can the Matching Law Do?
Winning Matches
Getting the Signal
A Smorgasbord of Signals
Of Signal Importance
Chapter 8 Pavlov and Consequences: An Essential Partnership
Compensating Reactions and Drug Tolerance
Not All in Your Head: The Placebo Effect and Other Mind-Body Surprises
Getting Emotional
Value, Anticipation, and Learned Consequences
Learned and Unlearned
Chapter 9 Observing and Attending
The Many Roles of Attention
Not-So-Simple Observations
Beneath the Radar: Consequences without Awareness
It's Automatic
Observing Others
The Ultimate in Observing: Imitation
Chapter 10 Thinking and Communicating
Categories Large and Small
Simple Communication
The Understanding Animal: Simple Language
Human Language and Its Consequences
Same Word, Different Consequence
Babble On
Language Learning in Real Life
Strictly Private
Making Up the Rules
Language and Biology
Part 3 Shaping Destinies
Chapter 11 Everyday Consequences
Creating Rewards
How We Treat Each Other
Shaping the Future
The Challenging Side of Parenting
What Marriage Can Be
Real Self-Esteem
Chapter 12 Fighting the Impulse: Self-Control, Anyone?
Detecting Delays
The Disappearing Reward
The Marshmallow and the Kid
Fighting the Impulse: Using What We Know
Taking Charge of Weight
Chapter 13 Endangered Species, Undercover Crows, and the Family Dog: Applications for Animals
Animal Companions
At the Zoo: Animal Care the Easy Way
Life at the Zoo
From Endangered Species to Farm Animals
Animals That Save Our Lives
Chapter 14 The Rewards of Education and Work
There Are No Shortcuts
Consequences in Classroom Management
Maximizing Potential
Successful Programs
More on Motivation
Consequences at Work
Chapter 15 Help for Addiction, Autism, and Other Conditions
Churchill's "Black Dog": Depression
Anxiety and Fear
Getting Unhooked: Addiction
Attention Deficit Hyperactivity Disorder: Drugs or Consequences?
Brief Notes
Chapter 16 Consequences on a Grand Scale: Society, the Long Term, and the Planet
Obedience and Disobedience
Overcoming Prejudice