Why psychology is in peril as a scientific discipline, and how to save it
Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead.
In this unflinchingly candid manifesto, Chris Chambers draws on his own experiences as a working scientist to reveal a dark side to psychology that few of us ever see. Using the seven deadly sins as a metaphor, he shows how practitioners are vulnerable to powerful biases that undercut the scientific method, how they routinely torture data until it produces outcomes that can be published in prestigious journals, and how studies are much less reliable than advertised. He reveals how a culture of secrecy denies the public and other researchers access to the results of psychology experiments, how fraudulent academics can operate with impunity, and how an obsession with bean counting creates perverse incentives for academics. Left unchecked, these problems threaten the very future of psychology as a science, but help is here.
Outlining a core set of best practices that can be applied across the sciences, Chambers demonstrates how all these sins can be corrected by embracing open science, an emerging philosophy that seeks to make research and its outcomes as transparent as possible.
Publisher: Princeton University Press
Product dimensions: 6.40(w) x 9.30(h) x 1.20(d)
About the Author
Chris Chambers is professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network.
Read an Excerpt
The Seven Deadly Sins of Psychology
A Manifesto for Reforming the Culture of Scientific Practice
By Chris Chambers
PRINCETON UNIVERSITY PRESS
Copyright © 2017 Chris Chambers
All rights reserved.
The Sin of Bias
The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it.
— Francis Bacon, 1620
History may look back on 2011 as the year that changed psychology forever. It all began when the Journal of Personality and Social Psychology published an article called "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect." The paper, written by Daryl Bem of Cornell University, reported a series of experiments on psi (ψ), or "precognition," a supernatural phenomenon that supposedly enables people to see events in the future. Bem, himself a reputable psychologist, took an innovative approach to studying psi. Instead of using discredited parapsychological methods such as card tasks or dice tests, he selected a series of gold-standard psychological techniques and modified them in clever ways.
One such method was a reversed priming task. In a typical priming task, people decide whether a picture shown on a computer screen is linked to a positive or negative emotion. So, for example, the participant might decide whether a picture of kittens is pleasant or unpleasant. If a word that "primes" the same emotion is presented immediately before the picture (such as the word "joy" followed by the picture of kittens), then people find it easier to judge the emotion of the picture, and they respond faster. But if the prime and target trigger opposite emotions then the task becomes more difficult because the emotions conflict (e.g., the word "murder" followed by kittens). To test for the existence of precognition, Bem reversed the order of this experiment and found that primes delivered after people had responded seemed to influence their reaction times. He also reported similar "retroactive" effects on memory. In one of his experiments, people were overall better at recalling specific words from a list that were also included in a practice task, with the catch that the so-called practice was undertaken after the recall task rather than before. On this basis, Bem argued that the participants were able to benefit in the past from practice they had completed in the future.
As you might expect, Bem's results generated a flood of confusion and controversy. How could an event in the future possibly influence someone's reaction time or memory in the past? If precognition truly did exist, in even a tiny minority of the population, how is it that casinos or stock markets turn profits? And how could such a bizarre conclusion find a home in a reputable scientific journal?
Scrutiny at first turned to Bem's experimental procedures. Perhaps there was some flaw in the methods that could explain his results, such as failing to randomize the order of events, or some other subtle experimental error. But these aspects of the experiment seemed to pass muster, leaving the research community facing a dilemma. If true, precognition would be the most sensational discovery in modern science. We would have to accept the existence of time travel and reshape our entire understanding of cause and effect. But if false, Bem's results would instead point to deep flaws in standard research practices — after all, if accepted practices could generate such nonsensical findings, how can any published findings in psychology be trusted? And so psychologists faced an unenviable choice between, on the one hand, accepting an impossible scientific conclusion and, on the other hand, swallowing an unpalatable professional reality.
The scientific community was instinctively skeptical of Bem's conclusions. Responding to a preprint of the article that appeared in late 2010, the psychologist Joachim Krueger said: "My personal view is that this is ridiculous and can't be true." After all, extraordinary claims require extraordinary evidence, and despite being published in a prestigious journal, the statistical strength of Bem's evidence was considered far from extraordinary.
Bem himself realized that his results defied explanation and stressed the need for independent researchers to replicate his findings. Yet doing so proved more challenging than you might imagine. One replication attempt by Chris French and Stuart Ritchie showed no evidence whatsoever of precognition but was rejected by the same journal that published Bem's paper. In this case the journal didn't even bother to peer review French and Ritchie's paper before rejecting it, explaining that it "does not publish replication studies, whether successful or unsuccessful." This decision may sound bizarre, but, as we will see, contempt for replication is common in psychology compared with more established sciences. The most prominent psychology journals selectively publish findings that they consider to be original, novel, neat, and above all positive. This publication bias, also known as the "file-drawer effect," means that studies that fail to show statistically significant effects, or that reproduce the work of others, have such low priority that they are effectively censored from the scientific record. They either end up in the file drawer or are never conducted in the first place.
Publication bias is one form of what is arguably the most powerful fallacy in human reasoning: confirmation bias. When we fall prey to confirmation bias, we seek out and favor evidence that agrees with our existing beliefs, while at the same time ignoring or devaluing evidence that doesn't. Confirmation bias corrupts psychological science in several ways. In its simplest form, it favors the publication of positive results — that is, hypothesis tests that reveal statistically significant differences or associations between conditions (e.g., A is greater than B; A is related to B, vs. A is the same as B; A is unrelated to B). More insidiously, it contrives a measure of scientific reproducibility in which it is possible to replicate but never falsify previous findings, and it encourages altering the hypotheses of experiments after the fact to "predict" unexpected outcomes. One of the most troubling aspects of psychology is that the academic community has refused to unanimously condemn such behavior. On the contrary, many psychologists acquiesce to these practices and even embrace them as survival skills in a culture where researchers must publish or perish.
Within months of appearing in a top academic journal, Bem's claims about precognition were having a powerful, albeit unintended, effect on the psychological community. Established methods and accepted publishing practices fell under renewed scrutiny for producing results that appear convincing but are almost certainly false. As psychologist Eric-Jan Wagenmakers and colleagues noted in a statistical demolition of Bem's paper: "Our assessment suggests that something is deeply wrong with the way experimental psychologists design their studies and report their statistical results." With these words, the storm had broken.
A Brief History of the "Yes Man"
To understand the different ways that bias influences psychological science, we need to take a step back and consider the historical origins and basic research on confirmation bias. Philosophers and scholars have long recognized the "yes man" of human reasoning. As early as the fifth century BC, the historian Thucydides noted words to the effect that "[w]hen a man finds a conclusion agreeable, he accepts it without argument, but when he finds it disagreeable, he will bring against it all the forces of logic and reason." Similar sentiments were echoed by Dante, Bacon, and Tolstoy. By the mid-twentieth century, the question had evolved from one of philosophy to one of science, as psychologists devised ways to measure confirmation bias in controlled laboratory experiments.
Since the mid-1950s, a convergence of studies has suggested that when people are faced with a set of observations (data) and a possible explanation (hypothesis), they favor tests of the hypothesis that seek to confirm it rather than falsify it. Formally, what this means is that people are biased toward estimating the probability of the data if a particular hypothesis is true, p(data|hypothesis), rather than the probability of the data if the hypothesis is false, p(data|~hypothesis). In other words, people prefer to ask questions to which the answer is "yes," ignoring the maxim of philosopher Georg Henrik von Wright that "no confirming instance of a law is a verifying instance, but ... any disconfirming instance is a falsifying instance."
Psychologist Peter Wason was one of the first researchers to provide laboratory evidence of confirmation bias. In one of several innovative experiments conducted in the 1960s and 1970s, he gave participants a sequence of numbers, such as 2-4-6, and asked them to figure out the rule that produced it (in this case: three numbers in increasing order of magnitude). Having formed a hypothesis, participants were then allowed to write down their own sequence, after which they were told whether their sequence was consistent or inconsistent with the actual rule. Wason found that participants showed a strong bias to test various hypotheses by confirming them, even when the outcome of doing so failed to eliminate plausible alternatives (such as three even numbers). Wason's participants used this strategy despite being told in advance that "your aim is not simply to find numbers which conform to the rule, but to discover the rule itself."
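Wason's 2-4-6 result can be made concrete with a small toy sketch (the rule names and sequences here are illustrative, not Wason's actual materials). The point is that sequences chosen to confirm a hypothesis like "successive even numbers" also satisfy the true rule "any increasing triple," so confirmatory tests can never distinguish the two; only a sequence that violates the hypothesis can falsify it.

```python
def true_rule(seq):
    # The experimenter's actual rule: any three numbers in strictly
    # increasing order.
    return seq[0] < seq[1] < seq[2]

def my_hypothesis(seq):
    # A typical participant's guess after seeing 2-4-6: numbers that
    # increase by two each time.
    return seq[1] == seq[0] + 2 and seq[2] == seq[1] + 2

# Confirmatory tests: sequences chosen to fit the hypothesis. Every
# one also fits the true rule, so "yes" answers teach us nothing.
confirming = [(2, 4, 6), (10, 12, 14), (20, 22, 24)]

# A disconfirming probe: violates the hypothesis yet fits the true
# rule. A "yes" here is the only outcome that can falsify the guess.
probe = (1, 2, 3)
```

Participants who test only `confirming`-style sequences receive an unbroken string of "yes" feedback and announce the wrong rule with full confidence; a single probe like `(1, 2, 3)` would have exposed the error.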
Since then, many studies have explored the basis of confirmation bias in a range of laboratory-controlled situations. Perhaps the most famous of these is the ingenious Selection Task, which was also developed by Wason in 1968. The Selection Task works like this. Suppose I were to show you four cards on a table, labeled D, B, 3, and 7 (see figure 1.1). I tell you that if the card shows a letter on one side then it will have a number on the other side, and I provide you with a more specific rule (hypothesis) that may be true or false: "If there is a D on one side of any card, then there is a 3 on its other side." Finally, I ask you to tell me which cards you would need to turn over in order to determine whether this rule is true or false. Leaving an informative card unturned or turning over an uninformative card (i.e., one that doesn't test the rule) would be considered an incorrect response. Before reading further, take a moment and ask yourself, which cards would you choose and which would you avoid?
If you chose D and avoided B then you're in good company. Both responses are correct and are made by the majority of participants. Selecting D seeks to test the rule by confirming it, whereas avoiding B is correct because the flip side would be uninformative regardless of the outcome.
Did you choose 3? Wason found that most participants did, even though 3 should be avoided. This is because if the flip side isn't a D, we learn nothing — the rule states that cards with D on one side are paired with a 3 on the other, not that D is the only letter to be paired with a 3 (drawing such a conclusion would be a logical fallacy known as "affirming the consequent"). And even if the flip side is a D then the outcome would be consistent with the rule but wouldn't confirm it, for exactly the same reason.
Finally, did you choose 7 or avoid it? Interestingly, Wason found that few participants selected 7, even though doing so is correct — in fact, it is just as correct as selecting D. If the flip side to 7 were discovered to be a D then the rule would be categorically disproven — a logical test of what's known as the "contrapositive." And herein lies the key result: the fact that most participants correctly select D but fail to select 7 provides evidence that people seek to test rules or hypotheses by confirming them rather than by falsifying them.
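The underlying logic of the Selection Task can be sketched in a few lines of code (a hypothetical helper, not from the book): a card is worth turning over only if its hidden side could falsify the rule "if there is a D on one side, then there is a 3 on the other."

```python
def could_falsify(visible):
    """Return True if flipping the card with this visible face could
    falsify the rule 'if D on one side, then 3 on the other'.

    The rule is violated only by a card with D on one side and a
    number other than 3 on the other. Letter cards hide a number;
    number cards hide a letter.
    """
    if visible.isalpha():
        # A letter card: only a D card can violate the rule (if its
        # hidden number turns out not to be 3). B is uninformative.
        return visible == "D"
    # A number card: only a non-3 card can violate the rule (if its
    # hidden letter turns out to be D). 3 can never falsify anything.
    return visible != "3"

# The four cards on the table, as in figure 1.1.
informative = [card for card in ["D", "B", "3", "7"] if could_falsify(card)]
```

Running the check confirms the analysis in the text: only D and 7 are informative, yet most participants select D and 3, the choices that feel like confirmation.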
Wason's findings provided the first laboratory-controlled evidence of confirmation bias, but centuries of informal observations already pointed strongly to its existence. In a landmark review, psychologist Raymond Nickerson noted how confirmation bias dominated the witchcraft trials of the Middle Ages. Many of these proceedings were a foregone conclusion, seeking only to obtain evidence that confirmed the guilt of the accused. For instance, to test whether a person was a witch, the suspect would often be plunged into water with stones tied to her feet. If she rose then she would be proven a witch and burned at the stake. If she drowned then she was usually considered innocent or a witch of lesser power. Either way, being suspected of witchcraft was tantamount to a death sentence within a legal framework that sought only to confirm accusations. Similar biases are apparent in many aspects of modern life. Popular TV programs such as CSI fuel the impression that forensic science is bias-free and infallible, but in reality the field is plagued by confirmation bias. Even at the most highly regarded agencies in the world, forensic examiners can be biased toward interpreting evidence that confirms existing suspicions. Doing so can lead to wrongful convictions, even when evidence is based on harder data such as fingerprints and DNA tests.
Confirmation bias also crops up in the world of science communication. For many years it was assumed that the key to more effective public communication of science was to fill the public's lack of knowledge with facts — the so-called deficit model. More recently, however, this idea has been discredited because it fails to take into account the prior beliefs of the audience. The extent to which we assimilate new information about popular issues such as climate change, vaccines, or genetically modified foods is susceptible to a confirmation bias in which evidence that is consistent with our preconceptions is favored, while evidence that flies in the face of them is ignored or attacked. Because of this bias, simply handing people more facts doesn't lead to more rational beliefs. The same problem is reflected in politics. In his 2012 book The Geek Manifesto, Mark Henderson laments the cherry-picking of evidence by politicians in order to reinforce a predetermined agenda. The resulting "policy-based evidence" is a perfect example of confirmation bias in practice and represents the antithesis of how science should be used in the formulation of evidence-based policy.
If confirmation bias is so irrational and counterproductive, then why does it exist? Many different explanations have been suggested based on cognitive or motivational factors. Some researchers have argued that it reflects a fundamental limit of human cognition. According to this view, the fact that we have incomplete information about the world forces us to rely on the memories that are most easily retrieved (the so-called availability heuristic), and this reliance could fuel a bias toward what we think we already know. On the other hand, others have argued that confirmation bias is the consequence of an innate "positive-test strategy" — a term coined in 1987 by psychologists Joshua Klayman and Young-Won Ha. We already know that people find it easier to judge whether a positive statement is true or false (e.g., "there are apples in the basket") compared to a negative one ("there are no apples in the basket"). Because judgments of presence are easier than judgments of absence, it could be that we prefer positive tests of reality over negative ones. By taking the easy road, this bias toward positive thoughts could lead us to wrongly accept evidence that agrees positively with our prior beliefs.
Against this backdrop of explanations for why an irrational bias is so pervasive, psychologists Hugo Mercier and Dan Sperber have suggested that confirmation bias is in fact perfectly rational in a society where winning arguments is more important than establishing truths. Throughout our upbringing, we are taught to defend and justify the beliefs we hold, and less so to challenge them. By interpreting new information according to our existing preconceptions we boost our self-confidence and can argue more convincingly, which in turn increases our chances of being regarded as powerful and socially persuasive. This observation leads us to an obvious proposition: If human society is constructed so as to reward the act of winning rather than being correct, who would be surprised to find such incentives mirrored in scientific practices?
Neophilia: When the Positive and New Trumps the Negative but True
The core of any research psychologist's career — and indeed that of many scientists in general — is the rate at which they publish empirical articles in high-quality peer-reviewed journals. Since the peer-review process is competitive (and sometimes extremely so), publishing in the most prominent journals equates to a form of "winning" in the academic game of life.
Journal editors and reviewers assess submitted manuscripts on many grounds. They look for flaws in the experimental logic, the research methodology, and the analyses. They study the introduction to determine whether the hypotheses are appropriately grounded in previous research. They scrutinize the discussion to decide whether the paper's conclusions are justified by the evidence. But reviewers do more than merely critique the rationale, methodology, and interpretation of a paper. They also study the results themselves. How important are they? How exciting? How much have we learned from this study? Is it a breakthrough? One of the central (and as we will see, lamentable) truths in psychology is that exciting positive results are a key factor in publishing — and often a requirement. The message to researchers is simple: if you want to win in academia, publish as many papers as possible in which you provide positive, novel results.
Excerpted from The Seven Deadly Sins of Psychology by Chris Chambers. Copyright © 2017 Chris Chambers. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
1 The Sin of Bias
A Brief History of the “Yes Man”
Neophilia: When the Positive and New Trumps the Negative but True
Replicating Concepts Instead of Experiments
Reinventing History
The Battle against Bias
2 The Sin of Hidden Flexibility
Peculiar Patterns of p
Ghost Hunting
Unconscious Analytic “Tuning”
Biased Debugging
Are Research Psychologists Just Poorly Paid Lawyers?
Solutions to Hidden Flexibility
3 The Sin of Unreliability
Sources of Unreliability in Psychology
Reason 1: Disregard for Direct Replication
Reason 2: Lack of Power
Reason 3: Failure to Disclose Methods
Reason 4: Statistical Fallacies
Reason 5: Failure to Retract
Solutions to Unreliability
4 The Sin of Data Hoarding
The Untold Benefits of Data Sharing
Failure to Share
Secret Sharing
How Failing to Share Hides Misconduct
Making Data Sharing the Norm
Grassroots, Carrots, and Sticks
Unlocking the Black Box
Preventing Bad Habits
5 The Sin of Corruptibility
The Anatomy of Fraud
The Thin Gray Line
When Junior Scientists Go Astray
Kate’s Story
The Dirty Dozen: How to Get Away with Fraud
6 The Sin of Internment
The Basics of Open Access Publishing
Why Do Psychologists Support Barrier-Based Publishing?
Hybrid OA as Both a Solution and a Problem
Calling in the Guerrillas
An Open Road
7 The Sin of Bean Counting
Roads to Nowhere
Impact Factors and Modern-Day Astrology
Wagging the Dog
The Murky Mess of Academic Authorship
Roads to Somewhere
8 Redemption
Solving the Sins of Bias and Hidden Flexibility
Registered Reports: A Vaccine against Bias
Preregistration without Peer Review
Solving the Sin of Unreliability
Solving the Sin of Data Hoarding
Solving the Sin of Corruptibility
Solving the Sin of Internment
Solving the Sin of Bean Counting
Concrete Steps for Reform