Placebo cures. Global warming. Extraterrestrial life. Psychokinesis. In a time when scientific claims can sound as strange as science fiction (and can have a profound effect on individual life or public policy), assessing the merits of a far-out, supposedly scientific idea can be as difficult as it is urgent. Into the breach between helpless gullibility and unyielding skepticism steps physicist Robert Ehrlich, with an indispensable guide to making sense of "scientific" claims. A series of case studies of some of the most controversial (and, for the judging public, deeply vexing) topics in the natural and social sciences, Ehrlich's book serves as a primer for evaluating the evidence for the sort of strange-sounding ideas that can shape our lives.
A much-anticipated follow-up to his popular Nine Crazy Ideas in Science, this book takes up issues close to readers' everyday reality (issues such as global warming, the dangers of cholesterol, and the effectiveness of placebos) as well as questions that resonate through (and beyond) civic life: Is intelligent design a scientific alternative to evolution? Is homosexuality primarily innate? Are people getting smarter or dumber? In each case, Ehrlich shows readers how to use the tools of science to judge the accuracy of strange ideas and the trustworthiness of ubiquitous "experts." As entertaining as it is instructive, his book will make the work of living wisely a bit easier and more reliable for scientists and nonscientists alike.
Publisher: Princeton University Press
Product dimensions: 5.72(w) x 8.70(h) x 1.08(d)
About the Author
Robert Ehrlich is Professor of Physics at George Mason University and a Fellow of the American Physical Society. He is the author of twenty books, including Nine Crazy Ideas in Science: A Few Might Even Be True, Why Toast Lands Jelly-Side Down, and Turning the World Inside Out & 174 Other Simple Physics Demonstrations (all Princeton).
Read an Excerpt
Eight Preposterous Propositions: From the Genetics of Homosexuality to the Benefits of Global Warming
By Robert Ehrlich
Princeton University Press. Copyright © Robert Ehrlich.
All rights reserved.
WE LIVE IN AN AGE when the boundaries between science and science fiction are becoming increasingly blurred. It sometimes seems that nothing is too strange to be true. How can we decide which of the outlandish ideas that are constantly bombarding us might be true and which are complete nonsense? It's particularly tough to come to an informed judgment regarding ideas that have a scientific component and also carry huge political and economic stakes, such as global warming. In such a case, it's too easy to go by our political preferences or by the opinions of the last expert we saw on TV. This book will help you look at the "evidence" critically and judge for yourself. It is similar to Nine Crazy Ideas in Science: A Few Might Even Be True. In this book, though, we'll be looking at topics that are a bit less "sciency" and closer to everyday life. But we will still be using the tools of science to analyze each topic.
Scientists try to take as little for granted as possible. They are constantly asking how we know whether something is true, and whether alternative explanations are possible. Scientists also seem to be more comfortable with uncertainty than other people. Many scientists may believe that a given theory (say, that of human-caused global warming) is likely to be true, but be far from certain about it. Such uncertainty is disquieting to most people, who want to have as clear a picture as possible of our planet's future. Policy-makers, in particular, want informed scientific guidance before undertaking expensive solutions to problems of uncertain severity.
People facing large uncertainties and potentially large risks may be forced to rely on their gut instincts. That's OK in one sense. Making judgments based on our emotions is understandable if our large uncertainties reflect those that are inherent in an issue. But if the uncertainties are simply a matter of our own laziness in not looking into an issue deeply enough, then relying on our instincts is very unfortunate. Public policy on important issues may be left by default to those competing interests that are best able to manipulate public opinion.
Scientists approach a controversial theory from the opposite perspective of lawyers. Lawyers are paid to make the best case possible for their client, a person they may believe to be guilty. They want to know all the evidence on the other side, but only in order to refute it better and make their case stronger. If they can't refute the evidence, they'll try any trick in the book to make the jury discount it. Scientists, on the other hand, are more like jury members. While science does include its share of cheaters, self-deceivers, and self-promoters, scientists should judge a theory by fairly weighing all the evidence on each side.
Like juries, people who approach topics from a scientific perspective also may require different levels of certainty ("beyond a reasonable doubt" or the less strict "preponderance of the evidence"), depending on what the theory involves. I suspect that on global warming, for example, the preponderance of the evidence might be enough certainty for most people. But, unlike a jury, which has to make up its mind once and for all, scientists must continue to remain open-minded to contrary evidence even after they have accepted a theory as being probably true.
Why did I choose this particular set of eight topics? As in Nine Crazy Ideas, I wanted to explore some topics that are controversial, and I hope that I've included enough controversial ones to offend just about everyone. I also wanted topics that have important public policy implications in the real world and have ample scientific data on each side of the issue. Although initially I was reluctant to tackle any topics involving paranormal or psychic phenomena, I wound up including one (the possibility that objects can be influenced by thought alone) because a great deal of experimental data actually exists on this matter.
Although in many cases I found the evidence for an idea not to be credible, this is not a debunking book. I tried to look at each idea as objectively as possible. Even though I did have some initial biases with most of the eight ideas, I also found myself changing my opinion on them-in some cases two or three times! As a general rule, I believe that in approaching a new idea it is very important not to make up your mind too quickly.
Is there a formula to apply to a given controversial idea to see if it might be true? No, but you should ask yourself questions: How do the proponents of a new idea claim to know that it's true? How might the data be interpreted differently? How can the theory be tested? Let me give you an example from a topic that I almost decided to write about, but didn't. Many people believe that violence in the media causes children who view it to become violent. There is no question that a connection exists at some level between media and real-world violence, given the copycat phenomenon. Even some terrorists are said to have gotten ideas from viewing action-adventure movies! But here we're considering the more general question of whether media violence causes children exposed to it to become violent later in life. Not having looked at the research, I am agnostic on the question of whether this view is correct. Most social scientists believe it is true, but there are some who don't.1 What are some of the kinds of questions you might ask yourself to decide whether the theory that media violence causes real-life violence is true? Why don't you actually stop reading for a few minutes and make a list? You could then later compare your list to mine at the end of this chapter.
In evaluating a controversial idea it is useful to consult a wide range of sources. The Internet makes this quite easy, but it also makes it easy to wind up at the web sites of pseudo-experts. One very helpful summary of methods for checking that the information on a web site is reliable has been prepared by Jim Kapoun, an instruction librarian at Southwest State University; see his site at ala.org/acrl/undwebev.html. Kapoun's checklist was prepared to help college students evaluate the reliability of any web site, but its methods are appropriate for anyone. You can hone your skills at recognizing the real and pseudo-experts on any topic by answering the specific questions that Kapoun raises about any web site. Here are some additional criteria you can use; more often than not, pseudo-experts' sites share these features. Pseudo-experts usually (1) are certain of everything they claim, (2) cite their own research frequently, (3) try to impress you with fancy titles, (4) describe the suppression of their ideas by the establishment, and (5) have a clear agenda and maybe even a financial incentive. But aside from the pitfalls of being swayed by fancy-looking web sites that actually contain nonsense, the web is a fantastic tool for gaining valid information on virtually any topic.
It may strike you that I am being hypocritical in stressing the importance of being wary of pseudo-experts on any given topic. After all, am I-a physics professor-not just a pseudo-expert on many of the topics I'm writing about in this book? For that matter, how can you or I be expected to find the truth about controversial subjects that lie way outside our fields of expertise, when even the real experts disagree? I think it is actually possible for outsiders to do a competent job of analyzing evidence in many areas-but only if they have done their homework.
That homework involves learning a bit of statistics (the favorite tool of people who want to distort the truth) and enough of the basic science and vocabulary in a field to clearly understand the basis for what is being claimed. As I've already said, just follow the evidence, ask how each claim is demonstrated to be true, and don't make up your mind too quickly. If you decide too quickly, you could fall into the trap of filtering all evidence through your preconceived view and not giving contrary evidence sufficient weight-which we all do far too frequently. That trap is the very essence of prejudice.
The eight topics discussed in this book have varying degrees of credibility. In each case, after discussing the evidence on each side I give the idea a rating at the end of the chapter, according to how well the case for it has been demonstrated. In a previous book I used a "cuckoo" rating scale, which went from zero to four cuckoos, based on how crazy I considered an idea. Here I've abandoned the cuckoo scale in favor of a "flakiness" scale. An example may help clarify the difference between craziness and flakiness. Many of the ideas of modern physics are completely crazy, especially some of the paradoxical ideas of quantum physics. In fact, when one of the pioneers in this field, Wolfgang Pauli, made a presentation on one of his new crazy theories, he was told by the great physicist Niels Bohr: "We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct. My own feeling is that it is not crazy enough."
Bohr understood that great revolutionary advances in science always will sound crazy at the beginning, partly because they challenge conventional wisdom, and partly because they will initially be presented in a confusing incomplete form. The tidying-up done after the fact makes great scientific advances seem much more logical, but it obscures the important roles of intuition, imagination, chance, and even aesthetics in making scientific discoveries. The true scientific method is not quite as neat and logical as we sometimes portray it to be.
Although many of the ideas of quantum physics are still crazy after all these years (since the 1930s), they are certainly not "flaky," i.e., lacking in empirical evidence or internal consistency. Conversely, new theories may be flaky, but not sound crazy at all, if they fit into your view of how the world works. My new rating scheme based on flakiness goes from zero flakes, meaning a reasonable degree of confidence that the idea is true based on the evidence, to four flakes, meaning no credible evidence for the idea. A summary of my ratings for each idea can be found in the epilogue. Obviously, these ratings are subjective and influenced by my own biases, but I will reveal those biases if I have any, so that you can decide for yourself how honestly I've dealt with each idea.
Questions to Ask in Judging Whether A Really Causes B
This section illustrates some questions you might ask to decide whether a theory claiming that A causes B is well supported by the evidence. For specificity, we'll assume that A is media violence to which children are exposed and B is real-life violence that they later commit, but the same questions could serve as a template for just about any other topic. I suspect you may be able to come up with a number of additional questions to those listed.
- How exactly do the studies looking for a connection between A and B define and measure A? How do they define and measure B? Do their definitions seem reasonable?
- Have studies shown there is a correlation between A and B? How strong is the correlation?2
- If only some of the studies on A and B show there is a correlation between them, which studies seem better designed? Which studies have the greater statistical significance? (Just because a study finds a negative result, it doesn't mean A and B are unrelated; for example, the study sample may have been very small.)
- If some studies show that there is a strong correlation between A and B, can that correlation be explained in ways other than A being the cause of B?3
- How do the studies exploring the connection between A and B control for confounding variables, such as other possible causes of B? What are those other causes, and do the studies investigate how important they are compared to A?
- Do the studies compare situations in which A is present with "control groups" in which A is not present? If so, how can we be sure the control groups are representative?
- Do the studies show that the relation between A and B is a continuous one, that is, the more A is present, the more B is found later?
- If A really causes B, can we explain why in some places A is common but B is not? (Japan, for example, has a great deal of media violence, but little real-life violence.)
- If A really causes B, can we explain why A becomes at some times more prevalent while B becomes less prevalent? (Media violence has probably been increasing in the United States over time, but real-life violent crime has been decreasing for a number of years, at least until 2001.)
- Do the individuals doing studies on the connection between A and B appear to have strong ideological biases?
- Who is funding a given researcher's study?4
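The second question in the list above asks how strong a correlation between A and B is. Here is a minimal Python sketch of the standard way to quantify that: the Pearson correlation coefficient, which runs from -1 (perfect negative relationship) through 0 (no linear relationship) to +1 (perfect positive relationship). The data are invented purely for illustration; they do not come from any real study of media violence.

```python
# Hypothetical illustration: quantifying the strength of a correlation
# between A (say, hours of violent media viewed) and B (a later
# aggression score). All numbers below are invented for demonstration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

media_hours = [1, 2, 3, 5, 8, 13]   # A: invented exposure data
aggression = [2, 1, 4, 4, 7, 9]     # B: invented outcome scores

r = pearson_r(media_hours, aggression)
print(f"r = {r:.2f}")  # a value near +1 indicates a strong positive correlation
```

Note that even an r near 1 answers only the correlation question; the remaining questions in the list (confounding variables, control groups, reverse causation) are what separate correlation from causation.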
This is the first of many boxes throughout the book. Boxes contain information that can be skipped without disturbing the flow. Sometimes the information in the box may be a little more technical than the text. Other times it may provide definitions or examples of ideas discussed in the chapter. This box concerns the important subject of statistical significance. The statistical significance of any study is judged by the likelihood that the results could have been due to chance. Researchers in different fields have different standards regarding how small this likelihood should be for a result to be considered "significant." In many fields the criterion is "95 percent confidence," meaning that there is a 5 percent probability (often written p < 0.05) that the result could be due to chance, and therefore bogus. Clearly, the smaller the probability or p value, the higher the level of statistical significance and the more confidence we can have in a study's results. The simplest way that a study can achieve a higher level of statistical significance is to amass more data, which is sometimes costly or not feasible. (Continues...)
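The box's two key points, that a p value is the probability of getting a result at least as extreme by chance alone, and that more data raises significance, can be made concrete with a small Python sketch. The scenario and numbers are hypothetical: we ask how surprising an observed count would be if each outcome were really a 50/50 coin flip (the "chance" hypothesis).

```python
# A hedged sketch of statistical significance using a one-sided binomial
# test. Under the chance (null) hypothesis, each of n outcomes is 50/50,
# and the p value is the probability of seeing at least the observed
# count by luck alone. The scenarios below are invented for illustration.
from math import comb

def binomial_p_value(successes, n, p=0.5):
    """One-sided p value: P(X >= successes) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

p_small = binomial_p_value(16, 20)  # 16 of 20: hard to get by chance
p_large = binomial_p_value(11, 20)  # 11 of 20: easily due to chance

print(f"16/20: p = {p_small:.4f}")  # below 0.05: "significant" at 95% confidence
print(f"11/20: p = {p_large:.4f}")  # well above 0.05: not significant

# The same 55 percent rate becomes more significant with more data,
# which is the box's point about amassing data:
print(f"55/100: p = {binomial_p_value(55, 100):.4f}")  # smaller than 11/20
```

The 55-out-of-100 case has exactly the same observed rate as 11 out of 20, yet yields a smaller p value, illustrating why larger samples are the simplest route to higher statistical significance.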
Excerpted from Eight Preposterous Propositions by Robert Ehrlich. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Table of Contents
1. Introduction
2. Is Homosexuality Primarily Innate?
3. Is Intelligent Design a Scientific Alternative to Evolution?
4. Are People Getting Smarter or Dumber?
5. Can We Influence Matter by Thought Alone?
6. Should You Worry about Global Warming?
7. Is Complex Life in the Universe Very Rare?
8. Can a Sugar Pill Cure You?
9. Should You Worry about Your Cholesterol?
10. Epilogue
What People are Saying About This
Robert Ehrlich's Eight Preposterous Propositions, the sequel to his cleverly conceived and brilliantly executed Nine Crazy Ideas in Science, is sure to both infuriate and delight readers at the same time! If there isn't something in this book that you already agree and disagree with, then there will be by the time you finish it, because these are among the most politically and culturally controversial ideas in all of science. I am simply staggered at both the depth and scope of Ehrlich's research, yet at the same time I am struck by how fair he is to all sides in these contentious issues. If you want to get your mind around a complex issue in a modest amount of time, then Eight Preposterous Propositions is for you. Every college course in critical thinking should assign this book as a model of balanced treatment and fair mindedness.
Michael Shermer, publisher of Skeptic magazine, monthly columnist for Scientific American, and author of "Why People Believe Weird Things"