Expert Political Judgment: How Good Is It? How Can We Know?


Overview

"This book is a landmark in both content and style of argument. It is a major advance in our understanding of expert judgment in the vitally important and almost impossible task of political and strategic forecasting. Tetlock also offers a unique example of even-handed social science. This may be the first book I have seen in which the arguments and objections of opponents are presented with as much care as the author's own position."—Daniel Kahneman, Princeton University, recipient of the 2002 Nobel Prize in economic sciences

"This book is a major contribution to our thinking about political judgment. Philip Tetlock formulates coding rules by which to categorize the observations of individuals, and arrives at several interesting hypotheses. He lays out the many strategies that experts use to avoid learning from surprising real-world events."—Deborah W. Larson, University of California, Los Angeles

"This is a marvelous book—fascinating and important. It provides a stimulating and often profound discussion, not only of what sort of people tend to be better predictors than others, but of what we mean by good judgment and the nature of objectivity. It examines the tensions between holding to beliefs that have served us well and responding rapidly to new information. Unusual in its breadth and reach, the subtlety and sophistication of its analysis, and the fair-mindedness of the alternative perspectives it provides, it is a must-read for all those interested in how political judgments are formed."—Robert Jervis, Columbia University

"This book is just what one would expect from America's most influential political psychologist: Intelligent, important, and closely argued. Both science and policy are brilliantly illuminated by Tetlock's fascinating arguments."—Daniel Gilbert, Harvard University


Editorial Reviews

The New Yorker - Louis Menand
It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself.
Financial Times - Gavyn Davies
The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand.
Toronto Star - Jim Coyle
Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication.
Fortune - Geoffrey Colvin
[This] book . . . marshals powerful evidence to make [its] case. Expert Political Judgment . . . summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all.
Australian Financial Review - Paul Monk
Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting—by academics or intelligence analysts, independent pundits, journalists or institutional specialists—with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book.
PsycCRITIQUES - William D. Crano
Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the best sense of both words.
Financial Times - John Kay
Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments.
Business Times - Leon Hadar
Why do most political experts prove to be wrong most of the time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking—but how he or she is thinking.
Choice
Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts.
From the Publisher
Winner of the 2006 Grawemeyer Award for Ideas Improving World Order

Winner of the 2006 Woodrow Wilson Foundation Award, American Political Science Association

Winner of the 2006 Robert E. Lane Award, Political Psychology Section of the American Political Science Association

"It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself."—Louis Menand, The New Yorker

"The definitive work on this question. . . . Tetlock systematically collected a vast number of individual forecasts about political and economic events, made by recognised experts over a period of more than 20 years. He showed that these forecasts were not very much better than making predictions by chance, and also that experts performed only slightly better than the average person who was casually informed about the subject in hand."—Gavyn Davies, Financial Times

"Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication."—Jim Coyle, Toronto Star

"Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts."—Choice

"[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all."—Geoffrey Colvin, Fortune

"Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting—by academics or intelligence analysts, independent pundits, journalists or institutional specialists—with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book."—Paul Monk, Australian Financial Review

"Phillip E. Tetlock does a remarkable job . . . applying the high-end statistical and methodological tools of social science to the alchemistic world of the political prognosticator. The result is a fascinating blend of science and storytelling, in the the best sense of both words."—William D. Crano, PsysCRITIQUES

"Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments."—John Kay, Financial Times

"Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking—but how he or she is thinking."—Leon Hadar, Business Times

Toronto Star
Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication.
— Jim Coyle
Choice
Tetlock uses science and policy to brilliantly explore what constitutes good judgment in predicting future events and to examine why experts are often wrong in their forecasts.
Fortune
[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all.
— Geoffrey Colvin
Australian Financial Review
Philip Tetlock has just produced a study which suggests we should view expertise in political forecasting—by academics or intelligence analysts, independent pundits, journalists or institutional specialists—with the same skepticism that the well-informed now apply to stockmarket forecasting. . . . It is the scientific spirit with which he tackled his project that is the most notable thing about his book, but the findings of his inquiry are important and, for both reasons, everyone seriously concerned with forecasting, political risk, strategic analysis and public policy debate would do well to read the book.
— Paul Monk
Financial Times
Mr. Tetlock's analysis is about political judgment but equally relevant to economic and commercial assessments.
— John Kay
Business Times
Why do most political experts prove to be wrong most of time? For an answer, you might want to browse through a very fascinating study by Philip Tetlock . . . who in Expert Political Judgment contends that there is no direct correlation between the intelligence and knowledge of the political expert and the quality of his or her forecasts. If you want to know whether this or that pundit is making a correct prediction, don't ask yourself what he or she is thinking—but how he or she is thinking.
— Leon Hadar
The New Yorker
It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself.
— Louis Menand
Fortune
[This] book . . . Marshals powerful evidence to make [its] case. Expert Political Judgment . . . Summarizes the results of a truly amazing research project. . . . The question that screams out from the data is why the world keeps believing that "experts" exist at all.
— Geoffrey Colvin
Toronto Star
Before anyone turns an ear to the panels of pundits, they might do well to obtain a copy of Phillip Tetlock's new book Expert Political Judgment: How Good Is It? How Can We Know? The Berkeley psychiatrist has apparently made a 20-year study of predictions by the sorts who appear as experts on TV and get quoted in newspapers and found that they are no better than the rest of us at prognostication.
— Jim Coyle
The New Yorker
It is the somewhat gratifying lesson of Philip Tetlock's new book . . . that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. . . . It would be nice if there were fewer partisans on television disguised as "analysts" and "experts". . . . But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself.
— Louis Menand
Read More Show Less

Product Details

  • ISBN-13: 9780691128719
  • Publisher: Princeton University Press
  • Publication date: 7/31/2006
  • Pages: 352
  • Sales rank: 264,904
  • Product dimensions: 6.00 (w) x 9.10 (h) x 0.90 (d)

Meet the Author

Philip E. Tetlock is Mitchell Professor of Leadership at the University of California, Berkeley. His books include "Counterfactual Thought Experiments in World Politics" (Princeton).


Read an Excerpt

Expert Political Judgment

How Good is It? How Can We Know?
By Philip E. Tetlock

Princeton University Press

All rights reserved.

ISBN: 0-691-12871-5


Chapter One

Quantifying the Unquantifiable

I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along. -BERTRAND RUSSELL

EVERY DAY, countless experts offer innumerable opinions in a dizzying array of forums. Cynics groan that expert communities seem ready at hand for virtually any issue in the political spotlight-communities from which governments or their critics can mobilize platoons of pundits to make prepackaged cases on a moment's notice.

Although there is nothing odd about experts playing prominent roles in debates, it is odd to keep score, to track expert performance against explicit benchmarks of accuracy and rigor. And that is what I have struggled to do in twenty years of research soliciting and scoring experts' judgments on a wide range of issues. The key term is "struggled." For, if it were easy to set standards for judging judgment that would be honored across the opinion spectrum and not glibly dismissed as another sneaky effort to seize the high ground for a favorite cause, someone would have patented the process long ago.

The current squabble over "intelligence failures" preceding the American invasion of Iraq is the latest illustration of why some esteemed colleagues doubted the feasibility of this project all along and why I felt it essential to push forward anyway. As I write, supporters of the invasion are on the defensive: their boldest predictions of weapons of mass destruction and of minimal resistance have not been borne out.

But are hawks under an obligation-the debating equivalent of Marquis of Queensberry rules-to concede they were wrong? The majority are defiant. Some say they will yet be proved right: weapons will be found-so, be patient-or that Baathists snuck the weapons into Syria-so, broaden the search. Others concede that yes, we overestimated Saddam's arsenal, but we made the right mistake. Given what we knew back then-the fragmentary but ominous indicators of Saddam's intentions-it was prudent to over- rather than underestimate him. Yet others argue that ends justify means: removing Saddam will yield enormous long-term benefits if we just stay the course. The know-it-all doves display a double failure of moral imagination. Looking back, they do not see how terribly things would have turned out in the counterfactual world in which Saddam remained ensconced in power (and France wielded de facto veto power over American security policy). Looking forward, they do not see how wonderfully things will turn out: freedom, peace, and prosperity flourishing in lieu of tyranny, war, and misery.

The belief system defenses deployed in the Iraq debate bear suspicious similarities to those deployed in other controversies sprinkled throughout this book. But documenting defenses, and the fierce conviction behind them, serves a deeper purpose. It highlights why, if we want to stop running into ideological impasses rooted in each side's insistence on scoring its own performance, we need to start thinking more deeply about how we think. We need methods of calibrating expert performance that transcend partisan bickering and check our species' deep-rooted penchant for self-justification.

The next two sections of this chapter wrestle with the complexities of the process of setting standards for judging judgment. The final section previews what we discover when we apply these standards to experts in the field, asking them to predict outcomes around the world and to comment on their own and rivals' successes and failures. These regional forecasting exercises generate winners and losers, but they are not clustered along the lines that partisans of the left or right, or of fashionable academic schools of thought, expected. What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin's prototypical fox-those who "know many little things," draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life-than we are turning to Berlin's hedgehogs-those who "know one big thing," toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems. The net result is a double irony: a perversely inverse relationship between my prime exhibit indicators of good judgment and the qualities the media prizes in pundits-the tenacity required to prevail in ideological combat-and the qualities science prizes in scientists-the tenacity required to reduce superficial complexity to underlying simplicity.

HERE LURK (THE SOCIAL SCIENCE EQUIVALENT OF) DRAGONS

It is a curious thing. Almost all of us think we possess it in healthy measure. Many of us think we are so blessed that we have an obligation to share it. But even the savvy professionals recruited from academia, government, and think tanks to participate in the studies collected here struggle to define it. When pressed for a precise answer, a disconcerting number fell back on Potter Stewart's famous definition of pornography: "I know it when I see it." And, of those participants who ventured beyond the transparently tautological, a goodly number offered definitions that were in deep, even irreconcilable, conflict. However we set up the spectrum of opinion-liberals versus conservatives, realists versus idealists, doomsters versus boomsters-we found little agreement on either who had it or what it was.

The elusive it is good political judgment. And some reviewers warned that, of all the domains I could have chosen-many, like medicine or finance, endowed with incontrovertible criteria for assessing accuracy-I showed suspect scientific judgment in choosing good political judgment. In their view, I could scarcely have chosen a topic more hopelessly subjective and less suitable for scientific analysis. Future professional gatekeepers should do a better job stopping scientific interlopers, such as the author, from wasting everyone's time-perhaps by posting the admonitory sign that medieval mapmakers used to stop explorers from sailing off the edge of the earth: hic sunt dracones.

This "relativist" challenge strikes at the conceptual heart of this project. For, if the challenge in its strongest form is right, all that follows is for naught. Strong relativism stipulates an obligation to judge each worldview within the framework of its own assumptions about the world-an obligation that theorists ground in arguments that stress the inappropriateness of imposing one group's standards of rationality on other groups. Regardless of precise rationale, this doctrine imposes a blanket ban on all efforts to hold advocates of different worldviews accountable to common norms for judging judgment. We are barred from even the most obvious observations: from pointing out that forecasters are better advised to use econometric models than astrological charts or from noting the paucity of evidence for Herr Hitler's "theory" of Aryan supremacy or Comrade Kim II Sung's juche "theory" of economic development.

Exasperation is an understandable response to extreme relativism. Indeed, it was exasperation that, two and a half centuries ago, drove Samuel Johnson to dismiss the metaphysical doctrines of Bishop Berkeley by kicking a stone and declaring, "I refute him thus." In this spirit, we might crankily ask what makes political judgment so special. Why should political observers be insulated from the standards of accuracy and rigor that we demand of professionals in other lines of work?

But we err if we shut out more nuanced forms of relativism. For, in key respects, political judgment is especially problematic. The root of the problem is not just the variety of viewpoints. It is the difficulty that advocates have pinning each other down in debate. When partisans disagree over free trade or arms control or foreign aid, the disagreements hinge on more than easily ascertained claims about trade deficits or missile counts or leaky transfer buckets. The disputes also hinge on hard-to-refute counterfactual claims about what would have happened if we had taken different policy paths and on impossible-to-refute moral claims about the types of people we should aspire to be-all claims that partisans can use to fortify their positions against falsification. Without retreating into full-blown relativism, we need to recognize that political belief systems are at continual risk of evolving into self-perpetuating worldviews, with their own self-serving criteria for judging judgment and keeping score, their own stocks of favorite historical analogies, and their own pantheons of heroes and villains.

We get a clear picture of how murky things can get when we explore the difficulties that even thoughtful observers run into when they try (as they have since Thucydides) to appraise the quality of judgment displayed by leaders at critical junctures in history. This vast case study literature underscores-in scores of ways-how wrong Johnsonian stone-kickers are if they insist that demonstrating defective judgment is a straightforward "I refute him thus" exercise. To make compelling indictments of political judgment-ones that will move more than one's ideological soul mates-case study investigators must show not only that decision makers sized up the situation incorrectly but also that, as a result, they put us on a manifestly suboptimal path relative to what was once possible, and they could have avoided these mistakes if they had performed due diligence in analyzing the available information.

These value-laden "counterfactual" and "decision-process" judgment calls create opportunities for subjectivity to seep into historical assessments of even exhaustively scrutinized cases. Consider four examples of the potential for partisan mischief:

a. How confident can we now be-sixty years later and after all records have been declassified-that Harry Truman was right to drop atomic bombs on Japan in August 1945? This question still polarizes observers, in part, because their answers hinge on guesses about how quickly Japan would have surrendered if its officials had been invited to witness a demonstration blast; in part, because their answers hinge on values-the moral weight we place on American versus Japanese lives and on whether we deem death by nuclear incineration or radiation to be worse than death by other means; and, in part, because their answers hinge on murky "process" judgments-whether Truman shrewdly surmised that he had passed the point of diminishing returns for further deliberation or whether he acted impulsively and should have heard out more points of view.

b. How confident can we now be-forty years later-that the Kennedy administration handled the Cuban missile crisis with consummate skill, striking the perfect blend of firmness to force the withdrawal of Soviet missiles and of reassurance to forestall escalation into war? Our answers hinge not only on our risk tolerance but also on our hunches about whether Kennedy was just lucky to have avoided dramatic escalation (critics on the left argue that he played a perilous game of brinkmanship) or about whether Kennedy bollixed an opportunity to eliminate the Castro regime and destabilize the Soviet empire (critics on the right argue that he gave up more than he should have).

c. How confident can we now be-twenty years later-that Reagan's admirers have gotten it right and the Star Wars initiative was a stroke of genius, an end run around the bureaucracy that destabilized the Soviet empire and hastened the resolution of the cold war? Or that Reagan's detractors have gotten it right and the initiative was the foolish whim of a man already descending into senility, a whim that wasted billions of dollars and that could have triggered a ferocious escalation of the cold war? Our answers hinge on inevitably speculative judgments of how history would have unfolded in the no-Reagan, rerun conditions of history.

d. How confident can we be-in the spring of 2004-that the Bush administration was myopic to the threat posed by Al Qaeda in the summer of 2001, failing to heed classified memos that baldly announced "bin Laden plans to attack the United States"? Or is all this 20/20 hindsight motivated by desire to topple a president? Have we forgotten how vague the warnings were, how vocal the outcry would have been against FBI-CIA coordination, and how stunned Democrats and Republicans alike were by the attack?

Where then does this leave us? Up to a disconcertingly difficult to identify point, the relativists are right: judgments of political judgment can never be rendered politically uncontroversial. Many decades of case study experience should by now have drummed in the lesson that one observer's simpleton will often be another's man of principle; one observer's groupthink, another's well-run meeting.

But the relativist critique should not paralyze us. It would be a massive mistake to "give up," to approach good judgment solely from first-person pronoun perspectives that treat our own intuitions about what constitutes good judgment, and about how well we stack up against those intuitions, as the beginning and end points of inquiry.

This book is predicated on the assumption that, even if we cannot capture all of the subtle counterfactual and moral facets of good judgment, we can advance the cause of holding political observers accountable to independent standards of empirical accuracy and logical rigor. Whatever their allegiances, good judges should pass two types of tests:

1. Correspondence tests rooted in empiricism. How well do their private beliefs map onto the publicly observable world?

2. Coherence and process tests rooted in logic. Are their beliefs internally consistent? And do they update those beliefs in response to evidence?

In plain language, good judges should both "get it right" and "think the right way."
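
To make the two tests concrete: chapter 3 scores correspondence through calibration measures based on probability scoring, and chapter 4 scores belief updating against Bayes's rule. The sketch below illustrates both ideas in Python; the forecast numbers are invented for illustration, and the functions are a simplified stand-in for, not a reproduction of, the book's actual scoring apparatus.

from statistics import mean

# Correspondence test: Brier-style probability scoring.
# Each pair is (forecast probability that the event occurs, outcome as 0 or 1).
# These forecasts are hypothetical.
forecasts = [(0.9, 1), (0.8, 0), (0.3, 0), (0.6, 1), (0.2, 0)]

brier = mean((p - outcome) ** 2 for p, outcome in forecasts)
print(f"Brier score: {brier:.3f}")  # 0.0 is perfect; always guessing 0.5 earns 0.25

# Coherence/process test: did the forecaster revise a prior belief by roughly
# the amount Bayes's rule prescribes after seeing new evidence?
def bayes_posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing the evidence."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Hypothetical reputational bet: the expert put 0.7 on a regime collapsing,
# then saw evidence three times likelier if the regime survives.
prescribed = bayes_posterior(0.7, p_evidence_if_true=0.2, p_evidence_if_false=0.6)
print(f"Prescribed posterior: {prescribed:.2f}")  # about 0.44; standing pat at 0.7 fails the test

A forecaster can pass the first test while failing the second, or the reverse, which is why the book keeps the two families of tests separate.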

This book is also predicated on the assumption that, to succeed in this ambitious undertaking, we cannot afford to be parochial. Our salvation lies in multimethod triangulation-the strategy of pinning down elusive constructs by capitalizing on the complementary strengths of the full range of methods in the social science tool kit. Our confidence in specific claims should rise with the quality of converging evidence we can marshal from diverse sources. And, insofar as we advance many interdependent claims, our confidence in the overall architecture of our argument should be linked to the sturdiness of the interlocking patterns of converging evidence.

Of course, researchers are more proficient with some tools than others. As a research psychologist, my comparative advantage does not lie in doing case studies that presuppose deep knowledge of the challenges confronting key players at particular times and places. It lies in applying the distinctive skills that psychologists collectively bring to this challenging topic: skills honed by a century of experience in translating vague speculation about human judgment into testable propositions. Each chapter of this book exploits concepts from experimental psychology to infuse the abstract goal of assessing good judgment with operational substance, so we can move beyond anecdotes and calibrate the accuracy of observers' predictions, the soundness of the inferences they draw when those predictions are or are not borne out, the evenhandedness with which they evaluate evidence, and the consistency of their answers to queries about what could have been or might yet be.

(Continues...)



Excerpted from Expert Political Judgment by Philip E. Tetlock. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


Table of Contents

Acknowledgments ix
Preface xi
Chapter 1: Quantifying the Unquantifiable 1
Chapter 2: The Ego-deflating Challenge of Radical Skepticism 25
Chapter 3: Knowing the Limits of One's Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs 67
Chapter 4: Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs 121
Chapter 5: Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios 144
Chapter 6: The Hedgehogs Strike Back 164
Chapter 7: Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness? 189
Chapter 8: Exploring the Limits on Objectivity and Accountability 216
Methodological Appendix 239
Technical Appendix: Phillip Rescober and Philip E. Tetlock 273
Index 313
