Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process

by Erica Beecher-Monas

Overview

Scientific evidence is crucial in a burgeoning number of litigated cases, legislative enactments, regulatory decisions, and scholarly arguments. Evaluating Scientific Evidence explores the question of what counts as scientific knowledge, a question that has become a focus of heated courtroom and scholarly debate, not only in the United States but also in other common-law countries such as the United Kingdom, Canada, and Australia. Controversies are rife over the permissible use of genetic information, whether chemical exposure causes disease, whether the future dangerousness of violent or sexual offenders can be predicted, and whether time-honored methods of criminal identification, such as microscopic hair analysis, have any better foundation than ancient divination rituals, among other important topics. This book examines the process of evaluating scientific evidence in both civil and criminal contexts and explains how decisions by nonscientists that embody scientific knowledge can be improved.

Product Details

ISBN-13: 9780521859271
Publisher: Cambridge University Press
Publication date: 11/20/2006
Series: Law in Context
Edition description: First Edition
Pages: 270
Product dimensions: 6.14(w) x 9.17(h) x 0.79(d)

About the Author

Erica Beecher-Monas teaches at Wayne State University Law School. She received her MS from the University of Miami School of Medicine and her JD from the University of Miami School of Law, and earned an LLM and a JSD from Columbia University School of Law. Prior to entering academia, she clerked for the Honorable William M. Hoeveler, United States District Judge for the Southern District of Florida, and was an associate at Fried, Frank, Harris, Shriver, and Jacobson in New York. She writes in the areas of judgment and decision making, with applications to scientific evidence and corporate governance, and has been published in numerous law reviews.

Read an Excerpt

Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process, by Erica Beecher-Monas
Cambridge University Press, ISBN 978-0-521-85927-1

Excerpt

Introduction

Scientific evidence pervades modern legal decisions, whether the decision is made in the courtroom, during the regulatory process, or through legislation. The question of what counts as scientific knowledge has become a focus of heated courtroom and scholarly debate, not only in the United States but also in other common-law countries such as the United Kingdom, Canada, and Australia. Controversies are rife over the permissible use of genetic information, whether chemical exposure causes disease, and whether the future dangerousness of violent or sexual offenders can be predicted, among other important topics. Many time-honored methods of criminal identification, such as hair analysis, voice spectrography, and bitemark identification, to name a few, have turned out to have no better foundation than ancient divination rituals. This book examines the process of evaluating scientific evidence in both civil and criminal contexts and explains how decisions by nonscientists that embody scientific knowledge can be improved. This is a timely and important subject for anyone interested in the impact of law and science on society.

   Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process emphasizes the unifying themes of probabilistic reasoning, hypothesis testing, and interdisciplinarity, and it is intended to provide the guidance that judges and the lawyers advising them need to make scientifically legitimate admissibility determinations. Moreover, scholars who turn to interdisciplinary arguments are confronted with an urgent need for a framework to evaluate scientific argument.

   Evaluating Scientific Evidence is intended to provide this guidance to scholars, judges, lawyers, and students of law. The heuristic it proposes consists of five parts and emphasizes underlying principles common to all fields of science. To meet the requirements of intellectual due process, anyone evaluating scientific information must be able to do five things: (1) identify and examine the proffered theory and hypothesis for their power to explain the data; (2) examine the data that supports (and undermines) the proffered theory; (3) employ supportable assumptions to fill the inevitable gaps between data and theory; (4) examine the methodology; and (5) engage in probabilistic assessment of the link between the data and the hypothesis. To demonstrate how using this heuristic would improve the evaluation of the scientific evidence at issue, my book uses real examples of recorded courtroom battles and scholarly debates as to what counts as valid science.

   In the United States, both the category and the content of scientific knowledge are controversial. During little more than the past decade, in a trio of landmark cases beginning with Daubert v. Merrell Dow Pharmaceuticals, Inc., the U.S. Supreme Court placed the burden of an early evaluation of the validity of scientific testimony – called “gatekeeping” – squarely on federal trial judges. An amendment to the Federal Rules of Evidence followed in short order and – although many states continue to apply a rule under which only a scientific consensus is required for admissibility and others use some combination of these approaches – scientific validity has become and remains an important question.

   In Great Britain, the Sally Clark case has similarly ignited controversy over the use of experts testifying about science, a debate that has prompted the reopening of hundreds of cases.
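
   The statistical error at the heart of that case, a matter of public record rather than something recounted in this excerpt, illustrates the kind of probabilistic reasoning the book addresses. An expert witness put the chance of two unexplained infant deaths in one family at roughly 1 in 73 million by squaring an estimated 1 in 8,543 chance of a single such death:

\[ P(D_1 \cap D_2) = P(D_1)\,P(D_2) \approx \left(\tfrac{1}{8543}\right)^2 \approx \tfrac{1}{73{,}000{,}000}, \]

a calculation that is valid only if the two deaths are statistically independent. Shared genetic and environmental risk factors make that assumption untenable; the correct form is P(D_1 ∩ D_2) = P(D_1) P(D_2 | D_1), where the conditional probability P(D_2 | D_1) may be far larger than P(D_2). The figure was then compounded by the prosecutor’s fallacy: treating the small probability of the evidence given innocence, P(E | innocent), as if it were the probability of innocence given the evidence, P(innocent | E), two quantities that Bayes’ theorem shows can differ by orders of magnitude.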

   Notwithstanding its significant alteration to the legal landscape, the U.S. Supreme Court has done little to guide judges in the necessary assessment. Evaluating Scientific Evidence argues that because most scientific studies and the conclusions culled from them are imperfect, the assessment process needs to include more than a knowledge of optimal experimental design. No study is perfect, no matter how well designed. What judges and lawyers – and anyone attempting to understand the validity of scientific information – need to know is not how to design the best scientific study but how to assess an imperfect one. Assessing imperfect studies – that is, the scientific validity of conclusions drawn from imperfect knowledge – is precisely the goal of the heuristic provided in this book.

   Although a substantial literature about scientific evidence has appeared in the past decade, most of these treatises have discussed discrete areas of scientific evidence and their attendant problems in litigation. They have not offered a discussion of unifying principles that can make sense of areas beyond the topics they specifically address. In its novel approach, Evaluating Scientific Evidence takes a more philosophical bent that is intended for a broader audience and that addresses the underlying principles of scientific argument in a unifying manner.

   Throughout, Evaluating Scientific Evidence draws on the rationalist tradition in evidence scholarship and its main epistemological assumptions. A tradition of aspirational rationality in the legal system is the inspiration for this book. Because concerns about evidence and inference are not limited to law, and because issues of logic, probability, and knowledge are common to many disciplines, any study of scientific evidence inevitably becomes a multidisciplinary subject. In this book, common themes of logic, probability, and knowledge are emphasized and fine-tuned to scientific information used in legal decision making.

   The premise of Evaluating Scientific Evidence is that critiquing scientific information in both civil and criminal systems is within the capability of judges, lawyers, and scholars, armed with the framework for analysis that this book provides. In presenting an updated philosophy of science, examining the relationship between facts and values, and exploring the question of how nonscientists properly can use scientific information to make sound and persuasive arguments and decisions, it takes a global perspective on how courts evaluate scientific evidence and builds on the comparative enterprise to address normative structures for the valid use of scientific information within the framework of the rule of law.

   The heuristic advanced and applied in Evaluating Scientific Evidence draws from the U.S. Supreme Court’s guidelines; the Federal Judicial Center’s Reference Manual on Scientific Evidence; guidelines proposed by the U.S. Environmental Protection Agency to assess scientific validity; the philosophy of science; and my own experience as a research scientist. The intention of this book is to offer insights into the scientific process that will produce legal judgments and decisions that are intellectually defensible and fairer to the litigants. In doing so, it aims to put the interdisciplinary use of scientific information on a solid and reliable foundation. Its contention is that understanding the process of science and the nature of probabilistic reasoning will enable the proper use of science in the courts. This work carries on the tradition of Law in Context and complements other publications in the series (including Twining’s Rethinking Evidence and Anderson and Twining’s Analysis of Evidence, as well as Eggleston’s Evidence, Proof and Probability, now out of print).

   Those in our legal systems with responsibility for judgments and decisions are far from the only outsiders who must evaluate scientific evidence. Scientists working outside of a given field routinely critique each other’s work. By taking information gleaned from one discipline and applying it to another, scientists generate the new insights that make developments in science possible. Scientists can do this, even without intimate knowledge of the type of research being discussed, because underlying all scientific disciplines are common understandings about probabilistic and analogy-based reasoning. Even nonscientists can learn this kind of reasoning. By emphasizing the unifying themes of probabilistic reasoning, hypothesis testing, and interdisciplinarity, this book will guide legal participants to formulate scientifically adequate legal arguments and will illustrate the process through critique of a number of areas in which scientific information is invoked in legal argument. Empowering judges and lawyers to reliably evaluate the science confronting them can only enhance the credibility of the judicial process, the soundness of scholarly debate, and – in the end – the proper functioning of law.





1

Triers of science

Scientific evidence is an inescapable facet of modern litigation. It is fundamental to criminal justice and to civil litigation. What counts as science, however, who gets to make this decision, and how they should go about it are all hotly contested. Nor is this contest limited to the United States. The issue of scientific reliability is a hot topic in England and other Commonwealth jurisdictions, as well as in continental European systems.

   In the United States, legislatures, federal courts, and many state courts have placed the responsibility for evaluating the validity of scientific testimony squarely on judges.1 Other states continue to use a general-consensus standard for scientific validity, in which it is the scientific community that makes that decision.2 In those jurisdictions where judges must evaluate scientific validity, the result is that judges – traditionally triers of law, occasionally pressed into service as triers of fact – now must also be triers of science in cases where experts proffer scientific evidence.

   Predictably, not everyone is pleased with this new state of affairs, and many question judicial competence in this area. Years after Daubert v. Merrell Dow Pharmaceuticals, Inc.3 and the subsequent amendments to the Federal Rules of Evidence made federal judges responsible for assessing scientific validity, judges and lawyers are still grappling with the fact that they can no longer merely count scientific noses4 but must instead analyze whether expert testimony meets the criteria of good science. Many judges, however, are stymied by the science component of their gatekeeping duties, focusing instead on rules of convenience that have little scientific justification. As a result, judges make unwarranted decisions at both ends of the spectrum: by rejecting even scientifically uncontroversial evidence that would have little trouble finding admissibility under a general-consensus standard and by admitting evidence that is scientifically baseless. But judges need not be unarmed for these decisions.

   The U.S. Supreme Court gave judges some rudimentary guidelines in Daubert and its progeny, outlining the notions of scientific validity and fit. In addition, the Federal Judicial Center publishes a reference manual (periodically updated) for evaluating scientific evidence, outlining basic theory and optimal practices in a given field.5 Courses have sprung up to help familiarize judges with scientific issues, and the trial court may appoint its own experts for advice.6 Federal regulatory agencies like the U.S. Environmental Protection Agency (EPA) also have useful guidelines. These are particularly salient because, like judges, most agency decision makers are not trained scientists, yet they must make creditable scientific validity assessments. Despite these attempts at guidance, however, no coherent conceptual framework has emerged to guide the legal treatment of scientific knowledge. This book seeks to provide that framework.

   Throughout this book, I argue that judges are capable of providing intellectual due process to litigants on issues of scientific evidence but that an integral part of that process is the requirement that judges explicitly give the basis for their decision in the form of written opinions, educate themselves about the kinds of evidence before them, and make default assumptions that are justifiable on scientific and policy grounds. The underlying principles of reasoning are not different in law and science, although context and culture determine their application. When discussing rationality, I include inductive, deductive, and abductive reasoning because all three forms are important tools in analysis. In short, for a deductive argument to be valid, the truth of the premises must guarantee the truth of the conclusion; the paradigmatic form of the deductive argument is the Aristotelian syllogism.7 By inductive reasoning, I mean both inductive generalization, involving probabilistic generalization from the particular, and inductive analogy, in which one concludes that some particular instance will have the aggregate characteristics given in the premises.8 Exemplary reasoning is sometimes referred to as abduction.9 The theory of abduction was introduced by Charles Sanders Peirce to explain how scientists select a relatively small number of hypotheses to test from a large number of logically possible explanations for their observations.10
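
   Peirce’s own bean-bag illustration offers a compact schematic contrast of the three forms. Writing B(x) for “x is from this bag” and W(x) for “x is white,” the definitions above can be rendered (as an illustrative gloss, not a formalization taken from the text) as:

\[
\begin{aligned}
\text{Deduction:} &\quad \forall x\,(B(x) \to W(x)),\ B(a) \ \vdash\ W(a) \\
\text{Induction:} &\quad B(a_1) \wedge W(a_1),\ \ldots,\ B(a_n) \wedge W(a_n) \ \leadsto\ \forall x\,(B(x) \to W(x)) \\
\text{Abduction:} &\quad \forall x\,(B(x) \to W(x)),\ W(a) \ \leadsto\ B(a) \ \text{(as a hypothesis worth testing)}
\end{aligned}
\]

Only the deductive schema guarantees its conclusion; induction and abduction yield conclusions that are at best probable or provisional, which is why probabilistic assessment runs throughout the heuristic.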

   To aid nonscientists in this complex reasoning process, I set out a framework for analysis of scientific argument in Chapter 3. The heuristic proposed in Chapter 3 consists of five basic parts and emphasizes the underlying principles common to all fields of science. To meet the requirements of such intellectual due process, I suggest that judges (and the lawyers and scholars who educate them about their cases) must be able to do five things: (1) identify and examine the proffered theory and hypothesis for their power to explain the data; (2) examine the data that supports (and undermines) the expert’s theory; (3) use supportable assumptions to fill the inevitable gaps between data and theory; (4) examine the methodology; and (5) engage in probabilistic assessment of the link between the data and the hypothesis.

What’s wrong with counting scientific noses?

When scientific evidence surfaces as the focus of a courtroom dispute, it neither should – nor can – be left to the scientists to decide. Determining legal admissibility based on the scientific community’s assessment of validity is troubling in both theory and practice. The rule of law is often described as a search for truth in a system that aspires to rationality.11 Although the meanings of truth and rationality are subject to debate in an open society, a structured reasoning process relating sensory input to theoretical explanation is fundamental. This requires accurate information and justifiable inferences. It is the necessity of a structured reasoning process that argues for a gatekeeper to assess the scientific validity of expert testimony. The object of demystifying scientific argument and making it more accessible to lawyers and judges is not to transform lawyers and judges into amateur scientists12 but to help them resolve a legal policy issue: whether, given the state of knowledge about a particular scientific hypothesis proffered by experts, that hypothesis is useful in resolving a legal dispute. The purpose of the admissibility inquiry is not to decide whose expert is correct but whether the expert can provide information to help the factfinder resolve an issue in a legal case. This is a decision that is quintessentially legal. In sum, the reason we need gatekeepers is to ensure that the statements offered into evidence comport with permissible legal theories, embedded as they are in cultural systems of belief, assumptions, and claims about the world. Although what we seek to know are the facts, facts are inevitably theory-laden.13 Therefore, in an adversary system, it is the judge whose role it is to manage coherence by reference to what is relevant to the legal determination.

   Nor is this an impossible task to place on the judge. Requiring judges to act as evidentiary gatekeepers – analyzing proffered testimony for the soundness of its underlying theory, technique, and application, and analyzing that testimony in light of the issues posed by the case – does not seem like an insurmountable judicial task. After all, judges are supposed to direct legal proceedings based on logical analysis and considered judgment. Moreover, judges are far from the only outsiders who must evaluate scientific evidence. Scientists who work outside of a given field critique each other’s work all the time – that is how science advances: by taking information gleaned from one discipline and applying it to another. This is possible, even without intimate knowledge of the type of research being discussed, because underlying all scientific disciplines are common understandings about probabilistic and analogy-based reasoning.

   The framework provided here is based on unifying themes common to scientific thinking of all stripes. Understanding the language and structure of scientific argument and the way “science” is produced provides an invaluable tool in deciphering the logic behind scientific testimony. The framework proposed here is intended to resolve some major issues on which the courts are still foundering. Even nonscientists can learn this kind of reasoning. By emphasizing the unifying themes of probabilistic reasoning, hypothesis testing, and interdisciplinarity, this book seeks to provide the guidance legal participants need for scientifically adequate legal arguments. It illustrates the process through a critique of a number of areas in which scientific information is invoked in legal argument.

   Not only is counting scientific noses bad for theoretical reasons, it does not work well in practice either. The general-consensus standard devolved into a meaningless exercise because it was nearly always possible to define the expert’s field so narrowly that consensus by a cohort of the expert’s peers was virtually guaranteed. Thus, the general-consensus standard often resulted in a cursory inquiry into the expert’s credentials without any screening of the substance of the testimony. In this way, voiceprints, bitemark and handwriting analysis, and a whole cornucopia of questionable exercises masquerading as science crept into litigation.

Admissibility of expert testimony pre-Daubert

Daubert14 emerged against the backdrop of immense public controversy about the perceived flood of “junk science” that, according to some popular critics, threatened to inundate the courts.15 For years, Frye v. United States16 was the predominant standard for the admissibility of scientific evidence. Frye was a murder case involving expert testimony based on an early version of the polygraph technique, which the court found inadmissible because polygraph testing had not achieved general acceptance in the relevant scientific community. The Frye test asked whether the proffered expert polygraph evidence – including the conclusions reached – was generally accepted in a relevant community of experts. Frye thus offered a standard of admissibility based on the general acceptance of the proposed testimony by a relevant community of experts and permitted peer review and publication to substitute for any attempt at analysis by the court.

   Although a majority of courts in the United States applied the general acceptance standard, its results were anything but uniform. Some courts applying the general acceptance test did little more than “count noses,” while others performed in-depth analyses. Thus, the apparently straightforward standard provoked a number of controversies. For one thing, it frequently was unclear which facets of the testimony or underlying rationale had to be generally accepted. For another, the Frye standard failed to account for the phenomenon that much knowledge slips into general acceptance without any careful examination, especially where that knowledge has been accepted for a long time. Most controversial of all, however, was the Frye test’s substitution of peer consensus and publication for any detailed analysis by the court. In effect, this permitted nonjudicial actors to make what is essentially a judicial policy decision and deflected responsibility away from the judge.

   Consequently, at a time when scientific evidence was becoming increasingly important in resolving legal disputes, the standards for its courtroom use were anything but certain. Not surprisingly, criticism of the legal system’s ability to cope with scientific evidence mounted. Among the various solutions proposed were separate science courts, special administrative tribunals, and an interdisciplinary council established to advise the courts. It was against this background that the U.S. Supreme Court granted certiorari in Daubert.

The Daubert analysis

Daubert v. Merrell Dow Pharmaceuticals, Inc. was a civil case involving claims that Bendectin, a morning-sickness remedy that the plaintiffs’ mothers had taken during pregnancy, had caused the plaintiffs’ limb-reduction birth defects. The evidence at issue consisted of epidemiological reanalyses, in which data obtained in previously published studies was reanalyzed and proffered to support the plaintiffs’ claims. The trial court found the plaintiffs’ proffer insufficient to withstand the defendants’ motion for summary judgment because it did not meet with general acceptance in the field to which it belongs. The Ninth Circuit affirmed, holding an expert opinion inadmissible absent general acceptance of the underlying technique, and the U.S. Supreme Court granted certiorari to resolve “the proper standard for the admission of expert testimony.”

   The U.S. Supreme Court dispatched the general acceptance test in a few paragraphs, finding it an “austere standard” that was superseded by adoption of the Federal Rules of Evidence in 1975. The Court explained that the two-pronged test of Rule 702 requires judges to assume a gatekeeping role by inquiring into the reliability of the evidence and its helpfulness to the jury. This requires the trial judge to conduct an independent inquiry into the scientific validity, reliability, and relevance of the proposed testimony. To guide this scrutiny, the Court outlined four nondefinitive factors: the trial judge should consider whether the theory can be and has been tested, its error rate, whether it has been subjected to peer review and publication, and whether it has met with general acceptance in the scientific community.
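
   The error-rate factor is easily misread, and a short worked example (with hypothetical figures chosen purely for illustration) shows why. A technique’s false positive rate, P(match | not source), is not the probability that a declared match is wrong; the two are linked through Bayes’ theorem and can diverge sharply when the prior probability is small:

\[ P(\text{source} \mid \text{match}) = \frac{P(\text{match} \mid \text{source})\,P(\text{source})}{P(\text{match} \mid \text{source})\,P(\text{source}) + P(\text{match} \mid \neg\text{source})\,P(\neg\text{source})}. \]

If a technique never misses a true match, has a 1 percent false positive rate, and the prior probability that the defendant is the source is 1 in 1,000, then P(source | match) ≈ 0.001 / (0.001 + 0.01 × 0.999) ≈ 0.09. A “match” from a nominally 99-percent-accurate technique thus leaves a roughly 91 percent chance that the defendant is not the source, which is why an error rate standing alone tells a gatekeeper very little.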

   Many judges question judicial abilities to assess scientific validity. Chief Justice Rehnquist, for example, in his Daubert dissent, felt the majority was requiring district judges to become “amateur scientists.” On remand, the Ninth Circuit’s Judge Kozinski was openly sarcastic about the feasibility of the effort.17 Notwithstanding such skepticism, however, many judges have risen amply to the occasion. A fair number of them were engaging in a validity analysis long before Daubert required it,18 thus demonstrating that judges can indeed learn to think like scientists, or at least become adept at recognizing faulty logic when they hear it.

   In the years since Daubert, the results admittedly have been uneven. Judges comfortable with analyzing scientific validity before Daubert continue to do so; those who are too discomforted by the new analysis are finding ways to circumvent it. Some of the avoidance techniques include the erection of barriers by insisting the evidence meet requirements that have little to do with its inherent logic. For example, in the Daubert remand, Judge Kozinski added an unwarranted and illogical new admissibility factor that he found to trump those listed by the U.S. Supreme Court: whether the research was conducted independent of the litigation. Even Judge Kozinski recognized the problematic nature of his new factor for criminal evidence, where most of the research involved is generated only for litigation. There is virtually no other “market” than litigation for identification tests.

   Although the ostensible difference between Frye and Daubert is that the gatekeeping responsibility has been explicitly shifted to the judge from the scientific community, it is unclear whether the practical consequence of this difference will mean that cases will be decided differently using the Daubert analysis than they were under Frye. Some skeptics contend that Frye and Daubert essentially are indeterminate and cannot account for the results in particular cases. Other critics contend that Daubert has changed little about the way courts handle scientific evidence other than changing the label. At least one major survey of judicial competence to evaluate scientific evidence has been conducted, which concluded that judges lack the scientific literacy necessary to do so.19 However, such criticism fails to consider the changes that criminal laboratories are already beginning to undertake as a result of the increased scrutiny of laboratory protocols and techniques. In addition, the scientific validity of previously accepted identification techniques is now being challenged with some success. Absent the heightened scrutiny required under Daubert, it is doubtful that such changes would have emerged.





© Cambridge University Press

Table of Contents

Introduction; 1. Triers of science; 2. Intellectual due process; 3. A framework of analysis; 4. Toxic torts and the causation conundrum; 5. Criminal identification evidence; 6. Future dangerousness testimony: the epistemology of prediction; 7. Barefoot or Daubert? A cognitive perspective on vetting future dangerousness testimony; 8. Future dangerousness and sexual offenders; 9. Models of rationality: evaluating social psychology; 10. Evaluating battered woman syndrome; Conclusion.