ISBN-10:
0471955329
ISBN-13:
9780471955320
Pub. Date:
01/28/1995
Publisher:
Wiley
Statistics and the Evaluation of Evidence for Forensic Scientists / Edition 1

by Colin Aitken

Hardcover

Original price: $141.95 (Older Edition)

Product Details

ISBN-13: 9780471955320
Publisher: Wiley
Publication date: 01/28/1995
Series: Statistics in Practice Series, #3
Edition description: Older Edition
Pages: 276
Product dimensions: 6.14(w) x 9.19(h) x 0.88(d)

Read an Excerpt

Statistics and the Evaluation of Evidence for Forensic Scientists


By C.G.G. Aitken and Franco Taroni

John Wiley & Sons

Copyright © 2004 John Wiley & Sons, Ltd
All rights reserved.

ISBN: 0-470-84367-5


Chapter One

Uncertainty in Forensic Science

1.1 INTRODUCTION

The purpose of this book is to discuss the statistical and probabilistic evaluation of scientific evidence for forensic scientists. For the most part the evidence to be evaluated will be so-called transfer or trace evidence.

There is a well-known principle in forensic science known as Locard's principle which states that every contact leaves a trace:

tantôt le malfaiteur a laissé sur les lieux les marques de son passage, tantôt, par une action inverse, il a emporté sur son corps ou sur ses vêtements, les indices de son séjour ou de son geste. (Locard, 1920)

Inman and Rudin (2001) translate this as follows:

either the wrong-doer has left signs at the scene of the crime, or, on the other hand, has taken away with him - on his person (body) or clothes - indications of where he has been or what he has done.

The principle was reiterated using different words in 1929:

Les débris microscopiques qui recouvrent nos habits et notre corps sont les témoins muets, assurés et fidèles de chacun de nos gestes et de chacune de nos rencontres. (Locard, 1929)

This may be translated as follows:

Traces which are present on our clothes or our person are silent, sure and faithful witnesses of every action we do and of every meeting we have.

Transfer evidence and Locard's principle may be illustrated as follows. Suppose a person gains entry to a house by breaking a window and assaults the man of the house, during which assault blood is spilt by both victim and assailant. The criminal may leave traces of his presence at the crime scene in the form of bloodstains from the assault and fibres from his clothing. This evidence is said to be transferred from the criminal to the scene of the crime. The criminal may also take traces of the crime scene away with him. These could include bloodstains from the assault victim, fibres of their clothes and fragments of glass from the broken window. Such evidence is said to be transferred to the criminal from the crime scene. A suspect is soon identified, at a time at which he will not have had the opportunity to change his clothing. The forensic scientists examining the suspect's clothing find similarities amongst all the different types of evidence: blood, fibres and glass fragments. They wish to evaluate the strength of this evidence. It is hoped that this book will enable them so to do.

Quantitative issues relating to the distribution of characteristics of interest will be discussed. However, there will also be discussion of qualitative issues such as the choice of a suitable population against which variability in the measurements of the characteristics of interest may be compared. Also, a brief history of statistical aspects of the evaluation of evidence is given in Chapter 4.

1.2 STATISTICS AND THE LAW

The book does not focus on the use of statistics and probabilistic thinking for legal decision making, other than by occasional reference. Also, neither the role of statistical experts as expert witnesses presenting statistical assessments of data nor their role as consultants preparing analyses for counsel is discussed. There is a distinction between these two issues (Fienberg, 1989; Tribe, 1971). The main focus of this book is on the assessment of evidence for forensic scientists, in particular for identification purposes. The process of addressing the issue of whether or not a particular item came from a particular source is most properly termed individualisation. 'Criminalistics is the science of individualisation' (Kirk, 1963), but established forensic and judicial practices have led to it being termed identification. The latter terminology will be used throughout this book. An identification, however, is more correctly defined as 'the determination of some set to which an object belongs or the determination as to whether an object belongs to a given set' (Kingston, 1965a). Further discussion is given in Kwan (1977) and Evett et al. (1998a).

For example, in a case involving a broken window, similarities may be found between the refractive indices of fragments of glass found on the clothing of a suspect and the refractive indices of fragments of glass from the broken window. The assessment of this evidence, in associating the suspect with the scene of the crime, is part of the focus of this book (and is discussed in particular in Section 10.4.2).

For those interested in the issues of statistics and the law beyond those of forensic science, in the sense used in this book, there are several books available and some of these are discussed briefly.

'The Evolving Role of Statistical Assessments as Evidence in the Courts' is the title of a report, edited by Fienberg (1989), by the Panel on Statistical Assessments as Evidence in the Courts formed by the Committee on National Statistics and the Committee on Research on Law Enforcement and the Administration of Justice of the USA, and funded by the National Science Foundation. Through the use of case studies the report reviews the use of statistics in selected areas of litigation, such as employment discrimination, antitrust litigation and environmental law. One case study is concerned with identification in a criminal case. Such a matter is the concern of this book and the ideas relevant to this case study, which involves the evidential worth of similarities amongst human head hair samples, will be discussed in greater detail later (Sections 4.5.2 and 4.5.5). The report makes various recommendations, relating to the role of the expert witness, pre-trial discovery, the provision of statistical resources, the role of court-appointed experts, the enhancement of the capability of the fact-finder and statistical education for lawyers. Two books which take the form of textbooks on statistics for lawyers are Vito and Latessa (1989) and Finkelstein and Levin (2001). The former focuses on the presentation of statistical concepts commonly used in criminal justice research. It provides criminological examples to demonstrate the calculation of basic statistics. The latter introduces rather more advanced statistical techniques and again uses case studies to illustrate such techniques.

The area of discrimination litigation is covered by a set of papers edited by Kaye and Aickin (1986). This starts by outlining the legal doctrines that underlie discrimination litigation. In particular, there is a fundamental issue relating to discrimination in hiring. The definition of the relevant market from which an employer hires has to be made very clear. For example, consider the case of a man who applies, but is rejected for, a secretarial position. Is the relevant population the general population, the representation of men amongst secretaries in the local labour force or the percentage of male applicants? The choice of a suitable reference population is also one with which the forensic scientist has to be concerned. This is discussed at several points in this book.

Another textbook, which comes in two volumes, is Gastwirth (1988a, b). The book is concerned with civil cases and 'is designed to introduce statistical concepts and their proper use to lawyers and interested policymakers' (1988a, p. xvii). Two areas are stressed which are usually given less emphasis in most statistical textbooks. The first area is concerned with measures of relative or comparative inequality. These are important because many legal cases are concerned with issues of fairness or equal treatment. The second area is concerned with the combination of results of several related statistical studies. This is important because existing administrative records or currently available studies often have to be used to make legal decisions and public policy; it is not possible to undertake further research. Gastwirth (2000) has also edited a collection of essays on statistical science in the courtroom, some of which are directly relevant to this current book and will be referred to as appropriate.

A collection of papers on Statistics and Public Policy has been edited by Fairley and Mosteller (1977). One issue in the book which relates to a particularly infamous case, the Collins case, is discussed in detail later (Section 4.4). Other articles concern policy issues and decision making.

The remit of this book, which is not covered by these others in great detail, is to describe statistical procedures for the evaluation of evidence for forensic scientists; this will be done primarily through a modern Bayesian approach. The use of statistics in forensic science in general is discussed in a collection of essays edited by Aitken and Stoney (1991). The Bayesian approach has its origins in the work of I.J. Good and A.M. Turing as code-breakers at Bletchley Park during World War II. A brief review of the history is given in Good (1991). An essay on the topic of probability and the weighing of evidence is Good (1950). This also refers to entropy (Shannon, 1948), the expected amount of information from an experiment, and Good remarks that the expected weight of evidence in favour of a hypothesis H and against its complement $\bar{H}$ (read as 'H-bar') is equal to the difference of the entropies assuming H and $\bar{H}$, respectively. A brief discussion of a frequentist approach and the problems associated with it is given in Section 4.6.
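
Good's remark can be made concrete. The weight of evidence for an outcome x is the log likelihood ratio log[P(x|H)/P(x|$\bar{H}$)], and its expectation under H is the Kullback-Leibler divergence between the two distributions, which can be written as a cross-entropy minus an entropy. The following minimal sketch illustrates this; the two discrete distributions are invented purely for illustration and do not come from the book.

```python
import math

# Two hypothetical discrete distributions for an item of evidence E:
# p_H[x] = P(E = x | H), p_Hbar[x] = P(E = x | H-bar).
# The numbers are invented purely for illustration.
p_H    = {"match": 0.70, "partial": 0.25, "no_match": 0.05}
p_Hbar = {"match": 0.01, "partial": 0.19, "no_match": 0.80}

def weight_of_evidence(x):
    """Good's weight of evidence for outcome x: the log likelihood ratio."""
    return math.log(p_H[x] / p_Hbar[x])

# Expected weight of evidence in favour of H, taken under H. This is the
# Kullback-Leibler divergence D(p_H || p_Hbar): the cross-entropy of p_H
# relative to p_Hbar minus the entropy of p_H, i.e. a difference of
# entropy terms, as in Good's remark.
expected_woe = sum(p * weight_of_evidence(x) for x, p in p_H.items())

print(f"W(match) = {weight_of_evidence('match'):.3f}")  # about 4.25 nats
print(f"E[W | H] = {expected_woe:.3f}")                 # about 2.90 nats
```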

It is of interest to note that a high proportion of situations involving the formal objective presentation of statistical evidence uses the frequentist approach with tests of significance (Fienberg and Schervish, 1986). However, Fienberg and Schervish go on to say that the majority of examples cited for the use of the Bayesian approach are in the area of identification evidence. It is this area which is the main focus of this book and it is Bayesian analyses which will form the basis for the evaluation of evidence as discussed here. Examples of the applications of such analyses to legal matters include Cullison (1969), Fairley (1973), Finkelstein and Fairley (1970, 1971), Lempert (1977), Lindley (1977a, b), Fienberg and Kadane (1983) and Anderson and Twining (1998).

Another approach which will not be discussed here is that of Shafer (1976, 1982). This concerns so-called belief functions (see Section 4.1). The theory of belief functions is a very sophisticated theory for assessing uncertainty which endeavours to answer criticisms of both the frequentist and Bayesian approaches to inference. Belief functions are non-additive in the sense that belief in an event A (denoted Bel(A)) and belief in the opposite of A (denoted Bel($\bar{A}$)) do not sum to 1. See also Shafer (1978) for a historical discussion of non-additivity. Further discussion is beyond the scope of this book. Practical applications are few. One such, however, is to the evaluation of evidence concerning the refractive index of glass (Shafer, 1982).
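
A small numerical sketch may help to illustrate the non-additivity. The mass values below are invented for illustration, and this is only a toy Dempster-Shafer belief function, not Shafer's glass application.

```python
# A toy Dempster-Shafer mass function over the frame {A, notA}. Mass left
# on the whole frame represents belief committed to neither outcome.
# All numbers are invented for illustration.
mass = {
    frozenset({"A"}): 0.6,          # evidence pointing towards A
    frozenset({"notA"}): 0.1,       # evidence pointing against A
    frozenset({"A", "notA"}): 0.3,  # uncommitted mass
}

def bel(event):
    """Belief in an event: the total mass of all subsets it contains."""
    return sum(m for s, m in mass.items() if s <= event)

bel_A, bel_notA = bel(frozenset({"A"})), bel(frozenset({"notA"}))
print(bel_A, bel_notA, bel_A + bel_notA)  # 0.6 0.1 0.7 -- less than 1
```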

It is very tempting when assessing evidence to try to determine a value for the probability of the so-called probandum of interest (or the ultimate issue) such as the guilt of a suspect, or a value for the odds in favour of guilt, and perhaps even to reach a decision regarding the suspect's guilt. However, this is the role of the jury and/or judge. It is not the role of the forensic scientist or statistical expert witness to give an opinion on this (Evett, 1983). It is permissible for the scientist to say that the evidence is 1000 times more likely, say, if the suspect is guilty than if he is innocent. It is not permissible to interpret this to say that, because of the evidence, it is 1000 times more likely that the suspect is guilty than innocent. Some of the difficulties associated with assessments of probabilities are discussed by Tversky and Kahneman (1974) and by Kahneman et al. (1982) and are further described in Section 3.3. An appropriate representation of probabilities is useful because it fits the analytic device most used by lawyers, namely the creation of a story. This is a narration of events 'abstracted from the evidence and arranged in a sequence to persuade the fact-finder that the story told is the most plausible account of "what really happened" that can be constructed from the evidence that has been or will be presented' (Anderson and Twining, 1998, p. 166). Also of relevance is Kadane and Schum (1996), which provides a Bayesian analysis of evidence in the Sacco-Vanzetti case (Sacco, 1969) based on subjectively determined probabilities and assumed relationships amongst evidential events. A similar approach is presented in Chapter 14.
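
The distinction drawn above can be made explicit with the odds form of Bayes' theorem: posterior odds equal the likelihood ratio multiplied by the prior odds. The sketch below reuses the factor of 1000 from the example in the text; the prior odds are invented for illustration.

```python
# Odds form of Bayes' theorem: posterior odds = likelihood ratio x prior odds.
# The likelihood ratio of 1000 matches the example in the text; the prior
# odds are a hypothetical value (one suspect among 10,000 possible sources).
likelihood_ratio = 1000.0
prior_odds = 1.0 / 9999.0

posterior_odds = likelihood_ratio * prior_odds
posterior_prob = posterior_odds / (1.0 + posterior_odds)

print(f"posterior odds of guilt: {posterior_odds:.3f}")  # about 0.1, not 1000
print(f"posterior probability:   {posterior_prob:.3f}")  # about 0.09
```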

1.3 UNCERTAINTY IN SCIENTIFIC EVIDENCE

Scientific evidence requires considerable care in its interpretation. Emphasis needs to be put on the importance of asking the question 'what do the results mean in this particular case?' (Jackson, 2000). Scientists and jurists have to abandon the idea of absolute certainty in order to approach the identification process in a fully objective manner. If it can be accepted that nothing is absolutely certain then it becomes logical to determine the degree of confidence that may be assigned to a particular belief (Kirk and Kingston, 1964).

There are various kinds of problems concerned with the random variation naturally associated with scientific observations. There are problems concerned with the definition of a suitable reference population against which concepts of rarity or commonality may be assessed. There are problems concerned with the choice of a measure of the value of the evidence.

The effect of the random variation can be assessed with the appropriate use of probabilistic and statistical ideas. There is variability associated with scientific observations. Variability is a phenomenon which occurs in many places. People are of different sexes, determination of which is made at conception. People are of different heights, weights and intellectual abilities, for example. The variation in height and weight is dependent on a person's sex. In general, females tend to be lighter and shorter than males. However, variation is such that there can be tall, heavy females and short, light males. At birth, it is uncertain how tall or how heavy the baby will be as an adult. However, at birth, it is known whether the baby is a boy or a girl. This knowledge affects the uncertainty associated with the predictions of adult height and weight.
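
The way in which knowledge of sex reduces the uncertainty of a prediction of adult height can be shown with a small simulation. All distributional parameters below are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Simulate adult heights (cm) with invented, illustrative parameters.
heights, by_sex = [], {"F": [], "M": []}
for _ in range(10_000):
    sex = random.choice(["F", "M"])
    h = random.gauss(162, 6) if sex == "F" else random.gauss(176, 7)
    heights.append(h)
    by_sex[sex].append(h)

# Conditioning on sex reduces the spread of the height distribution.
print(f"sd(height)          = {statistics.stdev(heights):.1f} cm")      # ~9.6
print(f"sd(height | female) = {statistics.stdev(by_sex['F']):.1f} cm")  # ~6
print(f"sd(height | male)   = {statistics.stdev(by_sex['M']):.1f} cm")  # ~7
```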

People are of different blood groups. A person's blood group does not depend on the age or sex of the person but does depend on the person's ethnicity. The refractive index of glass varies within and between windows.

Continues...


Excerpted from Statistics and the Evaluation of Evidence for Forensic Scientists by C.G.G. Aitken and Franco Taroni. Copyright © 2004 by John Wiley & Sons, Ltd. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Uncertainty in Forensic Science.
The Evaluation of Evidence.
Variation.
Historical Review.
Transfer Evidence.
Discrete Data.
Continuous Data.
DNA Profiling.
References.
Indexes.
