Organ Transplants and the Reinvention of Death
By Margaret Lock
UNIVERSITY OF CALIFORNIA PRESS
Copyright © 2002 the Regents of the University of California
All rights reserved.
Boundary Transgressions and Moral Uncertainty
The exotic charm of another system of thought is the limitation of our own, the stark impossibility of thinking that.
Michel Foucault, The Order of Things
In this book I show how brain death is associated with different sets of assumptions about what constitutes the end of human life in Japan and North America. I also highlight which conditions are thought by some to be "as good as dead" and ask if and when it is appropriate to make utilitarian use of body parts. Differing assumptions in the two regions yield different answers to these questions. They touch on boundaries between nature and culture, life and death, self and other, person and body. Medical science is undoubtedly one of the principal arbiters of these judgments, but it should not be thought of as inevitably determining decisions about human death.
Biological death is recognized in society and in law by the standards of medical science, but what exactly constitutes death of a body, and what does this death signify with respect to death of the person? Does irreversible brain damage count as biological death, even if on occasion signs of life remain in the brain and other parts of the body continue to function, albeit aided by medical technology? And if irreversible brain damage counts as biological death, does this mean that the person too has died? Moreover, does the law recognize brain death as human death? Only when consensus is reached on these points can a brain-dead body be thought of as cadaverlike and made available for commodification.
The position of the North American public, largely ignorant of the issues, remains obscure. Among North American physicians, brain death has been broadly recognized as an indicator of both biological and personal death; in Japan, there is no consensus. In recent years, in several countries, a few clinicians (most of them neurologists), legal commentators, and philosophers have argued that irreversible brain damage should not be thought of as biological death, but that it nevertheless represents the end of meaningful life. I review here several of the key concepts that inform my critical reading of this entire debate.
Moral Economies of Science and Styles of Reasoning
A dominant approach in the "modern" world argues that nature, including the human body in life and death, functions according to scientific laws and is, therefore, autonomous and independent of social context and the moral order. When nature is understood this way, boundaries between nature and culture appear self-evident and pose few, if any, philosophical problems. Humans, however, are often characterized as tool makers, and tool making is a pursuit that has permitted us over millennia to transform the natural environment and harvest its riches. The usual explanation given for such activities, this cultural modification of the natural, is that they are essential to meet "basic" human needs. In other words, nature must be reworked on the basis of intellectual and technological innovation, but nevertheless functions according to laws that ensure its continued autonomy from culture.
With the formation of the biological sciences in the nineteenth century, systematic examination, classification, and manipulation of the environment and of the "natural" objects that inhabit it, including human, animal, and plant materials, expanded enormously. By the second half of the century, the idea of improving on nature, and thus of providing for far more than basic needs, was firmly established. At the same time, the language of needs expanded into one of rights. Today, for people in the so-called developed world, lifelong good health is clearly included in these expectations (even though the vast majority of the world's population, including many residents of the United States and Canada, will not enjoy this assurance without major economic reforms).
The dominant ideology of an autonomous nature is increasingly challenged by philosophers, historians, social scientists, and natural scientists themselves. The making of stone tools and the laboratory replication of DNA alike require application of the human imagination. All knowledge about the natural world and its transformation must inevitably be mediated by our senses, making the conceptualization of nature, including the specification of its relationship to human society, contingent. Moreover, meanings attributed to both nature and society change through time and space (Cronon 1996; Daston 1992; Latour 1993; Lock et al. 2000). Cronon argues, for example, that nature is a human idea with a long and complicated cultural history that has led human beings to conceive of the natural world in very different ways (1996:20). Following this line of argument, the distinction between life (associated with culture) and death (associated with nature), although usually regarded as unproblematic, is necessarily blurred.
Sophisticated challenges to the epistemology of science, including that of biology, with its appeal to objectivity, do not dispute the reality of the material world. Nor is it asserted that morals, judgments, and assessments are present in every aspect of the scientific endeavor in exactly the same way as they are in other areas of human life. Scientific reasoning is not conceptualized by these critics, whose ideas I share, as a form of human conversation, as Richard Rorty has argued (1988), but neither is science understood as exempt from interrogation about its truth claims.
Lorraine Daston, a historian of science, posits what she calls a "moral economy" of science. She notes that the ideal of scientific objectivity insists on "the existence and impenetrability" of boundaries between facts and values, between emotions and rationality, but she insists that this ideal is based on an illusion (Daston 1995:3). Certain forms of empiricism, quantification, and notions about objectivity itself require a moral economy to sustain them. By moral economy, Daston means "a web of affect-saturated values that stand and function in well-defined relationship to one another" (1995:4). Objects or actions are valorized and form part of a balanced system of emotional forces, with equilibrium points and constraints. "Although it is a contingent, malleable thing of no necessity, a moral economy has a certain logic to its composition and operations. Not all conceivable combinations of affects and values are in fact possible" (1995:4). Daston is not arguing that ideologies or political self-interest inevitably penetrate the scientific endeavor (although, at times, clearly they do), nor is she suggesting that science is merely socially constructed. Even though moral economies in science "draw routinely and liberally upon the values and affects of ambient culture, the reworking that results usually becomes the peculiar property of scientists" (1995:7). This is, Daston argues, a special instance of hegemony, often solidified slowly but relentlessly over, sometimes, hundreds of years.
Moral economies are not limited to one particular discipline or sub-discipline of science. Belief in the powers of quantification, empiricism, and objectivity, agreement as to what counts as evidence, and so on, are common across almost all facets of the scientific endeavor. During the nineteenth century, once death was made into a medical rather than primarily a religious matter, the assessment of death was transformed into a rigorous scientific endeavor. In hospitals and other medical settings, individual death was stripped of much of its social significance and remade as a biological event. Doctors acquired the authority to pronounce death because lay people did not have the required expertise or objectivity. As chapter 2 shows, large segments of the European and North American public became deeply fearful of medical authority in this new domain. Anxieties abounded about premature pronouncement of death and burial alive. The medical world responded by attempting to apply science more rigorously. By contrast, in Japan, where medical authority over death has been relatively weak until recent years, the social significance of individual death has not been subordinated to a medicalized, objective death, even in medical settings and despite the fact that Japanese doctors participate in essentially the same moral economy of science as those in the West.
Whereas a moral economy is common across the sciences, the philosopher Ian Hacking focuses on a narrower perspective when he argues for a "disunity" of science. The sciences should be grouped together, Hacking suggests, "in terms of one of their disunities, their styles" (1996:74)—thus avoiding any absolute conception of reality. He asks what it is about certain styles of reasoning that make them endure while others falter, and how and why such styles of reasoning become authenticated as truthful and accurate. Because some arguments are clearly more effective than others, attention must be paid, Hacking insists, to the "self-stabilizing techniques peculiar to a given style of reasoning" (1996:73). Among the techniques characteristic of contemporary science Hacking includes modification of hypotheses, the rebuilding of instruments, reconsideration of data analysis, and so on. It is these self-stabilizing techniques, together with processes of vindication, that Hacking sees as distinguishing scientific reasoning from most humanistic and moral thought.
The philosopher Arnold Davidson, writing about the history of psychiatry, suggests that as styles of reasoning develop, they bring with them "new categories of possible true-or-false statements" (1996:79). He cites an example used originally by Ian Hacking to make this point:
Consider the following statement that you might find in a Renaissance medical textbook: "Mercury salve is good for syphilis because mercury is signed by the planet Mercury which signs the marketplace, where syphilis is contracted." Hacking argues, correctly I think, that our best description of this statement is not as false or as incommensurable with current medical reasoning, but rather as not even a possible candidate for truth-or-falsehood, given our currently accepted styles of reasoning. But a style of reasoning central to the Renaissance, based on the concepts of resemblance and similitude, brings with it the candidacy of such a statement for the status of true-or-false. Categories of statements get their status as true-or-false vis-à-vis historically specifiable styles of reasoning.
Davidson elaborates on how most psychiatrists working today habitually draw on familiar analogies and make predictable inferences significantly different from those used in earlier decades. However, when it comes to the "soft" part of psychiatry, to the creation of taxonomies of illness (attention deficit and hyperactivity disorder [ADHD], for example), and to psychotherapeutics, then different, competing styles of reasoning and schools of thought within the discipline are apparent.
It is tempting to extend Davidson's argument cross-culturally. Styles of reasoning then differ not only through time but also through space—that is, the space of culture. But this is to oversimplify matters. Science, including medical science, makes use in effect of a globalized moral economy, and often styles of reasoning may be more or less commensurate around the world. My research in intensive care units (ICUs) in Japan and North America suggests that the styles of reasoning used by neurologists and intensivists in these two locations to determine brain death are remarkably similar. There is virtually universal agreement, for instance, that the condition of brain death, accurately assessed, is irreversible (although continuing advances in trauma medicine may prevent many patients from progressing to this state).
However, even this seemingly firm end point is sometimes seriously disputed. In a recent report to a committee investigating the low rate of organ donation in Canada, Ruth Oliver, a psychiatrist, claimed that she was declared clinically dead twenty-two years previously. She insisted that she is "living testimony that people survive clinical death and brain death" even when labeled by some as "'irreversibly and inevitably' dying." A perceptive newspaper reporter noted that Oliver would probably not have been diagnosed as brain-dead today, thanks to improved investigative procedures (McIlroy 1999). In this relatively short period the technology has not changed radically, but cumulative experience, and with it systematization of the methods and reasoning used to determine brain death, have. The reporter, not surprisingly, interprets this change as an "improvement," as progress, and these are changes in which neurologists in Japan, North America, and other parts of the world have all participated.
Beyond consensus about the irreversibility of brain death and how to determine it (guidelines in Europe, North America, and Japan show small methodological discrepancies), differences clearly exist about the significance of a brain-death diagnosis. However, they do not fall neatly along cultural or geographic divisions. It cannot be said categorically that Japanese neurologists and emergency medicine doctors see things one way and North American neurologists and intensivists another. Certainly the majority of North American neurologists agree that brain death represents both biological death and death of the person, whereas much less certainty exists among Japanese clinicians. But complete agreement does not exist in North America, and indeed recent empirical findings have introduced considerable disquiet into the discussion—so much so that the fundamental reasoning may well have to be modified.
A recent editorial in Neurology argues, for example, that with technological developments permitting the survival of a few brain-dead patients for months and even years, "even the 'dead' are not terminally ill any more" (Cranford 1998:1530). A double meaning is at work here, because the definition of "terminally ill," at least in United States Medicare regulations, indicates that a patient will in all probability be dead in six months. Cranford may well be highlighting the ambiguity that these empirical findings raise both for being terminally ill and being dead.
All along, the brain-death debate has hinged on several crucial questions: What is a person? What is the relationship of person to body? Does the person cease to exist when the physical body dies? And perhaps the most fundamental, most obdurate question of all: What exactly is death—physical, personal, and social? Obviously answers to these questions depend on values articulated in the broader social milieu. They do not involve conflicts in styles of clinical reasoning about the determination of brain death, but they do result in fundamental differences in clinical practice and patient care, and above all in conflicting ideas about the commodification of living cadavers.
It is the ambiguity of the brain-dead body that permits such varied responses to it. The result has been a complex, messy relationship among clinical practice, the styles of medical reasoning common to neurologists, the moral economy of contemporary medicine, with its emphasis on objectivity, and the social milieus in which these clinical practices are embedded. Clearly, if ideas about the nature of persons, individuals, human essences, and souls are implicated, then we are concerned with beliefs and concepts of great historical depth, drawn from the humanities, religion, and metaphysics as well as from everyday common sense and from medical science. These concepts are further complicated as they mingle over time and space. The idea of the person as an autonomous individual has its origin in Europe, for example, but it has made deep inroads into Japanese thought. Although such values and concepts are outside the style of reasoning fundamental to neurobiology, they nevertheless profoundly influence clinical responses to the brain-dead.
We must also be concerned with the way in which ideas about the worth of persons and bodies, alive or dead, are employed to legitimize arguments for and against the recognition of brain death as the end of human life. Once again, the ambiguity associated with a brain-dead body permits this type of rhetoric to flourish. Such rhetoric does not have the power of self-authentication that Hacking assigns to styles of reasoning used in science. For one thing it must be convincing to several audiences: politicians, lawyers, physicians, the media, and the public. Competing discourse and rhetoric in the public domain in turn influence the way in which brain death is debated, institutionalized, managed, and modified in clinical settings.
In North America it has proved possible to claim that brain death is, for all intents and purposes, the end of recognizable human life. In Japan, this view has been repeatedly challenged. Both medical objectivity and diagnostic precision have come under fire from the media, from the legal profession, and from medicine itself. Even though Japanese neurologists concur about the irreversibility of brain death, and the vast majority of them are convinced that they can diagnose this condition reliably, they nevertheless remain reluctant to cooperate with organ procurement. Even now, many hesitate to encourage relatives to think of brain-dead patients as dead. Thus, even though a reasonably stable and similar style of medical reasoning exists in the two locations (though always subject to challenge in light of new medical knowledge and technologies), this does not lead to a congruence of clinical outcomes.
Excerpted from Twice Dead by Margaret Lock. Copyright © 2002 the Regents of the University of California. Excerpted by permission of UNIVERSITY OF CALIFORNIA PRESS.