Calculated Risks: The Toxicity and Human Health Risks of Chemicals in our Environment / Edition 2


by Joseph V. Rodricks
ISBN-10: 0521788781
ISBN-13: 978-0-521-78878-6
Pub. Date: 11/23/2006
Publisher: Cambridge University Press

Current price: $42.59 (original price $68.99)


Overview

Safeguarding economic prosperity, whilst protecting human health and the environment, is at the forefront of scientific and public interest. This book provides a practical and balanced view of toxicology, control, risk assessment, and risk management, addressing the interplay between science and public health policy. This fully revised and updated new edition provides a detailed analysis of exposure to chemicals and their by-products, how these substances enter the body, and the suitability of imposed safety limits. New chapters cover dose, with particular emphasis on children and vulnerable subpopulations; reproductive and developmental toxicants; and toxicity testing. With updated and comprehensive coverage of international developments in risk management and safety, this book will have broad appeal to researchers and professionals involved in chemical safety and regulation, as well as to the general reader interested in environmental pollution and public health.

Product Details

ISBN-13: 978-0-521-78878-6
Publication date: 11/23/2006
Pages: 358
Product dimensions: 6.00(w) x 1.25(h) x 9.00(d)

About the Author

Dr. Joseph V. Rodricks was a Founding Principal of ENVIRON International Corporation, a consultancy firm specializing in environmental and health issues. Since 1980 he has consulted for the World Health Organisation, and in 2005 he received the Outstanding Practitioner Award from the Society for Risk Analysis. The first edition of Calculated Risks won an 'Honourable Mention' award from the American Medical Writers Association.

Read an Excerpt

CALCULATED RISKS
Cambridge University Press
978-0-521-78308-8 - CALCULATED RISKS - The Toxicity and Human Health Risks of Chemicals in Our Environment - by Joseph V. Rodricks

Prologue – groundnuts, cancer, and a small red book

In the fall of 1960 thousands of turkey poults and other animals started dying throughout southern England. Veterinarians were at first stymied about the cause of what they came to label “Turkey X disease,” but because so many birds were affected, a major investigation into its origins was undertaken. In 1961 a report from three scientists at London’s Tropical Products Institute and a veterinarian at the Ministry of Agriculture’s laboratory at Weybridge, entitled “Toxicity associated with certain samples of groundnuts,” was published in the internationally prominent scientific journal, Nature. Groundnuts, as everyone in America knows, are actually peanuts, and peanut meal is an important component of animal feed. It appeared that the turkeys had been poisoned by some agent present in the peanut meal component of their feed. The British investigators found that the poisonous agent was not a component of the peanuts themselves, but was found only in peanuts that had become contaminated with a certain mold.

   It also became clear that the mold itself – identified by the mold experts (mycologists) as the fairly common species Aspergillus flavus – was not directly responsible for the poisoning. Turkey X disease could be reproduced in the laboratory not only when birds were fed peanut meal contaminated with living mold, but also when fed the same meal after the mold had been killed.

   Chemists have known for a long time that molds are immensely productive manufacturers of organic chemical agents. Perhaps the best known mold product is penicillin, but this is only one of thousands of such products that can be produced by molds. Why molds are so good at chemical synthesis is not entirely clear, but they surely can produce an array of molecules whose complexities are greatly admired by the organic chemist.

   In fact Turkey X disease was by no means the first example of a mold-related poisoning. Both the veterinary and public health literature contain hundreds of references to animal and human poisonings associated with the consumption of feeds or foods that had molded, not only with Aspergillus flavus, but also with many other mold species. Perhaps the largest outbreaks of human poisonings produced by mold toxins occurred in areas of the Soviet Union just before and during the Second World War. Cereal grains left in the fields over the winter, for lack of sufficient labor to bring them in, became molded with certain varieties that grow especially well, and produce their toxic products, in the cold and under the snow. Consumption of molded cereals in the following springtime led to massive outbreaks of human poisonings characterized by hemorrhaging and other dreadful effects. The Soviet investigators dubbed the disease alimentary toxic aleukia (ATA). The mold chemicals, or mycotoxins (“myco” is from the Greek word for fungus, mykes), responsible for ATA are now known to fall into a class of extremely complex organic molecules called trichothecenes, although toxicologists are still at work trying to reconstruct the exact causes of this condition. Veterinary, but probably not human, poisonings with this class of mycotoxins still take place in several areas of the world.

   Even older than ATA is ergotism. Ergot poisoning was widespread in Europe throughout the Middle Ages, and has occurred episodically on a smaller scale many times since. The most notable recent outbreak occurred in France in 1951. This gruesome intoxication is produced by chemical products of Claviceps purpurea, a purple-colored mold that grows especially well on rye, wheat, and other grains. Most of the ergot chemicals are in a class called alkaloids, one member of which can be easily modified to produce the hallucinogenic agent, LSD, which of course came into popular use as a recreational drug during the 1960s. Ergot poisons produce a wide spectrum of horrible effects, including extremely painful convulsions, blindness, and gangrene. Parts of the body afflicted with gangrenous lesions blacken, shrink, dry up and may even fall off. The responsible mold is, unlike many others, fairly easy to spot, and normal care in the processing of grain into flour can eliminate the problem.

   These and dozens more cases of mycotoxin poisonings were known to the investigators at the time they began delving into the causes of Turkey X disease, so finding that a mold toxin was involved was no great surprise. But some new surprises were in store.

   Investigations into the identity of the chemical agent responsible for Turkey X disease continued throughout the early 1960s at laboratories in several countries. At the Massachusetts Institute of Technology (MIT) a collaborative effort involving a group of toxicologists working under the direction of Gerald Wogan and a team of organic chemists headed by George Büchi had solved the mystery by 1965. The work of these scientists was a small masterpiece of the art of chemical and toxicological experimentation. After applying a long series of painstakingly careful extraction procedures to peanuts upon which the Aspergillus flavus mold had been allowed to grow, the research team isolated very small amounts of the substances that were responsible for the groundnut meal’s poisonous properties. As is the custom among chemists, these substances were given a simple name that gave a clue to their source. Thus, from Aspergillus flavus toxin came the name aflatoxin.

   Organic chemists are never satisfied with simply isolating and purifying such natural substances; their work is not complete until they identify the molecular structures of the substances they isolate. The case of aflatoxin presented a formidable challenge to the MIT team, because they were able to isolate only about 70 milligrams (mg) of purified aflatoxin with which to work (a milligram is one-thousandth of a gram, and a gram is about 1/30th of an ounce). But the team overcame this problem through a masterful series of experimental studies, and in 1965 published details about the molecular structure of aflatoxin. It is shown in Chapter 6 (“Identifying carcinogens”).

   It turned out that aflatoxin was actually a mixture of four different but closely related chemicals. All possessed the same molecular backbone of carbon, hydrogen, and oxygen atoms (which backbone was quite complex and not known to be present in any other natural or synthetic chemicals), but differed from one another in some minor details. Two of the aflatoxins emitted a blue fluorescence when they were irradiated with ultraviolet light, and so were named aflatoxin B1 and B2; the names aflatoxin G1 and G2 were assigned to the green-fluorescing compounds. The intense fluorescent properties of the aflatoxins would later prove an invaluable aid to chemists interested in measuring the amount of these substances present in various foods, because the intensity of the fluorescence was related to the amount of chemical present.

   While all this elegant investigation was underway, it became clear that the aflatoxins were not uncommon contaminants of certain foods. A combination of the efforts of veterinarians investigating outbreaks of farm animal poisonings, survey work carried out by the Ministry of Agriculture in England, the US Department of Agriculture (USDA) and the Food and Drug Administration (FDA), and the investigations of individual scientists in laboratories throughout the world, revealed during the 1960s and 1970s that aflatoxins can be found fairly regularly in peanuts and certain peanut products, corn grown in certain geographical areas, and even in some varieties of tree nuts. Cottonseed grown in regions of the southwestern United States, but not in the southeast, was discovered to be susceptible. While peanut, corn, and cottonseed oils processed from contaminated products did not seem to carry the aflatoxins, these compounds did remain behind in the so-called “meals” made from these products. These meals are fed to poultry and livestock and, if they contain sufficiently high levels of aflatoxins, the chemical agents can be found in the derived food products – meat, eggs, and especially milk. The frequency of occurrence of the aflatoxins and the amounts found vary greatly from one geographical area to another, and seem to depend upon climate and agricultural and food storage practices.

   While this work was underway, toxicologists were busy in several laboratories in the United States and Europe attempting to acquire a complete profile of aflatoxins’ poisonous properties. These substances did seem to be responsible for several outbreaks of liver poisoning, sometimes resulting in death, in farm animals, but there was no evidence that aflatoxins reaching humans through various food products were causing similar harm. The most likely reason for this lack of evidence was the fact that the amounts of aflatoxins reaching humans through foods simply did not match the relatively large amounts that may contaminate animal feeds. Of course, if aflatoxins were indeed causing liver disease in people, it would be extremely difficult to find this out unless, as in the case of ATA or ergotism, the signs and symptoms were highly unusual and occurring relatively soon after exposure.

   In experimental studies in laboratory settings, aflatoxins proved not only to be potent liver poisons, but also – and this was the great surprise – capable of producing malignant tumors, sometimes in great abundance, in rats, ferrets, guinea pigs, mice, monkeys, sheep, ducks, and rainbow trout (trout are exquisitely sensitive to aflatoxin-induced carcinogenicity). Several early studies from areas of the world in which human liver cancer rates are unusually high turned up evidence suggesting, but not clearly establishing, a role for the aflatoxins. Aflatoxin’s cancer-producing properties were uncovered and reported in the scientific literature during the period 1961–1976, the same period during which these substances were discovered to be low-level but not infrequent contaminants of certain human foods.

   What was to be done? Were the aflatoxins a real threat to the public health? How many cases of cancer could be attributed to them? Why was there no clear evidence that aflatoxins could produce cancers in exposed humans? How should we take into account the fact that the amounts of aflatoxins people might ingest through contaminated foods were typically very much less than the amounts that could be demonstrated experimentally to poison the livers of rodents, and to increase the rate of occurrence of malignancies in these several species? And if aflatoxins were indeed a public health menace, what steps should be taken to control or eliminate human exposure to them? Indeed, because aflatoxins occurred naturally, was it possible to control them at all?

   These and other questions were much in the air during the decade from 1965 to 1975 at the Food and Drug Administration (FDA) – the public health and regulatory agency responsible for enforcing federal food laws and ensuring the safety of the food supply. Scientists and policy-makers from the FDA consulted aflatoxin experts in the scientific community, food technologists in affected industries, particularly those producing peanut, corn, and dairy products, and experts in agricultural practices. The agency decided that limits needed to be placed on the aflatoxin content of foods. In the 1960s, the FDA declared that peanut products containing aflatoxins in excess of 30 parts aflatoxin per billion parts of food (ppb) would be considered unfit for human consumption; a few years later the agency lowered the acceptable limit to 20 ppb. This ppb unit refers to the weight of aflatoxin divided by the weight of food; for one kilogram of peanut butter (about 2.2 lb), the 20 ppb limit restricts the aflatoxin content to 20 micrograms (one microgram is one-millionth of one gram – more will be said about these units later).
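   To make the unit arithmetic concrete, here is a minimal sketch in Python; the function name and sample values are ours, purely for illustration, and do not appear in the book:

    # Minimal sketch of the ppb (parts-per-billion) arithmetic described above.
    # The function name and sample values are illustrative, not from the book.
    def allowed_contaminant_ug(limit_ppb: float, food_mass_kg: float) -> float:
        """Maximum contaminant mass, in micrograms, permitted by a ppb limit.

        A ppb is a mass ratio: 1 part contaminant per 10**9 parts food.
        Since 1 kg = 10**9 micrograms, N ppb in 1 kg of food is N micrograms.
        """
        food_mass_ug = food_mass_kg * 1e9      # kilograms -> micrograms
        return limit_ppb / 1e9 * food_mass_ug  # apply the mass ratio

    # The 20 ppb limit applied to 1 kg (about 2.2 lb) of peanut butter:
    print(allowed_contaminant_ug(20, 1.0))     # -> 20.0 micrograms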

   The FDA’s decision was based on the conclusion that no completely safe level of human intake could be established for a cancer-causing chemical. This conclusion led, in turn, to the position that if analytical chemists could be sure aflatoxins were present in a food, then the food could not be consumed without threatening human health. The question then became: what is the smallest amount of aflatoxin that analytical chemists can reliably detect? By 1968 this amount – or, more accurately, concentration – was 30 ppb, and because of improvements in analytical technology, the detection limit later dropped to 20 ppb. The analytical chemist thus dictated the FDA’s position on acceptable aflatoxin limits.

   It turned out that meeting a 20 ppb limit was not excessively burdensome on major manufacturers of peanut butter and other peanut products, at least in the United States; aflatoxin tended to concentrate in discolored or otherwise irregular peanuts, which, fortunately, could be picked up and rejected by modern electronic sorting machines. Manufacturers did, however, have to institute substantial additional quality control procedures to meet FDA limits, and many smaller manufacturers had trouble meeting a 20 ppb limit. An extensive USDA program of sampling and analysis of raw peanuts, which continues to this day, was also put into place as the first line of attack on the problem.

   Did this FDA position make any scientific sense? It implied that if aflatoxin could be detected by reliable analysis, it was too risky to be consumed by humans, but that if the aflatoxin happened to be present below the minimum detectable concentration it was acceptable. (Analytical chemists can never declare that a chemical is not present. The best that can be done is to show that it is not present above some level – 20 ppb in the case of aflatoxins, and other, widely varying, levels in the case of other chemicals in the environment.) To be fair to the FDA, perhaps the word “acceptable” should be withdrawn; the agency’s position was not so much that all concentrations of aflatoxin up to 20 ppb were acceptable, but that nothing much could be done about them, because the chemists could not determine whether they were truly present in a given lot of food until the concentration exceeded 20 ppb.
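   A tiny sketch (ours, not the book’s) of how such a result is conventionally reported against a limit of detection (LOD); the function name and values are illustrative:

    # Illustrative sketch: analytical results are reported relative to a
    # limit of detection (LOD), never as "the chemical is absent."
    def report_aflatoxin(measured_ppb: float, lod_ppb: float = 20.0) -> str:
        """Report a measurement against the assay's limit of detection.

        Below the LOD the assay cannot distinguish the contaminant from
        zero, so the honest report is "< LOD", not "not present."
        """
        if measured_ppb < lod_ppb:
            return f"not detected (< {lod_ppb} ppb)"
        return f"detected at {measured_ppb} ppb"

    print(report_aflatoxin(12.0))  # not detected (< 20.0 ppb)
    print(report_aflatoxin(35.0))  # detected at 35.0 ppb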

   Was the FDA’s position scientifically defensible? Let us offer two responses that might reflect the range of possible scientific opinion:

   Yes. FDA clearly did the right thing, and perhaps did not go far enough. Aflatoxins are surely potent cancer-causing agents in animals. We don’t have significant human data, but this is very hard to get and we shouldn’t wait for it before we institute controls. We know from much study that animal testing gives a reliable indication of human risk. We also know that cancer-causing chemicals are a special breed of toxicants – they can threaten health at any level of intake. We should therefore eliminate human exposure to such agents whenever we can, and, at the least, reduce exposure to the lowest possible level whenever we’re not sure how to eliminate it.

   No. The FDA went too far. Aflatoxins can indeed cause liver toxicity in animals and are also carcinogenic. But they produce these adverse effects only at levels far above the FDA set limit. We should ensure some safety margin to protect humans, but 20 ppb is unnecessarily low and the policy that there is no safe level is not supported by scientific studies. Indeed, it’s not even certain that aflatoxins represent a cancer risk to humans because animal testing is not known to be a reliable predictor of human risk. Moreover, the carcinogenic potency of aflatoxins varies greatly even among the several animal species in which they have been tested. Human evidence that aflatoxins cause cancer is unsubstantiated. There is no sound scientific basis for the FDA’s position.

The whole matter of protective limits for aflatoxin became more complex in the early-to-mid 1970s when it became clear that analytical chemists could do far better than a 20 ppb detection limit. In several laboratories, aflatoxins could easily be detected as low as 5 ppb, and in some laboratories 1 ppb became almost routine. If the FDA was to follow a consistent policy, the agency would have had to call for these lower limits. But it did no such thing. It had become obvious to the FDA by the mid 1970s that a large fraction of the peanut butter produced by even the most technically advanced manufacturers would fail to meet a 1 ppb limit, and it was also apparent that other foods – corn meal and certain other corn products, some varieties of nuts (especially Brazils and pistachios) – would also fail the 1 ppb test pretty frequently. The economic impact of a 20 ppb policy was not great. The impact of a 1 ppb limit could be very large for these industries. Did it still make scientific sense to pursue an “analytical detection limit” goal, at any cost? Was the scientific evidence about cancer risks at very low intakes that certain?

   Here we come to the heart of the problem we shall explore in this book: just how certain is our science on matters such as this? And how should public health officials deal with the uncertainties? We shall be exploring the two responses to the FDA’s position that were set out earlier and learn what we can about their relative scientific merits; not specifically in connection with the aflatoxin problem, but in a more general sense. We shall also be illustrating how regulators react to these various scientific responses, and others as well, using some examples where the economic stakes are very high. One would like to believe that the size of the economic stakes would not influence scientific thinking, but it surely influences scientists and policy-makers when they deal with scientific uncertainties.

   In the meantime keep in mind that, although considerable progress has been made in reducing aflatoxin exposures, these mold products are still present in some foods, and you have probably ingested a few nanograms (billionths of a gram!) recently. Indeed, in many areas of the world, particularly in less developed countries, aflatoxin contamination of foods and feeds is widespread. Moreover, the evidence that aflatoxins are a cause of human liver cancer, particularly in individuals affected by hepatitis B virus, has strengthened considerably since the 1970s. The question of the magnitude of the health risk posed by the aflatoxins, and its overall public health significance, remains an important one.

   In 1983 a committee of the National Research Council – National Academy of Sciences issued a relatively brief report with red covers entitled: “Risk Assessment in the Federal Government: Managing the Process.” The so-called “Red Book” committee had been organized to respond to a request from the US Congress to examine the scientific work of those regulatory agencies that had been given responsibility for enforcing federal laws aimed at guarding the health of people using or otherwise exposed to chemical products of all types, and to chemical contaminants of the environment. These many laws required the regulators to make decisions regarding the introduction of some classes of new chemicals, and to begin setting limits (of the type just described for aflatoxin) on contaminants of air, water, and food, and of workplace environments. The development of knowledge regarding the toxic properties of commercially produced chemicals, and of the by-products of their production, use, and disposal, had accelerated considerably during the decade of the 1970s, as did knowledge of the many chemical by-products of energy production. At the same time, as analytical chemistry improved, knowledge of human exposures to these many products expanded at an even greater rate than did knowledge of their possible adverse health effects. Regulators were activated, and scientists in the various regulatory agencies began turning out what soon came to be called “risk assessments” – documents that attempted to integrate epidemiological and experimental information related to chemical toxicity, with information on human exposures to chemicals, for purposes of evaluating the public health consequences of these exposures. Completion of most risk assessments required the use not only of scientific data, but also the use of various incompletely tested assumptions – about low-dose effects, for example, or about the relevance of experimental animal data to humans. Scientific controversies of many types arose during this time, and scientists in regulatory agencies were often accused of manipulating risk assessment results (by arbitrary adoption of whatever assumptions yielded the preferred result) to satisfy the desires of regulatory decision-makers (to regulate or not, depending upon the political context and climate). Science, it was alleged, both by representatives of regulated industries and by consumer and environmental advocates, was being perverted.

   Certain members of the US Congress became convinced that close inspection of “regulatory science” and its uses in the making of regulations was necessary. In fact, some suggested that the risk assessment activities of federal agencies be institutionally separated from the decision-making processes of those agencies; in this way scientists could operate in environments free of political contamination, and simply serve up highly objective risk assessments for use by regulators. As is frequently the case when difficult science policy questions arise, the National Academy of Sciences was asked to offer its opinion. Thus came the Red Book on risk assessment, issued in 1983 after an 18-month study.

   The Red Book did much to clear the air, and its influence has been profound. The committee offered clear definitions of risk assessment and of the analytic steps that comprise it, and those definitions and their conceptual underpinnings remain for the most part in place today, not only in the United States but also around the world. The committee clarified the relationships between research, risk assessment, and the set of activities it described as risk management. Risk assessment, the committee insisted, was the critical link between research and decisions about the use of research results for public health protection (through regulation and other means of policy development as well). Most questions regarding risks to health are not answered directly by research scientists. Someone – the risk assessor – needs to evaluate and integrate often diverse and sometimes conflicting sets of research data, and to create a picture of what is known and what is not known about specific health risks, in a form that is (to use an overly fashionable term) transparent, and that is also useful to the risk management context.

   Perhaps the most important contribution of the Academy committee came in the area of what its report called “science policy.” The term was not used, as it is typically, to describe issues of, for example, the public funding of scientific research, or the priorities given to various research endeavors. In the context of the committee’s report, the phrase was used to describe the considerations to be given to the choice of scientific assumptions that are necessary to complete a risk assessment – necessary, because scientific knowledge is incomplete and decisions must be made. Not to act because knowledge is incomplete could clearly jeopardize public health. But to act on the basis of incomplete knowledge could also lead to unnecessary (and often very costly) regulations. Risk assessments conducted using the best available knowledge and, where necessary, assumptions not chosen on an arbitrary, case-by-case basis, but adopted for general application, could bring a greater degree of objectivity to the decision-making process. The Red Book offered a guide to the selection of those general assumptions. The recommendations found in the 1983 report (which is still available from the National Academy and which is a critical resource for acquiring an understanding of the current world of risk analysis policy) provide much of the framework for this book. This book is far more heavily devoted than is the Red Book to the scientific underpinnings for risk assessment, but its discussion of how risk assessment draws upon that science and arrives at results useful for risk management is heavily under the report’s influence.

   By the way, the committee rejected the suggestion that risk assessment activities be institutionally separated from the risk management activities of regulatory agencies. It recognized the potential problem of the distortion of science, but proposed other, less drastic means to minimize that problem. The committee’s thinking on this important matter will emerge in the later chapters of the book.




© Cambridge University Press

Table of Contents


Preface to the first edition     vii
Preface to the second edition     xiii
List of abbreviations     xvi
Prologue     1
Chemicals and chemical exposures     11
From exposure to dose     28
From dose to toxic response     54
Toxic agents and their targets     91
Carcinogens     136
Identifying carcinogens     162
Risk assessment I: some concepts and principles     202
Risk assessment II: applications     215
Risk assessment III: new approaches, new problems     250
Risk assessment IV: the courtroom     273
The management of risk     282
A look ahead     312
Sources and recommended reading     320
Index     327