Stochastic Modelling Of Drinking Water Treatment In Quantitative Microbial Risk Assessment

by Patrick W. M. H. Smeets

Paperback

$132.00

Overview

Special Offer: KWR Drinking Water Treatment Set - Buy all five books together and save a total of £119!

Safe drinking water is a basic need for all human beings. Preventing microbial contamination of drinking water is of primary concern since endemic illness and outbreaks of infectious diseases can have significant social and economic consequences. Confirming absence of indicators of faecal contamination by water analysis only provides a limited verification of safety. By measuring pathogenic organisms in source water and modelling their reduction by treatment, a higher level of drinking water safety can be verified.

This book provides stochastic methods to determine reduction of pathogenic microorganisms by drinking water treatment. These can be used to assess the level and variability of drinking water safety while taking uncertainty into account. The results can support decisions by risk managers about treatment design, operation, monitoring, and adaptation. Examples illustrate how the methods can be used in water safety plans to improve and secure production of safe drinking water.

More information about the book can be found on the Water Wiki in an article written by the author here: http://www.iwawaterwiki.org/xwiki/bin/view/Articles/Quantifyingmicro-organismremovalforsafedrinkingwatersupplies

Product Details

ISBN-13: 9781843393740
Publisher: IWA Publishing
Publication date: 11/30/2010
Series: KWR Watercycle Research Institute Series
Pages: 196
Product dimensions: 6.12(w) x 9.25(h) x 0.75(d)

Read an Excerpt

CHAPTER 1

Introduction

P.W.M.H. Smeets, L.C. Rietveld, J.C. van Dijk, G.J. Medema, W.A.M. Hijnen and T.A. Stenström

HISTORY OF MICROBIALLY SAFE DRINKING WATER

From the beginning of time, man has learned to choose drinking water carefully in order to reduce the risk of illness. Drinking water supply started with the rise of civilisations. The population in cities and communities needed to be provided with safe drinking water, while at the same time the water was increasingly polluted by those communities. This led to waterborne outbreaks of infectious disease, which were recorded by the Egyptians as early as 3180 BC (Rose and Masago, 2007). Outbreaks continued to occur through the ages, as the relationship between faecal pollution of the water and outbreaks had not been recognised.

Drinking water treatment of surface water was originally started to improve the aesthetic properties of drinking water. By the time of the Egyptians (15th–13th century BC) and the Romans (300 BC–AD 200), settling was applied to reduce turbidity, and in the 5th century BC Hippocrates, the Father of Medicine, invented the 'Hippocrates Sleeve', a cloth bag to strain rainwater. Supply of settled and filtered water in modern times started in 1804 (Scotland) and 1806 (Paris). Initially slow sand filters were used to provide a more aesthetic product, and filtration was soon recognised to reduce outbreaks of typhoid and cholera. Following the Hamburg cholera outbreak of 1892, Robert Koch studied water filtration systems that were effective in the removal of bacteria. In his biography of Koch's work, Brock (1988) states that 'water filtration has probably saved more lives than immunization and chemotherapy combined'. In 1906 the first ozonation plant for disinfection was started in France. John Snow had already promoted chlorination after his pioneering epidemiologic studies during London's cholera outbreaks of the 1850's. Still, chlorination only became common practice around 1910. From 1920 the combination of sedimentation, filtration and chlorination virtually eliminated epidemics of classical waterborne diseases, such as cholera and typhoid, in areas so supplied (AWWA, 2006). However, outbreaks of waterborne disease due to poor drinking water quality still occur today, even when treatment is in place. From 1974 to 2002, 26 out of 35 outbreaks in the USA and Canada, as reported by Hrudey and Hrudey (2004), were due to surface water treatment failure or inadequate treatment to deal with sudden peak increases of pathogen concentrations in source water. Some major outbreaks, such as the cryptosporidiosis outbreak in Milwaukee where treatment efficiency was compromised, could have been prevented, or their impact on human health reduced, by adequate treatment. So despite modern water treatment, means to verify that the water is safe to drink are still required.

By the end of the nineteenth century, the presence of specific bacteria in drinking water was recognised as an indicator of unsafe water. The use of coliforms as indicator organisms to judge the microbial safety of drinking water was initiated (Greenwood and Yule, 1917). The absence of indicator organisms such as Escherichia coli in drinking water is still part of most legislation today. In the 1970's the shortcomings of coliforms became clear. Newly recognised waterborne pathogens, such as viruses and protozoa, turned out to be more resistant than coliforms to drinking water treatment processes such as chlorination. The search for other, more resistant indicator organisms such as bacterial spores and bacteriophages was started. Their applicability turned out to be limited, as outbreaks continued to occur even when no indicator organisms were detected (Hrudey and Hrudey, 2004). Large drinking water related outbreaks were generally picked up by epidemiology, but the prevalence of endemic illness caused by drinking water was so low in most developed countries that epidemiology was not sensitive enough to identify the source (Taubes, 1995). Apart from monitoring drinking water for the absence of indicator organisms, other ways to protect the drinking water consumer were sought. In the 1970's the National Academy of Sciences initiated chemical risk assessment for drinking water, resulting in the Safe Drinking Water Act of 1974 (SDWA, 1974). Analogous to the chemical risk targets, a target for the risk of infection (not illness) below 10⁻⁴ per person per year was advocated in the USA.

Between 1983 and 1991 quantitative microbial risk assessment (QMRA) was used sporadically to assess microbial risks in drinking water (Haas, 1983; Gerba and Haas, 1988; Regli et al., 1991 and Rose et al., 1991). These first assessments were focussed on producing a reliable dose-response relationship for the very low pathogen doses expected in drinking water. These led to the 'single hit theory' stating that exposure to a single pathogenic organism could lead to infection and subsequently illness. The studies calculated the risk of infection from the monitored or estimated pathogen concentrations in drinking water. These studies recognised the limitations of drinking water monitoring for QMRA. Regli et al. (1991) concluded that: 'Inordinately large numbers of high-volume samples (generally a total volume of > 100,000 to 1,000,000 L) are required to ascertain whether a potable water is below the 10-4 risk level. Thus, finished-water monitoring is only practical to determine whether a very high level of risk exists, not whether a supply is reasonably safe.' Hrudey and Hrudey (2004) showed that the occurrence of false positives makes it virtually impossible to estimate indicator bacteria concentrations in drinking water by monitoring at the observed low level. However, direct monitoring of pathogens in drinking water has been applied. The statutory Cryptosporidium monitoring (DWI, 1999) in the UK has been the most extensive monitoring program for pathogens in drinking water and is further discussed in Chapter 3.
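To make the 'single hit' reasoning and the quoted sampling volumes concrete, the sketch below uses the exponential single-hit dose-response model, P_inf = 1 − exp(−r·dose), to back-calculate the finished-water concentration corresponding to a 10⁻⁴ annual risk of infection, and then the expected organism count in a 100,000 L monitoring volume at that concentration. The dose-response parameter r and the daily consumption volume are illustrative assumptions, not values taken from the studies cited above.

```python
import math

# Exponential single-hit dose-response model: each ingested organism has an
# independent probability r of initiating infection.
def p_infection(dose, r):
    return 1.0 - math.exp(-r * dose)

# Illustrative assumptions (not values from the cited studies):
r = 0.02               # per-organism probability of infection
daily_intake_l = 1.0   # litres of unboiled drinking water consumed per day

# Daily risk that corresponds to a 10^-4 annual risk of infection,
# assuming independent exposures on 365 days.
annual_target = 1e-4
daily_target = 1.0 - (1.0 - annual_target) ** (1.0 / 365.0)

# Invert the dose-response model to find the allowable daily dose and the
# corresponding concentration in finished water.
dose_target = -math.log(1.0 - daily_target) / r
conc_target = dose_target / daily_intake_l
print(f"Allowable concentration: {conc_target:.2e} organisms per litre")
print(f"Check: daily risk at that dose = {p_infection(dose_target, r):.2e}")

# At that concentration even a 100,000 L sample is expected to contain only a
# handful of organisms, which is why finished-water monitoring alone cannot
# verify the 10^-4 risk level.
print(f"Expected count in 100,000 L: {conc_target * 1e5:.1f}")
```

Under these assumed numbers the expected count in 100,000 L is on the order of one organism, which illustrates why Regli et al. point to total sample volumes of 100,000 L and more.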

To overcome the shortcomings of drinking water monitoring, computational methods were applied in QMRA. Regli et al. (1991) stated that: 'Determining pathogen concentration (or demonstrating its absence) in source waters and estimating the percentage-removal or inactivation by treatment allow for risk estimates of pathogen occurrence in finished water and the associated risk of infection.' Subsequent studies found that quantifying treatment efficacy introduced substantial uncertainty in QMRA (Teunis et al., 1997; Gibson III et al., 1999 and Payment et al., 2000). From the outbreaks it had become clear that short hazardous events could have a significant impact on public health. In addition, the financial consequences of an outbreak may well make these events important to identify and avert (Signor and Ashbolt, 2007). Although counteracting peak events is necessary to prevent outbreaks, sufficient treatment during baseline (normal) conditions is also required to achieve an acceptable level of endemic infections. In specific situations the sporadic cases (during baseline conditions) appeared to represent a greater proportion of waterborne disease than outbreaks (Nichols, 2003). This was also a conclusion reached for a water supply system in Gothenburg, based on failure reporting and QMRA (Westrell et al., 2003).
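The computational route described by Regli et al. — measure or estimate the pathogen concentration in source water, apply the reduction achieved by treatment, and convert the resulting finished-water concentration into a risk of infection — can be written down in a few lines. The sketch below is a minimal, deterministic version; it is not the book's model, and all parameter values are illustrative assumptions.

```python
import math

def finished_water_conc(source_conc_per_l, log_reduction):
    """Apply a total treatment log reduction to a source-water concentration."""
    return source_conc_per_l * 10.0 ** (-log_reduction)

def annual_infection_risk(conc_per_l, daily_intake_l, r, days=365):
    """Exponential single-hit dose-response with independent daily exposures."""
    daily_dose = conc_per_l * daily_intake_l
    p_day = 1.0 - math.exp(-r * daily_dose)
    return 1.0 - (1.0 - p_day) ** days

# Illustrative assumptions (not data from the cited studies):
source_conc = 10.0    # pathogens per litre in source water
log_reduction = 6.0   # total log credits claimed for the treatment train
r = 0.02              # per-organism probability of infection

conc = finished_water_conc(source_conc, log_reduction)
risk = annual_infection_risk(conc, daily_intake_l=1.0, r=r)
print(f"Finished-water concentration: {conc:.1e} per litre")
print(f"Annual risk of infection:     {risk:.1e}")
```

In a stochastic QMRA the fixed numbers above are replaced by distributions describing variability and uncertainty, which is the subject of the following sections.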

STATE OF THE ART OF QMRA IN 2002

Treatment assessment for QMRA

Regli et al. (1991) first suggested monitoring pathogens in source water and modelling the removal by treatment. Initially rules of thumb and engineering guidelines were used to provide a point estimate of treatment efficacy. Rose et al. (1991) used QMRA to determine the required treatment efficacy to reach health-based targets, rather than actually assess the efficacy. As more research was performed, it became clear that treatment efficacy could vary substantially between treatment sites. LeChevallier et al. (1991) assessed treatment efficacy for Cryptosporidium and found substantial differences in treatment efficacy at very similar sites. These could not be explained by treatment characteristics such as filter-to-waste practice or choice of coagulant. Payment et al. (1993) studied removal and inactivation of viruses and indicator organisms. They used the mean of the observed concentrations before and after treatment steps to quantify treatment efficacy, thus disregarding the effect of treatment variations. Other QMRA studies did not model treatment but started from a concentration in treated water, such as Haas et al. (1993), who based virus concentrations in drinking water on Payment (1985). Similarly, Crabtree et al. (1997) did not estimate treatment efficacy for virus removal but assumed concentrations in drinking water of 1/1000 and 1/100 virus per litre. Gerba et al. (1996) assumed 4 log reduction of rotavirus by treatment based on the Surface Water Treatment Rule (SWTR, USEPA, 1989) credits. Teunis et al. (1997) incorporated the variation in time and the uncertainty with regard to the efficacy at a specific site in a stochastic QMRA by using PDFs to describe the concentrations of microorganisms and treatment efficacy. Microbial monitoring data before and after treatment were paired by date to provide a set of reduction values, and the PDF was fitted to these. Their conclusion was that (variation of) reduction by treatment dominated the uncertainty of this risk. Haas et al. (1999) provided an overview of methods for QMRA both in drinking water and in other fields such as recreational waters and food. They found that identification of the distributional form may be subject to error if a limited number of data points is used. Consequently, the risk analysis should not put too much weight on the tails of these distributions, which would represent rare events of poor treatment. Haas et al. (1999) also discussed the use of monitoring data (virus removal by lime treatment) and process models (for virus decay in groundwater and chemical inactivation) to assess treatment efficacy. Teunis and Havelaar (1999) performed a full QMRA, including quantification of treatment efficacy using the monitored reduction of Spores of Sulphite-reducing Clostridia (SSRC) as a surrogate for Cryptosporidium removal. Variability of filtration was modelled by a two-phase model: 'good removal' and 'poor removal'. Medema et al. (1999) applied similar methods. Variability of ozonation was modelled by running an inactivation model with monitored ozone concentrations. Payment et al. (2000) used log credits from the SWTR in risk assessment of Giardia since 'Attempting to actually enumerate indicator microorganisms or pathogens under actual plant conditions rarely provides useful data'. Dewettinck et al. (2001) assessed the safety of drinking water production from municipal wastewater based on treatment efficacy reported in the literature. Fewtrell et al. (2001) assessed the uncertainties in drinking water QMRA and found that treatment contributed the least uncertainty; however, this was based on a single experiment of Cryptosporidium removal by treatment. A 2001 USEPA study on Cryptosporidium removal (USEPA, 2001) found large ranges of removal (typically over 3 log) and generally less removal at full scale than at laboratory or pilot scale. In an extensive literature review of treatment efficacy by LeChevallier and Au (2004), large variations in treatment efficacy between studies were found. Masago et al. (2002) applied QMRA to assess the risk from Cryptosporidium, including the effect of rare events. Treatment was modelled bimodally with good removal (99.96%) or poor removal (70.6%).
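As a rough illustration of the pairing approach attributed to Teunis et al. (1997) above, the sketch below converts influent and effluent counts observed on the same dates into log10 reduction values and fits a normal distribution to them by the method of moments, so that the variability of treatment efficacy can later be sampled in a Monte Carlo QMRA. The counts and the choice of a normal distribution are assumptions made purely for illustration.

```python
import math
import random
import statistics

# Paired influent/effluent counts per litre, matched by sampling date
# (illustrative values only, not data from the cited studies).
influent = [1200.0, 800.0, 1500.0, 950.0, 2000.0, 1100.0]
effluent = [0.8, 1.5, 0.4, 2.0, 1.2, 0.6]

# Observed log10 reduction for each paired sample.
log_reductions = [math.log10(i / e) for i, e in zip(influent, effluent)]

# Fit a normal PDF to the log reductions (method of moments).
mu = statistics.mean(log_reductions)
sigma = statistics.stdev(log_reductions)
print(f"Fitted log-reduction PDF: mean = {mu:.2f}, sd = {sigma:.2f}")

# Draw treatment efficacies from the fitted PDF for use in a Monte Carlo QMRA.
sampled_log_reductions = [random.gauss(mu, sigma) for _ in range(10_000)]
```

With only a handful of paired samples the fitted tails are poorly constrained, which is exactly the caution about distributional form raised by Haas et al. (1999) above.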

In general it could be concluded that most QMRA studies used log credits to model treatment performance, which were not site-specific. Site-specific assessment of treatment efficacy for QMRA indicated that treatment efficacy at full scale could be significantly higher or lower than the applied log credits (Teunis et al., 1997, 1999; Teunis and Havelaar, 1999 and Medema et al., 1999). Moreover, such an assessment could provide management strategies to be applied at the site to improve drinking water safety. Site-specific assessment was complicated by the way pathogens were distributed in water, by treatment variations, and by correlation between treatment steps.

Distribution of pathogens in water

Pipes et al. (1977) found that organism counts in 100 ml samples taken from a 10 L sample were not necessarily Poisson distributed, which would be expected if the organisms were randomly dispersed in the water. Gale et al. (1997) found that although Bacillus subtilis var. niger were Poisson distributed in raw water, this was not the case in treated water (within a 500 ml sample). They concluded that treatment could change the distribution of microorganisms in the water. As a consequence, in an outbreak, overdispersion would lead to some individuals ingesting high numbers of pathogens and some not receiving any. In combination with the dose-response relationship, this might have an impact on the assessed risk. At low doses the risk of infection would be determined by the arithmetic mean concentration. The arithmetic mean is dominated by the rare high concentrations when organisms are over-dispersed. Quantifying these high concentrations is problematic due to their rarity. Gale (2001) also showed that organisms are not completely dispersed in drinking water. The (lack of) relation between influent and effluent samples observed by Teunis et al. (2004) might partly be caused by the overdispersion of microorganisms. The change of the distribution of microorganisms in water due to water treatment processes was likely to affect the reduction by treatment observed from microbial monitoring.
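The effect of overdispersion described above can be illustrated with a small simulation: two sets of one-litre doses with the same arithmetic mean concentration, one Poisson (randomly dispersed) and one clustered (a gamma-Poisson, i.e. negative binomial, mixture), differ sharply in how often a consumer receives several organisms at once. The parameter values below are assumptions chosen only to show the contrast, not data from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_conc = 0.1     # organisms per litre, arithmetic mean (assumed)
events = 1_000_000  # simulated one-litre consumption events

# Randomly dispersed organisms: Poisson-distributed counts per litre.
poisson_doses = rng.poisson(mean_conc, events)

# Clustered (overdispersed) organisms: gamma-Poisson mixture with the same
# mean, i.e. the local concentration itself varies between samples.
k = 0.05  # dispersion parameter (assumed); smaller means more clustered
local_conc = rng.gamma(shape=k, scale=mean_conc / k, size=events)
clustered_doses = rng.poisson(local_conc)

for name, doses in (("Poisson", poisson_doses), ("Clustered", clustered_doses)):
    print(f"{name:>9}: mean dose = {doses.mean():.3f}, "
          f"P(dose >= 5) = {np.mean(doses >= 5):.1e}")
```

Both sets have the same mean, so a mean-based risk calculation looks identical, yet the clustered case concentrates exposure in a few high doses, which is the situation Gale describes for treated water.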

Treatment variation and rare events

From stochastic QMRA studies it became clear that when variations were incorporated, rare events of high pathogen concentrations or poor treatment could dominate the risk of infection. Haas and Trussell (1998) compared a system redundancy method to a stochastic method as a way of incorporating rare events of poor treatment. The system redundancy method was based on log credits per treatment step. Compliance of reduction by the total treatment was required even when one barrier failed completely (rare event). The stochastic method applied a probability density function (PDF) of likely performance to the separate barriers and combined these in a Monte Carlo simulation to predict total treatment efficacy for QMRA. The importance of a good PDF fit for very skewed data was stressed, implying that high numbers of data points were required. Gibson III et al. (1999) identified exposure assessment (including treatment assessment) as one of the important fields of research for risk assessment of waterborne protozoa, due to the uncertainty about and variability of protozoan reduction by treatment. Teunis et al. (1999, 2004) explored various methods to quantify variation of treatment efficacy. They found that the extremes of the distributions of treatment efficacy (and other factors such as recovery) dominated the assessed risk. The approach of statistical analysis of fractions was more appropriate than the often-used calculations based on the ratio between the (geometric) means 'before' and 'after' treatment. Masago et al. (2002) found that eliminating rare occurrences (<1% of the time) of high concentrations exceeding 1/80 L was required to reduce the risk to 10⁻⁴ per person per year. This demonstrated the impact of rare events on average risk and the need to estimate the frequency and magnitude of rare events of poor treatment in QMRA.
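A compact sketch of the stochastic method contrasted with fixed log credits above: each barrier's log reduction is drawn from its own PDF, a rare 'poor removal' phase is mixed in with a small probability, and the draws are summed in a Monte Carlo loop to give a distribution of total treatment efficacy whose lower tail reflects rare events. All barrier names, distribution shapes and parameters below are assumptions for illustration, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000  # Monte Carlo iterations

def barrier_log_reduction(n, mu_good, sd_good, p_poor, mu_poor, sd_poor):
    """Two-phase barrier model: 'good removal' most of the time, rare 'poor removal'."""
    good = rng.normal(mu_good, sd_good, n)
    poor = rng.normal(mu_poor, sd_poor, n)
    return np.where(rng.random(n) < p_poor, poor, good)

# Assumed barriers with illustrative parameters (log10 units).
filtration = barrier_log_reduction(n, 3.0, 0.5, p_poor=0.01, mu_poor=0.5, sd_poor=0.3)
ozonation = barrier_log_reduction(n, 2.0, 0.4, p_poor=0.005, mu_poor=0.2, sd_poor=0.2)

# Total treatment efficacy per Monte Carlo iteration.
total = filtration + ozonation
print(f"Median total reduction:       {np.median(total):.1f} log")
print(f"1st percentile (rare events): {np.percentile(total, 1):.1f} log")
```

When rare events dominate the risk, it is the lower percentiles of this distribution, rather than the median, that determine the assessed level of safety.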

(Continues…)



Excerpted from "Stochastic Modelling of Drinking Water Treatment in Quantitative Microbial Risk Assessment" by Patrick W. M. H. Smeets.
Copyright © 2011 KWR Watercycle Research Institute.
Excerpted by permission of IWA Publishing.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Contents:
  • Introduction and goal of the study
  • A stochastic pathogen reduction model for full-scale treatment
  • How can the UK statutory Cryptosporidium monitoring be used for quantitative risk assessment of Cryptosporidium in drinking water?
  • Inactivation of Escherichia coli by ozone under bench-scale plug flow and full-scale hydraulic conditions
  • Improved methods for modelling drinking water treatment in quantitative microbial risk assessment: a case study of Campylobacter reduction by filtration and ozonation
  • On the variability and uncertainty in quantitative microbial risk assessment of drinking water
  • Practical applications of quantitative microbial risk assessment for water safety plans
  • General discussion
