Modeling in Medical Decision Making: A Bayesian Approach / Edition 1

Hardcover (Print)

Overview

Medical decision making has evolved in recent years, as more complex problems are being faced and addressed based on increasingly large amounts of data. In parallel, advances in computing have led to a host of new and powerful statistical tools to support decision making. Simulation-based Bayesian methods are especially promising, as they provide a unified framework for data collection, inference, and decision making. In addition, these methods are simple to interpret, and can help to address the most pressing practical and ethical concerns arising in medical decision making.
* Provides an overview of the necessary methodological background, including Bayesian inference, Monte Carlo simulation, and utility theory.
* Driven by three real applications, presented as extensively detailed case studies.
* Case studies include simplified versions of the analysis, to approach complex modelling in stages.
* Features coverage of meta-analysis, decision analysis, and comprehensive decision modeling.
* Accessible to readers with only a basic statistical knowledge.
Primarily aimed at students and practitioners of biostatistics, the book will also appeal to those working in statistics, medical informatics, evidence-based medicine, health economics, health services research, and health policy.


Editorial Reviews

From the Publisher
"…good to use as one component in a graduate course…for established statisticians and biostatisticians, the book is a good way to get up to speed…" (Journal of the American Statistical Association, March 2007)

"…strongly recommend…[it] to clinical researchers and statisticians." (Journal of Statistical Computation & Simulation, May 2004)

"...I recommend his book." (Statistics in Medicine, 28 February 2003)

"...a comprehensive presentation of topics..." (Clinical Chemistry, Vol. 49, No. 4)

"…an indispensable volume owing to the clarity of its discussion…" (Journal of Drug Assessment, Vol.6, No.4, 2003)

"...another fine practical applications book..." (Technometrics, Vol. 44, No. 4, November 2002)

"…skillfully brings together sophisticated statistical models and detailed medical applications…" (Applied Clinical Trials, June 2002)

"...surveys inferential methods…features case studies..." (SciTech Book News, Vol. 26, No. 2, June 2002)

"...useful to research students in biostatistics...a welcome addition to any undergraduate library in statistics..." (The Statistician)

From The Critics
Due to the sheer volume and complexity of information bearing on healthcare decisions, quantitative modeling is assuming a more central role in evidence-based medicine. For practitioners and students, Parmigiani (Johns Hopkins U.) aims to "help bridge the gap between simulation-based Bayesian statistical methods and their use in medical decision making." He surveys inferential methods relevant to diagnosis, genetic counseling, future patient forecasting, expected utility theory, and simulations using Monte Carlo methods. The second part features case studies applying meta-analysis, decision trees, and chronic disease modeling to such areas as breast cancer screening (the author's dissertation focus). Annotation © Book News, Inc., Portland, OR (booknews.com)

Product Details

  • ISBN-13: 9780471986089
  • Publisher: Wiley
  • Publication date: 3/20/2002
  • Series: Statistics in Practice Series
  • Edition number: 1
  • Pages: 280
  • Product dimensions: 6.28 (w) x 9.37 (h) x 0.82 (d)

Read an Excerpt

Modeling in Medical Decision Making

A Bayesian Approach
By Giovanni Parmigiani

John Wiley & Sons

ISBN: 0-471-98608-9


Chapter Four

Meta-analysis

4.1 Summary

In this chapter, we illustrate the use of Bayesian meta-analysis in developing probability distributions on the magnitude of the effects of a medical treatment. These distributions can be used informally to support decision making on patient treatment and on allocation of resources to future trials. They can also become components of a formal decision analysis, of a comprehensive decision model, or of a stochastic optimization. The chapter begins with a brief overview of meta-analysis, with pointers to the extensive, fascinating, and controversial literature. This is followed by an introductory example which looks at the efficacy of tamoxifen in adjuvant treatment of early breast cancer and serves as an introduction to some of the key features of Bayesian meta-analysis. The case study of this chapter deals with the synthesis of evidence from several clinical trials comparing the effectiveness of commonly recommended prophylactic treatments for migraine headaches. The case study is based closely on Dominici et al. (1999).

4.2 Meta-analysis

The term meta-analysis originated in psychology. Glass (1976) used it to describe 'the statistical analysis of a large collection of results from individual studies for the purpose of integrating the findings'. In medicine, the practice of formally integrating findings from different studies can be traced back at least to the 1950s (Beecher, 1955). Today, meta-analysis has become a key component of evidence-based medicine. A MEDLINE search of articles published in 1997 yielded 775 articles with the term 'meta-analysis' in the title, abstract or keywords. This is the result of the growth in the number of clinical trials (approximately 8000 new clinical trials begin every year, according to Olkin 1995a) and of the desire to use accruing evidence as early as possible in improving health care decisions. Meta-analysis is also becoming widely applied beyond randomized clinical trials, for example in epidemiological research. Interesting discussions of the role of meta-analysis in clinical research and decision making can be found in L'Abbé et al. (1987) and Gelber and Goldhirsch (1991).

Moher and Olkin (1995) summarize the reasons for the success of meta-analysis:

Why the dramatic increase in the number of published meta-analyses? Two examples may help address the question. A meta-analysis published in 1990 described the efficacy of corticosteroids given to mothers expected to deliver prematurely (Crowley et al., 1990). The results of the meta-analysis indicated that corticosteroids significantly reduced morbidity and mortality of these infants. This analysis convincingly showed that such evidence was available at least a decade earlier ([i.e.] 1980). Had a meta-analysis been conducted when the evidence became available, much unnecessary suffering might have been avoided.

In an article on meta-analyses published in JAMA in 1992 (Antman et al., 1992), the research team showed that textbook recommendations for the treatment of patients with suspected myocardial infarction often lagged behind the empirical evidence by as much as 10 years. The group also noted that, at times, the opinion of experts writing these books was in sharp contrast to the empirical evidence.

In general, meta-analyses can offer important advantages over more traditional narrative approaches to the overview of scientific evidence (Chambers and Altman, 1994). These advantages result primarily from the systematic, explicit, and quantitative nature of the synthesis provided by meta-analysis; from the possibility of assessing uncertainty about the results of the synthesis; and from the increase in sample sizes deriving from the combination of studies. From the point of view of the philosophy of science, meta-analysis is a novel paradigm for scientific investigation, reflecting the scientific community's adaptation to the information explosion of the last few decades. Goodman (1998) hails meta-analysis as 'one of the most important and controversial developments in the history of science'.

As with many promising new paradigms, meta-analysis has given rise to misuses and controversy. Criticisms of meta-analysis are both conceptual and methodological. Randomized clinical trials are modeled on controlled experimentation and are generally agreed to address in a satisfactory way the question of the causal relationship between treatment and outcomes. Difficulties may arise: the sample size can be small, the measurable outcomes may be problematic, the protocol may be difficult to implement. But the paradigm is thought to be sound. The design of a meta-analysis study is different from that of a randomized clinical trial. Some authors (such as Erwin, 1984) consider these differences to be sufficient to question the possibility of inferring causal relations from meta-analysis of clinical trials (even though it is possible to infer such relations from the individual trials). A related point is made by Charlton (1996), who writes that 'the prestige of meta-analysis is based upon a false model of scientific practice'; meta-analysis, in his view, cannot be considered a hypothesis testing activity, and should be confined to effect size estimation.

Another common criticism is epitomized by the slogan 'many bad studies don't make a good one'. The argument is this: for a given clinical question there either is a single, critical, well-conducted study, with a large enough sample, that can provide guidance to physicians, or there is not. If there is not, that is often because existing trials are conflicting, diverse, or not sufficiently well conducted. Meta-analysis, it is argued, should not be used to attempt to settle a clinical question in the presence of vexing primary data problems.

Methodologically, critical issues are search strategies, publication bias, and study heterogeneity. Some meta-analyses are conducted by gathering primary data from the study investigators (Simes, 1986; Early Breast Cancer Trialists Collaborative Group (EBCTCG), 1990). This is an appropriate and effective strategy, but it is not always feasible. Most meta-analyses are based on published results. The key elements of a meta-analysis of published results are the criteria used for searching for, evaluating, and selecting articles. Concerns about the quality of these criteria in applied research are common (Cook et al., 1995; Sacks et al., 1987; Chambers and Altman, 1994). Guidelines for rigorous procedural methodology have been put forward by several authors (Simes, 1986; Deeks et al., 1997) and by ad hoc working groups in clinical trials (Moher et al., 1999) and epidemiology (Stroup et al., 2000).

Publication bias arises because scientific journals selectively publish studies with statistically significant results. For example, in 1986-87 about 76% of the articles published in the New England Journal of Medicine used statistical tests; 88% of these tests rejected the null hypothesis. These proportions are even higher in the experimental psychology literature (Sterling et al., 1995). It is clear that the results of published studies are not a representative sample of the results of all studies, and that this may systematically bias the result of a meta-analysis which considers only published results; for example, a small treatment effect is likely to become artificially magnified. Diagnostics for the presence of publication bias are based on observing a relationship between sample size and effect size (Duval and Tweedie, 2000).
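The selection effect described above can be illustrated with a small simulation (a hedged sketch with hypothetical numbers, not an analysis from the book): if journals tend to publish only results that cross a significance threshold, the average published effect is inflated well above the true effect.

```python
# Hypothetical illustration of publication bias: studies estimate a small true
# effect with noise, but only "significant" results (z > 1.96) get published.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1      # small true treatment effect (standardized, hypothetical)
N_STUDIES = 2000       # number of simulated studies
SAMPLE_SE = 0.15       # common standard error of each study's estimate

all_effects, published = [], []
for _ in range(N_STUDIES):
    est = random.gauss(TRUE_EFFECT, SAMPLE_SE)
    all_effects.append(est)
    if est / SAMPLE_SE > 1.96:      # crude "publish only if significant" rule
        published.append(est)

pooled_all = statistics.mean(all_effects)
pooled_published = statistics.mean(published)
print(f"mean over all studies:    {pooled_all:.3f}")
print(f"mean over published only: {pooled_published:.3f}")
```

Because every published estimate must exceed 1.96 standard errors, the published-only average is several times the true effect, which is exactly the artificial magnification of small effects noted above.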

Because the aims of a meta-analysis are typically broader than those of the individual trials being reviewed, it is likely that any sizable meta-analysis will have several sources of between-trial heterogeneity. These include differences in specific treatment regimens, patient eligibility criteria, baseline disease severity, and outcomes. Complete homogeneity is not necessarily a desirable goal. Moher and Olkin (1995) note that 'sometimes, too much homogeneity of studies will stifle generalizations to a larger population. On the other hand, too much study heterogeneity will weaken the results. Thus, there has to be an understanding of the sources of the heterogeneity.' Similar views are expressed by Thompson (1994): 'discussion of heterogeneity in meta-analysis affects whether it is reasonable to believe in one overall estimate that applies to all the studies encompassed, implied by the so called fixed effect method of statistical analysis. Undue reliance may have been put on this approach in the past, causing overly simplistic and overly dogmatic interpretation.' Statistical models and techniques for quantifying heterogeneity and for developing and interpreting summary estimates are illustrated extensively in the rest of this chapter.

There are also issues surrounding the quality of reporting of meta-analyses. Jadad and McQuay (1996) carried out a systematic review of methodology in 74 meta-analyses of analgesic interventions. They found that: 'Ninety percent of the meta-analyses had methodological flaws that could limit their validity. The main deficiencies were lack of information on methods to retrieve and to assess the validity of primary studies and lack of data on the design of the primary studies'. They also found that 'meta-analyses of low quality produced significantly more positive conclusions'. Sacks et al. (1987) identify six content areas thought to be important in the conduct and reporting of meta-analyses: study design, combinability, control of bias, statistical analysis, sensitivity analysis, and problems of applicability. Moher and Olkin (1995) review the issues and lay the groundwork for developing standards for the reporting of meta-analyses.

In this chapter we will discuss some statistical tools for meta-analysis. What is their role amidst this controversy? I will take a pragmatic view: there are decisions to be made today, and we should make them using the best available evidence. The limitations of mathematical modeling as a means of synthesis can be serious, but a well-conducted analysis can often limit publication bias, incorporate heterogeneity, and provide practical guidance.

4.3 Bayesian meta-analysis

Statistical methods have been developed for meta-analysis and are continuously being refined in response to the increasing demand for meta-analysis and the increasing complexity of the meta-analyses performed. Olkin (1995b) provides a historical perspective; Sutton et al. (1998) provide a comprehensive and up-to-date review; Hasselblad and McCrory (1995) give a more concise practical guide; Stangl and Berry (2000) provide a collection of state-of-the-art applications. A classic and beautiful book on the subject is that by Hedges and Olkin (1985). Reviews of software include Sutton et al. (2000) and Normand (1995).

Here we will be concerned with Bayesian methods, whose use is well established in the statistical literature (DuMouchel and Harris, 1983; Berry, 1990; DuMouchel, 1990; Eddy et al., 1990) and is gaining acceptance in the medical literature as well (Baraff et al., 1993; Berry, 1998; Biggerstaff et al., 1994; Brophy and Joseph, 1995; Sorenson and Pace, 1992; Tweedie et al., 1996). Many interesting situations can now be modeled using software packages such as BUGS. Detailed applications of meta-analysis are illustrated by Smith et al. (1995; 2000).

Interest in Bayesian meta-analysis is motivated by several desiderata:

a) providing decision makers with summaries of evidence in the form of probability distributions, given all available evidence. This input is appropriate for a subsequent decision model. For example, Pallay and Berry (1999) demonstrate this using Bayesian meta-analysis to assess the worthiness of a phase III trial.

b) developing approaches for modeling trial heterogeneity, and devising summary measures that are relevant to decision making in the presence of trial heterogeneity. For example, it is important to address the question of the probability that a patient receiving drug A survives longer than a patient receiving drug B for a patient from a future, or unobserved, or hypothetical trial. This is a question of prediction. Bayesian random effects and hierarchical models (Lindley and Smith, 1972; Raudenbush and Bryk, 1985; Morris and Normand, 1992; Carlin and Louis, 2000) provide a flexible and practical framework for developing predictive models.
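The predictive question in (b) can be sketched with a short Monte Carlo computation. This is an illustrative assumption-laden stand-in, not the book's model: the normal posterior draws below substitute for real MCMC output for the overall mean effect mu and between-trial standard deviation tau of a random-effects model.

```python
# Sketch: probability that a *future, unobserved* trial favours treatment A,
# given (simulated stand-in) posterior draws of mu and tau from a Bayesian
# random-effects meta-analysis. All numbers are hypothetical.
import random

random.seed(7)
N_DRAWS = 20000

# stand-ins for MCMC output: posterior of mu around 0.30, tau around 0.20
mu_draws = [random.gauss(0.30, 0.05) for _ in range(N_DRAWS)]
tau_draws = [abs(random.gauss(0.20, 0.03)) for _ in range(N_DRAWS)]

# predictive draw of the effect in a new trial: theta_new ~ N(mu, tau)
theta_new = [random.gauss(m, t) for m, t in zip(mu_draws, tau_draws)]

p_favours_A = sum(th > 0 for th in theta_new) / N_DRAWS
print(f"P(new-trial effect > 0) = {p_favours_A:.3f}")
```

The key point is that the predictive draw adds between-trial variability (tau) on top of posterior uncertainty about mu, so the probability of a favourable effect in a future trial is more conservative than a statement about the average effect alone.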

c) modeling unobserved aspects of the data generation and reporting processes. Examples include modeling publication bias (Silliman, 1997; Givens et al., 1997), missing covariates (Lambert et al., 1997), and partially reported results (Dominici et al., 1999).

d) realistically assessing uncertainty. Bayesian simulation-based methods do not need to rely on asymptotic approximations and can straightforwardly accommodate uncertainty about nuisance parameters, often leading to more conservative and accurate statements about uncertainty in the overall conclusions. For example, Carlin (1992) finds that, in the meta-analysis of 2 × 2 tables, Bayesian estimates of parameter uncertainty are more accurate than the corresponding empirical Bayes estimates.
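Point (d) can be illustrated for a single hypothetical 2 × 2 table (this is a minimal sketch with invented counts, not the Carlin (1992) analysis): with independent uniform priors on the two event probabilities, posterior simulation gives a credible interval for the log odds ratio without any asymptotic approximation.

```python
# Exact simulation-based posterior for a log odds ratio from one 2x2 table.
# Independent Beta posteriors (uniform priors); counts are hypothetical.
import math
import random

random.seed(3)

# hypothetical trial: events / totals in treatment and control arms
ev_t, n_t = 12, 100
ev_c, n_c = 24, 100

N = 50000
log_or = []
for _ in range(N):
    p_t = random.betavariate(1 + ev_t, 1 + n_t - ev_t)  # treated-arm draw
    p_c = random.betavariate(1 + ev_c, 1 + n_c - ev_c)  # control-arm draw
    log_or.append(math.log(p_t / (1 - p_t)) - math.log(p_c / (1 - p_c)))

log_or.sort()
lo, hi = log_or[int(0.025 * N)], log_or[int(0.975 * N)]
print(f"posterior median log-OR: {log_or[N // 2]:.2f}")
print(f"95% credible interval:   ({lo:.2f}, {hi:.2f})")
```

No normal approximation to the log odds ratio is invoked at any point; uncertainty in both arm-level probabilities propagates directly into the interval, which is the sense in which simulation-based Bayesian summaries can be more accurate in small samples.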

The limitations of Bayesian meta-analysis are related primarily to the added complexity of implementation. Depending on the application and the state-of-the-art in the field, elicitation of prior information may also become complex or controversial.

4.4 Tamoxifen in early breast cancer

4.4.1 Background

To illustrate some of the interesting features of Bayesian meta-analysis in a simple and common situation, let us consider a case in which each study reports a 2 × 2 table of successes and failures for both a treatment and a control group. This situation is exemplified by the data of Table 4.1, taken from the overview of clinical trials of adjuvant tamoxifen for women with early breast cancer carried out by the EBCTCG (1998a). Since the mid-1980s this group has been responsible for thorough and influential reviews of randomized clinical trials of all treatments for early breast cancer. Their analysis is based on patient-level data obtained from study investigators, rather than on published summaries. As a result it is likely to be very robust to publication bias. Extensive effort has been devoted to the grouping of trial results according to relevant clinical characteristics, such as type and duration of treatment, dose of drug, use of other therapies in conjunction with tamoxifen, patient prognostic factors, and type of outcome (recurrence or death). Data collection and data checking procedures are described in detail in EBCTCG (1990; 1998a).

The data of Table 4.1

Continues...


Excerpted from Modeling in Medical Decision Making by Giovanni Parmigiani Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


Table of Contents

Preface.

PART I: METHODS.

1. Inference.

Summary.

Medical Diagnosis.

Genetic Counseling.

Estimating sensitivity and specificity.

Chronic disease modeling.

2. Decision making.

Summary.

Foundations of expected utility theory.

Measuring the value of avoiding a major stroke.

Decision making in health care.

Cost-effectiveness analyses in the μ SPPM.

Statistical decision problems.

3. Simulation.

Summary.

Inference via simulation.

Prediction and expected utility via simulation.

Sensitivity analysis via simulation.

Searching for strategies via simulation.

Part II: CASE STUDIES.

4. Meta-analysis.

Summary.

Meta-analysis.

Bayesian meta-analysis.

Tamoxifen in early breast cancer.

Combined studies with continuous and dichotomous responses.

Migraine headache.

5. Decision trees.

Summary.

Axillary lymph node dissection in early breast cancer.

A simple decision tree.

A more complete decision tree for ALND.

6. Chronic disease modeling.

Summary.

Model overview.

Natural history model.

Modeling the effects of screening.

Comparing screening schedules.

Model critique.

Optimizing screening schedule.

References.

Index.

