The Handbook for Evidence-Based Practice in Communication Disorders / Edition 1

by Christine A. Dollaghan

ISBN-10: 1557668701
ISBN-13: 9781557668707
Pub. Date: 07/01/2007
Publisher: Brookes Publishing

Overview

With this landmark textbook, speech-language pathologists will learn to apply current best evidence as they make critical decisions about the care of each individual they serve. The first text to cover this cutting-edge topic in the communication disorders field, this book introduces SLPs to the principles and process of evidence-based practice, thoroughly covering its three primary components: "external" evidence from systematic research, "internal" evidence from clinical practice, and evidence concerning patient preferences. SLPs will get the in-depth guidance they need to

  • construct the right questions about best evidence

  • find the reliable information they need to answer these questions

  • evaluate the validity of both internal and external evidence

  • weigh the importance of empirical evidence, whether from a research study or from an individual patient

  • help patients fully understand clinical options and express their preferences

  • conduct systematic, critical appraisals of treatment evidence, diagnosis and screening evidence, and evidence from systematic reviews or meta-analyses

Developed by Christine A. Dollaghan, one of the most highly respected researchers in the field of language acquisition and disorders, this text makes complex concepts understandable with its clear, reader-friendly language, vivid step-by-step examples of key processes, and illuminating figures and tables.

SLPs will come away with a solid, practical understanding of evidence-based practice—knowledge they'll use throughout their careers to make sound clinical decisions about the screening, diagnosis, and treatment of communication disorders.


Product Details

ISBN-13: 9781557668707
Publisher: Brookes Publishing
Publication date: 07/01/2007
Edition description: New Edition
Pages: 184
Product dimensions: 6.00(w) x 9.00(h) x 0.50(d)

About the Author

Christine Dollaghan, Ph.D., Professor, Callier Center for Communication Disorders, University of Texas at Dallas, 1966 Inwood Road, A.128, Dallas, TX 75235
Christine Dollaghan is a professor at the University of Texas at Dallas. Her research interests include child language development and disorders, the validity of diagnostic measures, and the latent structure of diagnostic categories. Her publications include The Handbook for Evidence-Based Practice in Communication Disorders (Paul H. Brookes Publishing Co., 2007). She was awarded the Honors of the American Speech-Language-Hearing Association in 2012.


Read an Excerpt

Excerpted from Chapter 1 of The Handbook for EVIDENCE-BASED PRACTICE in COMMUNICATION DISORDERS, by CHRISTINE A. DOLLAGHAN, PH.D., CCC-SLP

Copyright © 2007 by Paul H. Brookes Publishing Co. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

EVIDENCE-BASED PRACTICE: AN EXPANDED DEFINITION

It sometimes seems as though evidence-based practice (EBP) is taking over the world. EBP is a key topic of discussion (and controversy) in fields as diverse as clinical laboratory science (e.g., McQueen, 2001), nursing (e.g., Rutledge, 2005), physical medicine and rehabilitation (e.g., Cicerone, 2005), occupational therapy (e.g., Tse, Lloyd, Penman, King, & Bassett, 2004), psychology (e.g., Wampold, Lichtenberg, & Waehler, 2005), psychiatry (e.g., Hamilton, 2005), and education (e.g., Odom, Brantlinger, Gersten, Horner, Thompson, & Harris, 2005). Sessions on EBP began appearing on the program of the annual convention of the American Speech-Language-Hearing Association (ASHA) in 1999, and the move toward EBP has been endorsed in an ASHA technical report (ASHA, 2004) and position statement (ASHA, 2005a). An evidence-based orientation can even be found in a book about the success that resulted for a baseball team when prospective players were evaluated with objective performance measures in addition to the subjective impressions of baseball scouts (Lewis, 2003).

Its rapid spread notwithstanding, EBP has generated negative as well as positive reactions. Most criticisms can be traced to several problems with the way the phrase has come to be understood. One problem is that the EBP "movement" seems to imply that until EBP came along, practitioners were basing their clinical decisions on something other than evidence, which is simply not true. In addition, most people seem to think that EBP involves only research evidence, which also is not true. Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) originally defined evidence-based medicine as "...the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients... [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research" (p. 71). Both research evidence and clinical expertise were part of this original definition, and a third component (the patient's perspective) was added to the subsequent definition of EBP as "...the integration of best research evidence with clinical expertise and patient values" (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000, p. 1). Despite the inclusion of clinical expertise and patient values in definitions of EBP, it is clear that the emphasis on scientific evidence has overshadowed the other two components.

As a way of bringing all three components into focus, I'd like to suggest that we think of EBP as requiring not one but three kinds of evidence, and the abbreviation E3BP will be used to help keep all three types of evidence in mind. Accordingly, the definitions discussed previously will be adapted (Sackett et al., 1996, 2000), and in this handbook we will define E3BP as the conscientious, explicit, and judicious integration of 1) best available external evidence from systematic research, 2) best available evidence internal to clinical practice, and 3) best available evidence concerning the preferences of a fully informed patient.

This definition will allow us to circumvent several criticisms and confusions that have bedeviled the previous concept of evidence-based practice and its progenitor, evidence-based medicine (e.g., Cohen, Stavri, & Hersh, 2004; Rees, 2000). For one thing, the definition of E3BP clearly distinguishes between external and internal sources of strong evidence and highlights the importance of both for clinical decision making. Evidence internal to clinical practice with a particular patient is an important complement to external evidence from systematic research because although high-quality external evidence can reveal valuable information about average patterns of performance across groups of patients, its applicability to an individual patient is unknown (Bohart, 2005). Conversely, high-quality internal evidence from clinical practice with an individual patient is surely relevant to making decisions about that patient (Guyatt et al., 2000), although its applicability to other patients or groups of patients is likewise unknown.

Those who work in behavioral sciences, and especially in communication sciences and disorders, have an advantage when it comes to obtaining strong internal evidence because of experience with well-developed single-subject methodologies (e.g., Horner, Carr, Halle, McGee, Odom, & Wolery, 2005) for measuring change in individual patients. Although the need for such methods is clear, Sackett et al. (2000) acknowledged that the use of such methods in medicine is in its infancy. The expanded definition emphasizes that strong evidence internal to clinical practice can and must be incorporated to make E3BP a reality in communication disorders.

Finally, E3BP also requires strong evidence about our patients' beliefs, preferences, hopes, and fears (e.g., McMurtry & Bultz, 2005) concerning the clinical options that face them. To paraphrase Sullivan (2003, p. 1595), "facts known only by practitioners need to be supplemented by values known only by patients." Incorporating this third type of evidence requires that we develop a shared understanding of our clients' perspectives, as well as ensuring that clients comprehend their clinical alternatives so that they can express meaningful preferences.

You may have noticed that the expanded definition of E3BP does not refer specifically to clinical expertise, which was a key component of the original definitions by Sackett et al. (1996, 2000). That is because in my view clinical expertise is not a separate piece of the E3BP puzzle but rather the glue by which the best available evidence of all three kinds is integrated in providing optimal clinical care.

PRECONDITIONS TO E3BP

According to this expanded definition, successful E3BP has three preconditions:

  1. Uncertainty about whether a clinical action is optimal for a client
  2. Professional integrity (comprising honesty, respectfulness, awareness of one's biases, and openness to the need to change one's mind)
  3. Application of four principles that underpin clinical ethical reasoning: beneficence (maximizing benefit), nonmaleficence (minimizing harm), autonomy (self-determination), and justice (fairness)

Thus, E3BP requires honest doubt about a clinical issue, awareness of one's own biases, a respect for other positions, a willingness to let strong evidence alter what is already known, and constant mindfulness of ethical responsibilities to patients (e.g., Kaldjian, Weir, & Duffy, 2005; Miller, Rosenstein, & DeRenzo, 1998).

The crucial role of uncertainty in E3BP is worth emphasizing. Seeking evidence not in an effort to reduce honest uncertainty but rather in an effort to prove what one already believes is contrary to the fundamental thrust of E3BP. It is also likely to be a waste of time, because any contradictory evidence will be ignored or discounted in favor of evidence that supports one's point of view.

Awareness of the powerful and distorting effects of subjective bias has been a major impetus for the evidence-based orientation. All people have biases or preconceived notions that frame, organize, and often simplify their perception of the world. Francis Bacon was one of the first to address the ways in which biases can weaken the ability to seek, recognize, and acknowledge strong evidence.

"The human understanding, once it has adopted opinions, either because they were already accepted and believed, or because it likes them, draws everything else to support and agree with them. And though it may meet a greater number and weight of contrary instances, it will, with great and harmful prejudice, ignore or condemn or exclude them by introducing some distinction, in order that the authority of those earlier assumptions may remain intact and unharmed." (1620, cited in Meehl, 1997a, p. 94)

In short, our strong preference for what we already believe to be true makes us poor judges of whether it actually is true.

How important is it to test the validity of subjective beliefs? That depends at least partly on their consequences. Testing the validity of a personal belief (say, in the existence of extraterrestrials) is not important if that belief has no impact on other people. But when a belief has the potential to harm other people, testing its validity is imperative. Meehl (1997a) provided an example in his description of the Malleus Maleficarum, a book published in 1487 that addressed an important diagnostic problem at the time: how to identify witches. Meehl noted that its authors were respected experts who grounded their detailed list of diagnostic indicators firmly in what was known at that time—that witches existed, that they were dangerous, and that they needed to be identified in order to protect innocent people. Despite the authors' good intentions, of course, their work actually led to enormous harm for many innocent people who were diagnosed as witches and then treated by drowning, burning at the stake, and so forth. Only when these widely accepted, expert views about witches were tested using more objective criteria did belief in them gradually wane.

As many have noted (e.g., Barrett-Connor, 2002; Meehl, 1997a; Sackett, Haynes, Guyatt, & Tugwell, 1991), it is not difficult to find more recent examples of clinical recommendations that, although based on the beliefs and good intentions of experts, turned out to be ineffective or even harmful when subjected to rigorous scientific testing. The newspaper this week probably contained at least one such report. Growing awareness of the limitations of subjective beliefs, even from experts, as a basis for decisions about patients' lives explains why external evidence from scientific research is one of the three cornerstones of EBP. Although complete objectivity may be an unrealistic goal (e.g., Hegelund, 2005), the idea that evidence should be as free as possible from the distorting effects of an observer's subjective biases or expectations is one of the most important contributions of the evidence-based perspective.

The significance of avoiding subjective biases, especially when they might result in harm, also is related to four principles of clinical ethical reasoning (e.g., Kaldjian, Weir, & Duffy, 2005). The principle of beneficence obligates us to practice in a manner that is maximally beneficial to our patients. The principle of nonmaleficence obligates us to avoid harming our patients. The principle of autonomy (a relatively recent addition in the literature on medical ethics) obliges us to respect the right of patients to determine what happens to them based on a full understanding of the risks and benefits of the options that face them. The principle of justice obliges us to practice in a fair, nondiscriminatory manner. You might be familiar with these principles in connection with the protection of human subjects in research because balancing potential benefits and harms is a topic of well-publicized discussion when participants are harmed by research. However, these same principles apply to the protection of patients during clinical practice, and they can be particularly helpful in situations in which the best available evidence from external research, from clinical practice, and from a fully informed patient does not converge neatly on the same clinical decision.

ORIGINS AND EVOLUTION OF EVIDENCE-BASED PRACTICE

With the definition and preconditions of E3BP in mind, it is time for some historical context. How and why did the evidence-based orientation originate? The enormous increase in the research literature in medicine and other fields was one major impetus (Sackett et al., 2000). For example, a recent search for information on communication disorders in PubMed, an electronic database, yielded a total of nearly 40,000 citations, and about 8,000 of these were to articles published in the past 5 years. The sheer number of publications presents practitioners with a serious challenge in finding and using the information they need in making decisions about their patients' health. One of the original goals of evidence-based medicine was to assist practitioners in rapidly finding and using the wheat available in the literature and spending a minimum amount of time with the chaff.
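
The scale of that literature is easy to verify firsthand today. As a minimal sketch (mine, not the book's), the following Python snippet asks NCBI's public E-utilities service how many PubMed citations match a search term; the query string and the date range are illustrative assumptions, not the author's original search.

    # Minimal sketch: count PubMed citations via NCBI E-utilities (esearch).
    # The search term and dates below are illustrative assumptions.
    import json
    import urllib.parse
    import urllib.request

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term, mindate=None, maxdate=None):
        """Return the number of PubMed citations matching `term`."""
        params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": "0"}
        if mindate and maxdate:
            # Restrict by publication date, mirroring "the past 5 years."
            params.update({"datetype": "pdat", "mindate": mindate, "maxdate": maxdate})
        url = EUTILS + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    total = pubmed_count('"communication disorders"')
    recent = pubmed_count('"communication disorders"', mindate="2002", maxdate="2007")
    print("All years:", total, "| 2002-2007:", recent)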

The evidence-based framework has also been propelled by the lack of unanimity that is a common feature of the research literature on any particular clinical question. Because findings, conclusions, and clinical implications from different studies often vary, clinicians needed a systematic, defensible, and practicable process for deciding which external evidence should carry the most weight in their decision making. This need has resulted in more than 100 systems for rating external evidence (Lohr, 2004). All such evidence-ranking schemes involve a set of criteria for evaluating the quality of evidence so that the best current external evidence can be identified readily, consistently, and transparently.

Rating External Evidence: The Oxford Hierarchy

One of the first, most comprehensive, and most influential systems for rating evidence is the Oxford Centre for Evidence-Based Medicine Levels of Evidence (Phillips et al., 2001; http://www.cebm.net/levels_of_evidence.asp). One of the strengths of the Oxford system is its explicit acknowledgment that no single set of criteria applies to every kind of evidence; different rating criteria are needed depending on whether evidence concerns treatment, prognosis, diagnosis/screening, differential diagnosis, or health care economics.

The working group responsible for the Oxford system characterized it as a work in progress and acknowledged one weakness of the system—its emphasis on the internal validity of external evidence and particularly on research design as the primary basis for assigning evidence to a level. As shown in Table 1.1, for example, evidence concerning therapy that was derived from expert opinion without critical appraisal, or by induction or inference alone, is ranked lowest (Level 5) because of its potential to be affected by subjective bias and other sources of error. At the other end of the hierarchy (Level 1a) is evidence that has been found consistently across multiple studies conducted so as to minimize the potential for results to be contaminated by subjective bias or other sources of error.

Although these "quality extremes" are easy to recognize, most empirical evidence falls somewhere in between, and the characteristics that distinguish among the intermediate levels are not always transparent. Research design (i.e., whether a study is a randomized controlled trial, a cohort study, a case study, and so forth) is the most salient factor that differentiates the levels, but qualifiers such as good quality, poor quality, and well conducted are also used to distinguish among levels in the Oxford system. Many of the additional quality indicators are discussed in the evidence-based medicine literature, but as of this writing they have not been incorporated explicitly into the original Oxford system. Accordingly, many subsequent evidence rating systems have failed to include factors other than research design in their criteria for evaluating evidence. In fact, research design is so heavily emphasized in extant evidence-rating schemes that it is easy to make the mistake of thinking that certain kinds of designs guarantee high-quality evidence. In reality, studies with highly ranked designs can yield invalid or unimportant evidence, just as studies with less highly rated designs can provide crucial evidence. It is increasingly clear that "even within randomized controlled trials, quality is an elusive metric" (Berlin & Rennie, 1999, p. 1083) and one that may not be reducible to a single numeric rating (Glasziou, Vandenbroucke, & Chalmers, 2004).

Later in this handbook you will learn how to weigh several dimensions of quality, not just research design, in deciding which external evidence should influence your clinical decision making. For now, two points about evidence-level schemes need to be emphasized. First, assigning evidence to a level requires knowing more than the type of research design employed in the study from which the evidence came. Second, characterizing evidence with a single numerical rating too often encourages an all-or-none mentality in which evidence with a rank lower than the top is completely discounted and evidence from a study at the highest rank is assumed to be definitive. As we will see, neither one of these perspectives is accurate.
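
To make the idea of multiple quality dimensions concrete, here is a small hypothetical sketch (mine, not the author's and not the Oxford criteria) of an appraisal record that keeps research design and other quality indicators separate instead of collapsing them into one number; every field name and the example study are illustrative assumptions.

    # Hypothetical appraisal record: several quality dimensions are preserved
    # rather than being reduced to a single numeric rating. Field names and
    # the example study are illustrative assumptions, not the Oxford criteria.
    from dataclasses import dataclass

    @dataclass
    class Appraisal:
        study: str
        design: str             # e.g., "randomized controlled trial"
        randomization_sound: bool
        blinding_adequate: bool
        attrition_low: bool

        def concerns(self):
            """Name the quality dimensions on which the study falls short."""
            checks = {
                "randomization": self.randomization_sound,
                "blinding": self.blinding_adequate,
                "attrition": self.attrition_low,
            }
            return [name for name, ok in checks.items() if not ok]

    # Even a top-ranked design can leave quality concerns on other dimensions:
    rct = Appraisal("a hypothetical RCT", "randomized controlled trial",
                    randomization_sound=True, blinding_adequate=False,
                    attrition_low=False)
    print(rct.concerns())  # -> ['blinding', 'attrition']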

This is a good place to clarify another point of confusion about evidence-based practice: it does not require or depend on clinical practice guidelines (CPGs), which have been defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" (Institute of Medicine, 1990, p. 38). CPGs ideally result from the efforts of a broadly representative and skilled group of individuals who conduct a comprehensive literature search for evidence relevant to answering one or more well-formulated questions about a particular clinical condition, conduct independent critical appraisals of the existing literature using well-specified criteria, summarize the strength of the evidence concerning the benefits and harms of specific courses of clinical action, and synthesize their findings into a set of recommendations reflecting the strength of the scientific evidence for or against the alternative courses of action. Systematic reviews and meta-analyses (which will be discussed later) also review, critique, and synthesize the literature on a topic, but unlike CPGs, they do not reflect the additional step of developing formal recommendations for clinical actions based on the quality and quantity of the scientific evidence.

CPGs are most useful when they are based on a large number of high-quality studies, when they are developed in a way that minimizes the potential impact of subjective bias, and when their transitory nature and the need for frequent updating to incorporate new evidence are acknowledged. At present, there are a number of examples in which CPGs on a single topic, developed by different review groups, yield conflicting views of the evidence and contradictory recommendations to practitioners. Thus, CPGs must also be appraised critically with the themes of validity and importance in mind. To this end, a common set of quality indicators for CPGs has been proposed (Cluzeau et al., 2003).

In short, carefully conducted and current CPGs may be helpful with respect to the external research component of E3BP, but E3BP may lead to clinical decisions that differ from those recommended in a CPG. Again, a CPG should not be viewed as dictating clinical care but rather as a potential source of external evidence to be considered when using E3BP to make decisions about clinical care.

E3BP: A REALITY CHECK

We face myriad clinical decisions about unique patients every day in practice settings that likewise have unique requirements and resources. The calculus of integrating all of these variables with three kinds of best evidence for clinical practice sounds daunting if not completely impossible. Does E3BP demand that we have the best available external, internal, and patient-preference evidence to back up every clinical decision we make for every patient we see? If not, what are the circumstances under which we should view E3BP not just as a noble ideal but rather as something more like an ethical mandate?

These questions do not have easy answers, but the notions of honest (unbiased) uncertainty and clinical ethical reasoning provide a way to approach them. For example, there are many clinical decisions for which high-quality external evidence is not available. Although estimates vary, studies suggest that even in medicine, where strong epidemiologic research methods have been employed for more than 30 years, less than half of treatment decisions can be supported by the highest level of external evidence (Djulbegovic et al., 1999; Michaud, McGowan, van der Jagt, Wells, & Tugwell, 1998). There are probably just as many clinical actions for which external, internal, and patient preference evidence is equivocal—that is, no one clinical approach can be shown to be substantially superior to its alternatives. In such cases, competent and ethically responsible clinicians can readily justify decisions based on their own education and experience in conjunction with the progress and positive reactions of their patients. It is when honest and unbiased clinicians recognize that there is cause for uncertainty about the optimal clinical decision that E3BP provides a very helpful tool.

Unfortunately, uncertainty is not a perfect guide to deciding when to use E3BP in a more formal or explicit manner because everyone has had the experience of being 100% certain of something that was later found to be wrong. Barrett-Connor (2002) paraphrased Pickering (1964) on the essential tension between certainty and questioning that faces clinical scientists.

"If you are a clinician, you must believe that you know what will help your patient; otherwise, you cannot counsel, you cannot prescribe. If you are a scientist, however, you must be uncertain—a scientist who no longer asks questions is a bad scientist." (p. 30)

If clinicians recognize the need to review their patients' progress and their own performance in an unbiased fashion, to update their knowledge on a regular basis, and to question their subjective beliefs and assumptions, E3BP provides a way to balance the roles of clinician and scientist.

The remainder of this handbook is organized as follows. Chapter 2 discusses how to turn uncertainty into the kinds of questions needed for productive and efficient evidence searches. Chapter 3 concerns searching for external evidence. Chapters 4 and 5 provide overviews of validity and importance, two of the most important criteria for determining whether evidence is sufficiently credible and strong that it could alter one's current clinical approach (e.g., Ebell, 1998). Chapters 6 and 7 show how to apply these ideas in appraising evidence from individual studies of treatment and diagnosis, respectively, and Chapter 8 describes how to evaluate evidence reports that synthesize findings from multiple studies. Chapter 9 and Chapter 10 discuss the validity and importance of evidence from clinical practice and evidence concerning patient preferences, respectively. And finally, Chapter 11 discusses what might lie beyond evidence–based practice in the future of communication disorders.

Table of Contents

About the Author
Preface
Acknowledgments

  1. Introduction to Evidence-Based Practice

  2. Asking Questions About Evidence

  3. Finding External Evidence

  4. Validity of Evidence: An Overview

  5. Importance of Evidence: An Overview

  6. Appraising Treatment Evidence

  7. Appraising Diagnostic Evidence

  8. Appraising Systematic Reviews and Meta-Analyses

  9. Appraising Patient/Practice Evidence

  10. Appraising Evidence on Patient Preferences

  11. A Prognosis for E3BP in Communication Disorders

References

Appendix A CATE: Critical Appraisal of Treatment Evidence
Appendix B CADE: Critical Appraisal of Diagnostic Evidence
Appendix C CASM: Critical Appraisal of Systematic Review or Meta-Analysis
Appendix D CAPE: Checklist for Appraising Patient/Practice Evidence
Appendix E CAPP: Checklist for Appraising Evidence on Patient Preferences

Index