Tracking Mental Health Outcomes: A Therapist's Guide to Measuring Client Progress, Analyzing Data, and Improving Your Practice (Edition 1, Paperback)
A complete, step-by-step guide to tracking and documenting treatment outcomes
Outcomes assessment has become an increasingly critical component of contemporary mental health practice, yet most therapists receive little or no training in accepted outcomes assessment and documentation methods. Tracking Mental Health Outcomes fills this gap, providing step-by-step guidance on choosing the best outcomes-tracking methods and instruments for your practice. You'll see how to integrate them into everyday clinical procedures and use the data they supply to improve the quality of care you provide as well as fully comply with insurance company and regulatory agency requirements.
An indispensable working resource for mental health professionals, Tracking Mental Health Outcomes:
• Describes both intraclient and normative approaches to outcomes assessment and how to integrate them into your practice
• Uses DSM-IV™ as the standard reference point for assessing outcomes
• Provides clear-cut examples of third-party payer requirements
• Describes commercially available assessment instruments and how to use them
• Features case examples illustrating how to perform and document outcomes assessment, from initial intake to termination
• Supplies blank forms for recording and tracking outcomes data on the enclosed computer disk
Product dimensions: 8.50(w) x 11.00(h) x 0.80(d)
About the Author
DONALD E. WIGER, PhD, is a licensed practicing psychologist and the author of The Clinical Documentation Sourcebook, Second Edition and The Psychotherapy Documentation Primer, part of Wiley's PracticePlanners series.
KENNETH B. SOLBERG, PhD, is a licensed psychologist and a faculty member at the Minnesota School of Professional Psychology.
Read an Excerpt
INTRODUCTION AND HISTORICAL OVERVIEW OF OUTCOME RESEARCH AND ASSESSMENT
How do mental health therapists know the extent to which their therapy is effective and the degree of improvement their clients experience? Answers to these and similar questions have been forming for the past several years. Most outcome research is relatively recent.
How do third-party payers, managed care companies, prospective employers, or clients determine a therapist's effectiveness? Historically, the typical indices have been the therapist's education and experience. Nevertheless, we have been encouraged for decades to monitor outcome (Hyman & Berger, 1965; Sanford, 1962; Shlien, 1966; Truax, 1966), in that it is the only way the client's welfare and the public good can be adequately served. Treatment effectiveness is intended to benefit the client, but it cannot be assessed without including therapists in outcome studies.
Are the most effective therapists those who receive the most referrals? Attain the most repeat business? Earn the most money? Receive the most letters of appreciation? Prevent the most suicides? Attain the highest number of services authorized by managed care? A clinic owner may view such measures as indicators of effectiveness because they reflect increased business. However, they may have little to do with the factors viewed as markers of effectiveness by third-party payers, gatekeepers, and regulatory agencies. In fact, some outside sources may view such indicators negatively. For example, the CEO of a clinic desires an increased number of visits for fiscal reasons, whereas a managed care director desires a decreased number of visits for the same reasons. This tug-of-war over the almighty dollar affects both perspective and decision-making processes in gauging effectiveness. The concepts of therapeutic effectiveness and financial effectiveness, therefore, may become confused because of competing self-interests.
The formula that compares the concepts of therapeutic effectiveness and cost-effectiveness is termed the cost-benefit ratio. The monetary cost of therapy is not difficult to calculate, but the benefits of therapy are not self-evident. Consider two clients, each claiming to have undergone successful mental health treatment as evidenced by a return to full premorbid functioning. If, for example, Client A requires 20 sessions to be restored to adequate mental health functioning and Client B requires only 10 sessions, the cost-benefit ratio for Client A will be twice that for Client B. Factors that increase the costs of therapy include, but are not limited to, inpatient versus outpatient treatment, number of sessions, cost per session, adjunctive therapy, and medications. Indirect costs include factors such as loss of wages, decreased productivity, and any other setbacks due to the client's impaired condition. The means or rationale for measuring outcome is therefore highly dependent on the rater's perspective. Economics are thus an important factor in evaluating treatment efficacy.
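The Client A/Client B comparison above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the $100-per-session fee are hypothetical, and benefit is normalized to 1.0 for a client restored to full premorbid functioning, as in the example.

```python
def cost_benefit_ratio(sessions, cost_per_session, indirect_costs=0.0, benefit=1.0):
    """Total cost of treatment divided by the benefit achieved.

    benefit is normalized to 1.0 when the client is restored to full
    premorbid functioning (hypothetical scaling, for illustration).
    """
    total_cost = sessions * cost_per_session + indirect_costs
    return total_cost / benefit

# Client A needs 20 sessions, Client B only 10, at the same (hypothetical)
# per-session fee; both are restored to full functioning (equal benefit).
ratio_a = cost_benefit_ratio(sessions=20, cost_per_session=100)
ratio_b = cost_benefit_ratio(sessions=10, cost_per_session=100)
print(ratio_a / ratio_b)  # prints 2.0: Client A's ratio is twice Client B's
```

Indirect costs such as lost wages would be added to the numerator, which is one reason raters with different perspectives arrive at different ratios for the same course of treatment.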
There are conflicting demands on the therapist when providing treatment. Even if the client does not have health insurance, payment for services, rent, utilities, or related expenses must come from somewhere. Only so many clients can be seen pro bono. The third-party payer demands effective treatment in fewer sessions. The clinic owner is concerned about making a profit; therefore, more sessions may be encouraged. The authors know of more than one clinic that pays therapists a bonus for seeing clients beyond a certain number of sessions (a quota system). The client just wants to get better, trusting that the therapist genuinely cares about his or her well-being and that others do not have a financial incentive in the case. Therapists go into the field to help people, but, like others, they must make mortgage and car payments, feed the children, and have a life. The clinic and third parties are concerned about the therapist outcome, while the client's significant others are interested in the outcome for the client.
Years before the current emphasis on outcome and accountability, Carter (1983) warned that resistance within an agency should be expected when the changes necessary for documenting outcome are implemented. He added that most documentation requirements will be perceived by therapists as unnecessary, time-consuming, and of little use in helping clients, serving only bureaucracies. Carter (1983, p. 122) suggested following five steps when introducing new procedures into an agency:
1. The commitment of top management
2. The transfer of this commitment throughout the organization
3. The development of a strategy to implement an outcome-monitoring system
4. Implementation
5. Integration of new information
The change in the focus of mental health treatment from a service to a business has increased the importance of total quality management (TQM), in which variables such as client satisfaction and therapist involvement are utilized in improving the quality of services (Eisen & Dickey, 1996). The client becomes an active participant in making decisions regarding treatment options and evaluation of outcome. Previously, the client's role was that of a passive patient.
Recent years have seen increased emphasis on the documentation of outcome in clinical settings. The demand for outcome documentation and research cuts across the field of health care, affecting professions such as medicine, nursing, social work, and clinical psychology. At the same time, providers of health services are often uncertain as to what the assessment of outcome really means in their local clinical settings, and often lack resources to conduct such research. This book provides practical, easy-to-use tools for assessing, tracking, and analyzing clinical outcome in mental health settings.
The terms outcome assessment and outcome research in the context of mental health treatment have been used to refer to a number of different, but related, concepts and research paradigms in the mental health literature. For the purposes of this text, four different uses of the term outcome are considered. First, psychotherapy process research compares variables within the clinical situation to determine which factors lead to the best outcome. Second, efficacy research studies compare a specific treatment against a control group under highly controlled conditions to determine whether the treatment is efficacious in causing the desired therapeutic outcome. Third, effectiveness research studies treatment outcome in real-world clinical settings. Fourth, outcome assessment involves tracking individual client outcome in clinical settings. This text focuses on techniques for outcome assessment, the fourth meaning of the term outcome. It also considers techniques that provide information about the effectiveness of treatment programs, the third meaning of the term. In the remainder of this chapter, these various uses of the term outcome are explored further.
OUTCOME AS A PROCESS
Much research over the past 50 years has attempted to further understanding of the process of psychotherapy. Process research attempts to identify the factors within the therapeutic context that are associated with more positive outcomes. These factors might include client characteristics, therapist characteristics, characteristics of the therapeutic relationship, and specific therapeutic techniques and strategies. Often, process research has focused on a search for common factors in therapy that cut across different schools of thought. The literature abounds with variables identified as affecting the outcome of therapy. For example, Orlinsky, Grace, and Parks (1994) identified five basic processes that affect therapeutic outcome: (1) quality of the therapeutic relationship, (2) skill of the therapist, (3) level of client cooperation, (4) level of client openness, and (5) length of treatment. Walborn (1996) identified four process variables that tie together the various schools of thought: (1) the therapeutic relationship, (2) cognitive insight and change, (3) emotions in therapy, and (4) client expectations. Goldfried (1980) noted two process variables common to therapies: (1) the client is provided with new corrective experiences and (2) the client is given direct feedback. Bergin and Garfield (1994) and Hubble, Duncan, and Miller (1999) provide a thorough review of the role that process variables play in psychotherapy.
Client and therapist variables have often been examined in process research. For example, Moras and Strupp (1982) reported that clients who were more affiliative and less hostile prior to counseling had more successful treatment outcomes. Conte, Plutchik, Buck, and Picard (1991) noted a number of personality traits and other variables that adversely affect the outcome of therapy. Such process variables are confounds when comparing treatment modalities, because they are likely to be common across the therapies. For example, if two therapies are being compared but the therapists' skill levels, the number of sessions, or other aspects of treatment differ, the results will not be valid.
Process research has often attempted to identify the common positive elements of different therapies. Effective therapies emphasize the common processes that are helpful. Although therapeutic procedures and terms may appear to vary, the underlying psychological mechanisms of change are common among them; therefore, the specific treatment employed is not as important as the underlying processes that are common to successful treatment. Patterson (1974) stated that the differences between therapies are accidental and that a therapy may be effective in spite of its uniqueness.
The process approach to examining outcome contrasts with the efficacy approach, which attempts to identify which treatments are beneficial for specific disorders. The aim of process research cuts across treatment modalities and diagnostic categories, investigating the processes within psychotherapy itself.
OUTCOME AS EFFICACY
The first efforts at conducting outcome research on psychological interventions asked the question, "Do patients who receive psychotherapy fare better than those who do not?" The classic Eysenck (1952) evaluation of 24 studies concluded that the effects of psychotherapy are no greater than the effects of time itself. Eysenck (1952) described his findings as follows:
Patients treated by means of psychoanalysis improve to the extent of 44 percent; patients treated eclectically improve to the extent of 64 percent; patients treated only custodially or by general practitioners improve to the extent of 72 percent. There thus appears to be an inverse correlation between recovery and psychotherapy; the more psychotherapy, the smaller the recovery rate. (p. 322)
Writers such as Bergin (1971), McNeilly and Howard (1991), and Walborn (1996) noted several statistical and methodological problems in the Eysenck study. Nevertheless, the nature of psychological interventions has changed considerably since the time of Eysenck's initial work. His ideas provided argument for several years to those who questioned whether psychotherapy was a worthwhile endeavor. Prior to Eysenck's critiques of the benefits of psychotherapy, Fuerst (1938) stated, "Unfortunately there exists a great deal of confusion and contradiction about what really can be accomplished by psychotherapy" (p. 260). Little (1972) noted that the executive director of the American Psychological Association stated, "Our credibility is in doubt" (p. 2).
After Eysenck, efforts to demonstrate the efficacy of psychotherapy emphasized well-controlled studies in which some individuals received treatment while others (a control group) did not. Many of these studies utilized traditional experimental control procedures, such as randomization, to enhance their validity. Although there was consistent progress in psychotherapy outcome research using these methodologies in the two decades after Eysenck's article, the results were inconsistent. Some studies showed statistically significant differences between treatment and control groups, while others did not. It appeared that treatment worked sometimes and not at other times. This state of affairs persisted until the advent of meta-analytic approaches in the 1970s.
Beginning with the seminal work of Smith and Glass (Smith & Glass, 1977; Smith, Glass, & Miller, 1980), meta-analyses of large numbers of individual outcome studies demonstrated conclusively that most consumers of psychotherapy fared consistently better than those who did not receive treatment. Smith and Glass's (1977) original finding indicated that those who received psychotherapy scored about two-thirds of a standard deviation in a psychologically healthier direction than those who did not receive therapy. Lambert, Shapiro, and Bergin (1986) similarly reported an improvement of close to one standard deviation. Others have offered similar conclusions based on meta-analysis (Andrews & Harvey, 1981; Lipsey & Wilson, 1994; Prioleau, Murdock, & Brody, 1983; Smith, Glass, & Miller, 1980). Whiston and Sexton (1993) noted that approximately 65 percent of those receiving psychotherapy improved, but about 6 to 11 percent got worse.
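An effect size stated in standard deviation units can be translated into a more intuitive figure: assuming normally distributed outcome scores (an assumption of this sketch, not a claim from the book), the average treated client's percentile within the untreated distribution follows from the normal cumulative distribution function.

```python
from statistics import NormalDist

def percentile_of_average_treated(effect_size):
    """Percentile of the untreated distribution that the average treated
    client exceeds, given an effect size in standard deviation units and
    assuming normally distributed outcome scores."""
    return NormalDist().cdf(effect_size)

# Smith and Glass (1977): roughly two-thirds of a standard deviation,
# meaning the average treated client outscored about 75 percent of
# untreated individuals.
print(round(percentile_of_average_treated(0.67), 2))  # prints 0.75
```

The same conversion applied to the roughly one-standard-deviation figure of Lambert, Shapiro, and Bergin (1986) puts the average treated client at about the 84th percentile of the untreated group.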
Although the meta-analytic studies provide convincing evidence that, in general, individuals who receive psychotherapy are better off than individuals who do not, this finding is not necessarily widely known or accepted. Speer (1998) pointed out that the public still wonders whether psychotherapy works, noting that in mental health clinics 50 percent of clients obtain four or fewer sessions and 25 percent do not return after the first session. Fifty percent of clients do not tell their therapists that they are discontinuing services (Phillips, 1991). Andrews (1991) criticized psychotherapists for not conducting outcome evaluations. Without empirical research showing that psychotherapy is effective, there will continue to be a crisis in the public's confidence in psychotherapy (Krawitz, 1997). In recent years, whether psychotherapy works has no longer been the major issue (Lambert, 1991); rather, the multifaceted, interrelated variables affecting the outcome of therapy are under scrutiny. Confounding findings, such as notable differences in outcome between laboratory and clinical settings, further hinder conclusions.
Although the meta-analytic studies have provided strong evidence that individuals receiving therapy do in fact fare better than those who do not receive therapy, the evidence for the differential efficacy of various therapies for various disorders has been less convincing. This is despite considerable research designed to answer the question posed by Paul (1967), who argued that the appropriate questions are not those such as, "Does psychotherapy work?" or "Does client-centered ... (or) behavioral ... therapy work?" or even, "For what (problem areas) does it work?" (p. 111). Rather, Paul argued, the appropriate question is, "What treatment, by whom, is the most effective for this individual with that specific problem, and under which set of circumstances?" Although the evidence from meta-analysis showed clearly that psychotherapy was better than no psychotherapy, research did not yield consistent evidence for the differential effectiveness of different types of psychotherapy for different types of disorders. This lack of evidence for differential effectiveness was termed the "Dodo bird verdict" by Luborsky, Singer, and Luborsky (1975). They suggested that, as in the outcome of the circular race led by the Dodo bird in Alice in Wonderland, in the race to determine which therapeutic technique is most effective, "All have won and all must have prizes" (Carroll, 1981).
The Dodo bird verdict became increasingly unsatisfactory as pressure grew in mental health to provide more definitive lists of which therapeutic techniques were effective in the treatment of specific diagnostic categories. When managed care companies asked therapists to identify treatments of proven efficacy for specific Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) disorders, the answer "Everything works equally well" was not deemed satisfactory, especially when the medical profession was quick to produce lists of medications that did have such specific applications. In response to this pressure, a task force of the American Psychological Association's (APA's) Division 12 (Society of Clinical Psychology) compiled a listing of treatments that are empirically supported; that is, whose efficacy has been established through testing in randomized clinical trials (Nathan & Gorman, 1998). Although the work of this group remains somewhat controversial, it has succeeded in compiling considerable evidence that certain psychotherapeutic interventions are efficacious in the treatment of specific psychiatric disorders.
OUTCOME AS EFFECTIVENESS
In both meta-analytic reviews of psychotherapy outcome and efforts to establish empirically supported therapies, the assumption has usually been that the gold standard for establishing efficacy is the well-controlled randomized clinical trial. The problem is that efficacy research is conducted under idealized conditions incorporating methodological features such as random assignment to treatment or control groups, carefully monitored treatment conducted according to a treatment manual, and homogeneous client populations presenting with single, well-established diagnoses. While these features are essential to demonstrate whether a particular treatment produces a desirable outcome, they also distance the efficacy study from the actual practice of psychotherapy.
The distinction between efficacy and effectiveness research was initially made in the context of medical research (for example, Brooks & Lohr, 1985). Efficacy is defined as outcome within a controlled laboratory setting in which optimal conditions prevail for demonstrating treatment outcome. Effectiveness is defined as the outcome of treatment in a real-world setting, in which results are generally much more variable than in laboratory settings. Because research on treatment outcome has traditionally focused on efficacy rather than effectiveness, its external validity is compromised (VandenBos, 1996). Efficacy studies have traditionally been conducted using experimental methods in which subjects are randomly assigned to discrete treatment groups. Highly controlled laboratory procedures are employed that attend to the specific diagnostic category being studied. Treatment is conducted strictly according to a treatment manual, homogeneous client populations are randomly assigned to experimental groups, and clients meeting criteria for more than one diagnosis are excluded (Seligman, 1995, 1996).
Calls to implement the effectiveness approach have appeared in journals of preventive medicine (Flay, 1986), neurology (Holloway, 1988), and psychiatry (Wells, 1999). Each article argues that outcome research must be applied to actual delivery of medical services. The fact that a treatment has been demonstrated to be efficacious in a randomized clinical trial does not necessarily guarantee that the same treatment will be effective in an applied clinical setting. Weisz and Weiss (1989) added that it may not be possible to generalize research findings to treatment settings unless clinical practice includes the same controls as experimental research. Perhaps such controls, from the point of view of the therapist, would mechanize and degrade therapy to a point of going through the motions of compliance.
Studies of clinical effectiveness differ substantially from efficacy studies in that effectiveness studies measure client change in typical, real-life clinical situations. Attkisson et al. (1992) referred to the efficacy/effectiveness distinction as one between clinical research and clinical services research. There are few experimental controls for variables such as number of sessions, multiple diagnoses, or other factors as in efficacy studies. Seligman has been a strong advocate of effectiveness research within psychology (Seligman, 1995, 1996; Seligman & Levant, 1998). He argued that efficacy research sacrifices external validity for the sake of internal validity, and that this is the wrong emphasis from the viewpoint of the practitioner. To achieve high internal validity, efficacy studies must utilize randomization, treatment manuals, single diagnoses, specific lengths of treatment, and other controls to isolate the effects of treatment. Seligman concluded that efficacy studies are not the best measures of psychotherapy outcome because they do not represent actual clinical situations. He asked how many clients are typically seen who are randomly assigned to a therapist, receive treatment for a prescribed number of sessions according to a treatment manual, and receive treatment only if they present with one clearly defined diagnosis.
There is an increasing emphasis on the effectiveness approach to research of mental health outcomes (Foxhall, 2000). The National Institute of Mental Health (NIMH) has issued a call for grant proposals that focus on testing the effectiveness of interventions in the actual settings in which they are delivered. Although NIMH will continue to fund the randomized clinical trials it has traditionally favored, the shift to a public health model is a major change in the funding criteria of the agency (Norquist, Lebowitz, & Hyman, 1999). Similarly, accreditation bodies generally require documentation of the effectiveness of the treatment programs to be offered by the agency or institution under review. For example, the Rehabilitation Accreditation Commission (Slaven, 1997) specifically prescribes the effectiveness (as opposed to the efficacy) approach as the appropriate standard for documentation of outcomes.
Although looking at outcome in terms of effectiveness brings the assessment process into the clinical setting, the effectiveness approach still emphasizes outcome evaluation at the level of the agency or overall treatment program. In other words, the effectiveness approach tends to ask whether a particular treatment approach really works in an applied setting, or whether clients in general appear to benefit from a particular treatment program. However, the starting point for assessing outcome for most clinicians is at the level of the individual client. In other words, the clinician must provide evidence about treatment outcomes for "Jane Doe" or "John Doe," who are clients in treatment. It is possible that at some point the clinician will also want to look at aggregate individual outcome data for evidence of program effectiveness. However, the first step is always looking at individual outcome assessment.
The dose-effect model of Howard, Kopta, Krause, and Orlinsky (1986) has provided evidence that the benefits of therapy increase with the number of sessions up to a certain point, after which the rate of improvement declines. Kopta, Lueger, Saunders, and Howard (1999) concluded that effectiveness studies have demonstrated different improvement rates depending on the nature of client distress: (1) acute distress, fastest; (2) chronic distress, intermediate; and (3) characterological problems, slowest. They further add that for most clinical syndromes, 50 percent of clients return to normal functioning by the 16th session and 75 percent return to normal functioning between sessions 26 and 58. A phase model of improvement rates (Howard, Krause, & Lyons, 1993; Howard, Orlinsky, & Lueger, 1994) is described as having three incremental stages. The first, remoralization, addresses clients who feel demoralized; their level of personal distress significantly impairs their functioning. Alleviation of subjective distress often begins within a few sessions of supportive therapy, and some clients terminate therapy once they have dealt with current stressors. The second phase, remediation, spotlights symptom reduction; treatment focuses on reducing symptoms by building coping skills. This course of therapy lasts about 16 sessions, or three to four months, and termination takes place when current symptomatology is reduced. Clients who work through this phase but want to prevent relapse or the repetition of ongoing patterns that have regularly interfered with their adaptive functioning (e.g., social, occupational) may continue to the next phase. Rehabilitation, the last phase, may last several months or years; clients work on unlearning maladaptive behaviors that have become lifelong habits and learning new, more adaptive behaviors.
Howard, Lueger, Maling, and Martinovich (1993) described the three phases as "sequentially dependent," requiring different treatment goals and outcome measures. Howard et al. (1994) held that remoralization is accomplished by encouragement and empathic listening, remediation takes place with interpretations and clarifications, and rehabilitation is aided with assertiveness training. Thus, generic or global measures of outcome provide little help to the client.
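The negatively accelerated dose-effect relationship can be sketched numerically. The log-linear form below is an illustration only (Howard and colleagues used probit analysis, not this model); its two anchor points are taken from the figures in the text, roughly 50 percent of clients recovered by session 16 and 75 percent near the upper-middle of the 26-to-58 range, and the function name and the session-45 anchor are hypothetical.

```python
import math

def fraction_recovered(sessions, n50=16, n75=45):
    """Illustrative log-linear dose-effect curve: the fraction of clients
    returned to normal functioning after a given number of sessions,
    anchored so that 50 percent recover by session n50 and 75 percent
    by session n75 (hypothetical model, not Howard et al.'s probit fit)."""
    slope = 0.25 / math.log(n75 / n50)
    value = 0.5 + slope * math.log(sessions / n50)
    return min(max(value, 0.0), 1.0)  # clamp to a valid proportion

for n in (4, 16, 45):
    print(n, round(fraction_recovered(n), 2))
```

The diminishing slope of the curve mirrors the model's central claim: each additional session buys less improvement than the one before it, which is why early sessions account for most of the measured benefit.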
INDIVIDUAL OUTCOME ASSESSMENT
Mental health professionals are currently involved in the painful process of changing roles from theoretically oriented to scientifically oriented practitioners. The substantial increase in the number of PsyDs, typically a credential of more practice-focused professionals, relative to PhDs is evidence of the current transition from research-focused to clinically focused graduate education. Lambert, Okiishi, Finch, and Johnson (1998) stated, "It appears that psychotherapists will be involved in outcome assessment either by choice or default. Professional psychologists are seemingly well suited to the task of being practitioner-scientists and using outcome assessment to the advantage of their patients" (p. 63). Sederer, Dickey, and Herrman (1996) observed that the goals of mental health research have shifted in focus from process variables to benchmarking, profiling, report cards, and instrument panels, suggesting that outcome studies must go beyond the laboratory into the field.
Clement (1996) stated that most graduate professors and clinical advisors fail to integrate research and practice, and that those who teach and maintain part-time practices usually do not blend their research into their practices either. Traditional clinical training tends to separate rather than integrate research and practice (Black, 1991; Clement, 1988; Moldawsky, 1992; Talley, Butler, & Strupp, 1994; Tyler & Clark, 1987). Strupp (1989) stated, "Although I have greatly profited from the investigation of others, nothing is as convincing as one's own experience" (p. 717).
Clinicians do not necessarily need to become outcome researchers, but all should be involved in assessing the outcome of their clients' therapy (Eisen & Dickey, 1996). Clement (1996) argues that private practice settings do not reinforce clinical psychologists' conducting research or systematic evaluation. The lack of reinforcement has led to general disdain for integrating research and practice. Clement adds that therapists may be interested in finding better ways to help people but that correlations between research findings and actual practice are discouraging (Goldfried & Wolf, 1996; Kopta, Lueger, Saunders, & Howard, 1999; Morrow-Bradley & Elliott, 1986).
Recent literature such as that of Speer (1998), Sperry et al. (1996), and Wiger (1999a) attributes the need for outcome assessment to the rise in demands for clinical accountability. Managed care companies, the self-proclaimed efficiency experts of modern mental health treatment, are often described as the culprits, requiring therapists to document their effectiveness without providing clear criteria about how this should be done.
Demand for individualized outcome measures has increased over the last 20 years. Persons (1991) pointed out that standardized outcome measures do not adequately bridge the gap between actual clinical practice and research. Clients have specific problem areas that are addressed by the therapist but may not be measured by tests. Persons suggested that outcome measurement must be individualized rather than solely standardized.
Kopta et al. (1999) have held that rising demands for accountability in clinical practice, along with recent advances in biological psychiatry, have increased the pressure to provide specific therapeutic interventions that have been empirically validated. Without such validation, several problem areas arise when attempting to continue receiving third-party payment (Barlow, 1994; Broskowski, 1995). The scientific model requires replication, evidence, and openness to scrutiny. Dornelas, Correll, Lothstein, & Wilber (1996) state that many of the outcome variables considered important by third-party payers are viewed as reductionistic by mental health therapists.
The effects of mental health treatment are both difficult to measure and complicated by a lack of agreement as to what to measure and how to measure it. Misapplication of statistical procedures has led to questionable interpretations. Before outcome documentation was required, therapeutic effectiveness was considered important, but it was not measured empirically as it is today. Earlier indirect measures included indices such as (1) whether clients returned for more sessions, (2) referrals, (3) client feedback, and (4) session content, among other informal indices. None of these measures was formalized, making it difficult if not impossible to track clients' progress with any certainty. It was often assumed that client satisfaction ratings signified client progress, but this has not been consistently verified.
There is no universally agreed-upon rating system in mental health care to indicate effectiveness. It is not possible to directly count units of client progress. Even client satisfaction ratings do not necessarily imply effective treatment, due to confounding variables (Attkisson & Zwick, 1982; Campbell, Ho, Evensen, & Bluebird, 1996; Carscaddon, George, & Wells, 1990; Edwards, Yarvis, Mueller, & Langsley, 1978; Greenfield & Attkisson, 1989; Lambert, Salzer, & Bickman, 1998; Pekarik, 1992; Pekarik & Wolff, 1996; Vuori, 1999; Williams, 1994). Many therapists are simply likable people and may be rated equally highly by clients who have experienced either little or substantial therapeutic change.
Mental health professionals care about their clients' well-being; therefore it seems intuitive that outcome measures would be welcomed with open arms. This has not been the case, however. Most therapists have actively resisted increased requirements in outcome documentation. It can be threatening for a therapist to have cases scrutinized by others for reasons other than case conceptualization. In the past, client records were considered private. (See Wiger, 1999a, for a discussion about how to clearly document mental health records yet preserve confidentiality.) Hawkins, Mathews, and Hamden (1999) pointed out, ". . . [C]linicians have relied too much on questionable information, such as retrospective reports. Retrospective reports often contain biases and other errors, due to such influences as clients' failure to notice important recent events when they occurred, forgetting, being unduly influenced by one or two salient events, attempting to make things look better or worse than they really are, and trying to please the clinician" (p. 2).
Hawkins et al. (1999) have further argued that a number of current documentation systems are based on intermittent measurement systems that use general indices of adjustment. Two problems exist in this paradigm. First, when client progress and setbacks are measured intermittently, clinical effectiveness may be compromised as more time lapses between measures. "The more frequent, relevant, credible, and specific the feedback, the more it 'teaches' the clinician how well s/he is doing at achieving the objective of the treatment" (p. 5). A second concern in current outcome techniques is the choice of tests. Too often, general or global tests of adjustment are used because they can cover a wide range of client problem areas. The trade-off is vague information by which outcomes are measured. (See Ogles, Lambert, and Masters, 1996, for a more detailed discussion.)
Dornelas et al. (1996) have suggested that in order for client change to be measured over time, multiple measurement points are essential. The procedure for collecting the data is as important as the data themselves. Concerns are noted, however, due to the added time and costs of additional data collection. Dornelas et al. suggest a shift in thinking about outcome assessment from an additional procedure to a part of routine treatment that contributes to the clinical care of the client. Informed consent in all procedures, especially after termination, is crucial due to ethical responsibilities.
Global measures are appropriate when employed periodically as measures of overall well-being, or as comparisons to normative groups, but they are not very helpful as means of measuring problem areas specific to a client. General tests cannot measure progress on specific treatment plan objectives. The clinician must decide whether to use the outcome procedures that are designed to track client-specific concerns as identified in the treatment plan, or to measure predetermined global measures, or both. Each method measures outcome, but the more vague or global the measurement instruments used, the more vague the interpretation of results.
The outcome measurement system in this text is in agreement with Hawkins et al. (1999) and Clement (1999) in that the quality of outcome information must be specific to the client and must be measured on an ongoing basis. The frequency of measurement depends on the cooperation, time availability, and accuracy of information provided by the client, collaterals, and therapist. The more feedback the therapist receives about, and obtains from, the client, the more specific information is available to fine-tune therapeutic interventions. Few would disagree that ongoing specific feedback is of higher quality than occasional general feedback when considering the individual client's best interest. Ongoing assessment monitors interventions and aids in deciding to continue, modify, discontinue, or add new procedures. This text suggests procedures in which both global and specific measures of outcome are incorporated to objectively view client progress from more than one perspective.
Howard, Lueger, et al. (1993) suggested a three-phase model of psychotherapy outcome that proposes progressive improvements of subjective well-being, reduction in symptoms, and enhanced life functioning. Improvements were noted in each area as a function of time. Initial improvements were found especially in subjective well-being. Gradual improvements took place in symptomatic distress and in life functioning. Improvements in well-being preceded reductions in symptomatic distress.
Outcome data may come from a variety of sources depending on the client's level of functioning, environmental supports, age, and other factors. Some procedures are standardized (that is, clients are compared to normal and clinical populations), while others measure changes within the individual. Both indices are helpful in outcome assessment.
SOURCES OF DATA FOR OUTCOME INDICATORS
Lambert and Lambert (1999) have suggested that an ideal study of change would comprise multiple perspectives including sources of information such as the client, therapist, relevant others, trained observers, and social agencies that store information. Speer (1998) pointed out that in any clinical situation clients will improve in some areas and regress in others. When only one outcome measure is used, it might reveal areas of progress or regression, depending on the measurement conducted. Other important areas might be overlooked. Thus, even therapy that is highly effective in several areas may appear ineffective due to a poor choice of outcome indicators. Outcome indicators incorporating multiple measurements, multiple perspectives, and multiple points in time will best aid in assessment.
Clement (1996) described traditional experimental research strategies as resulting from collection of data on large groups of subjects, in which analysis is between and within groups. The individual clinician, though, is interested in assessing changes within the individual. Most clinicians are not trained in such procedures, nor is there adequate time to closely monitor client behaviors. Therefore, information from multiple sources is necessary in outcome assessment.
The various schools of thought do not agree on which outcome measures are most effective. Walborn (1996) pointed out, for example, that the psychodynamic camp favors global measures of client change, while cognitive behaviorists prefer a symptom reduction approach to assessing outcome. Steuer et al. (1984) have suggested that several dependent measures should be taken to provide an objective perspective.
Lambert and McRoberts (1993) reviewed 116 outcome studies reported in the Journal of Consulting and Clinical Psychology (JCCP) between 1986 and 1991. They found that outcome measures fell into five source categories: (1) self-report, (2) trained observer, (3) significant other, (4) therapist, and (5) instrumental (e.g., societal records, physiological recording devices). Client reports were most often used. Twenty-five percent of the studies used client self-reports alone, three-fourths of which used more than one self-report index. The next most popular procedure was using two sources simultaneously. Self-report and observer ratings were used in 25 percent of the studies, self-report and therapist ratings in 15 percent, and self-report and instrumental sources in 8 percent. Ninety percent of the studies included a self-report, either alone or in combination with other ratings. Significant-other ratings were rarely employed. Thirty percent of the studies used six or more types of ratings.
The use of multiple raters generally leads to similar results, but at times may lead to different conclusions due to the perspective of the rater. Massey and Wu (1994) found a high correspondence among ratings on functional scales by consumers, family members, and case managers, although slight differences were noted in that case managers rated consumers somewhat lower than others did in vocational abilities and community living skills. Massey and Wu concluded that multiple perspectives are helpful in determining appropriate treatment and discharge readiness, and may help reduce tragic errors in determining accessibility to services.
When a multidisciplinary approach to treatment is taken, outcomes are especially difficult to track. For example, a client may be involved in individual and group therapy in addition to taking medications. Changes in one form of treatment may affect other modes. An alteration in medications may prove to be quite helpful, but if a change in talk therapy takes place at the same time, it is difficult to make any causal connections or attribute outcome indicators to a particular therapist or treatment variable.
Whether the primary cause of the mental disorder is biological, social, characterological, or some combination of factors is difficult to determine. However, the mode of treatment can be crucial depending on the etiology of the disorder (Pearsall, 1997). Outcomes, therefore, are highly affected by numerous variables that may be outside the therapist's control. It is extremely difficult to compare the effectiveness of therapists when the nature and severity of the clients' problems differ. For example, therapists primarily seeing clients with acute problem areas (e.g., outpatient, brief therapy) may appear on paper to be more successful than therapists seeing clients with severe, chronic disorders (e.g., inpatient, long-term) if the number of sessions is a criterion for clinical effectiveness.
INCORPORATING THE DSM-IV AND USUAL CLINICAL PROCEDURES INTO OUTCOME ASSESSMENT
This text aims to reduce or eliminate redundant paperwork in all areas of documentation by incorporating outcome assessment into usual clinical procedures. Too many outcome systems rely on additional outcome paperwork, significantly increasing and complicating the therapist's administrative duties. Some outcome systems contain several pages of questions designed to cover a wide range of problem areas. Such systems may be convenient for the test designer, but are often extremely burdensome for the client and therapist. Some other outcome measurement systems provide several separate questionnaires to cover a number of different problem areas. Problems exist in that these predetermined questions do not necessarily match individualized client problem areas. The authors know of a number of clinics and individual therapists who have purchased elaborate outcome measurement systems, but have discontinued their use due to the excessive time and effort required.
This text provides training in unbiased, standardized, and atheoretical outcome assessment that encompasses an empirically based model earlier suggested by Boorse (1976) and Persons (1991). It differs from most previous texts by its insistence on incorporating the DSM-IV as the standard. Clearly, the DSM-IV is not an outcome measurement tool or system, but it does operationally define the basic criteria for mental health disorders.
The DSM-IV is not based on underlying psychological mechanisms, but rather on observable and measurable mental health symptoms and impairments. Outcome measures, no matter how elaborate or accurate, must remain secondary to the DSM-IV. There are several excellent measurement indices available, but their proper place is to support the DSM-IV. They must never become the primary diagnostic instruments. The variables, measures, and tests may be quite helpful, but the DSM-IV remains the standard. Other outcome measures are interchangeable (i.e., several choices of diagnostic, personality, behavioral, and cognitive tests are available) depending on the therapist's needs and the validity and reliability of the instrument for the intended population.
The several therapies, theories, and schools of thought provide different modalities by which psychological problems are alleviated. The DSM-IV is the hub by which those with different or conflicting viewpoints of what works in therapy can communicate. Without the DSM-IV it is likely that poor communication between professionals would take place due to the influence of specific theoretical assumptions that are not universally shared. The assessment and treatment aspects of mental health services have both commonalities and distinct differences.
The DSM-IV is atheoretical; thus it avoids incorporating any one school of thought or causal inferences in diagnosis. The various theories abound in such speculations; therefore, therapists must be especially careful not to allow a theoretical perspective to influence their diagnoses. For example, concluding that a client is depressed solely because of a disruptive childhood (psychodynamic), lack of reinforcers (behavioral), dysfunctional thoughts (cognitive), lack of positive regard (client-centered), lack of meaning or purpose (humanistic/existential), and so forth, may be helpful in treatment, but the causalities in themselves do not provide sufficient information to form a diagnosis. Although not all agree in full with the DSM-IV nosology, it is the current standard in mental health diagnosis.
The client's DSM-IV diagnostic information is subsequently incorporated in writing treatment plan objectives. The treatment plan objectives (client-generated) are also atheoretical. However, the treatment strategies (various means by which the therapist will intervene to attain the objectives) are highly influenced by the therapist's school of thought. The bulk of outcome studies have focused on which treatment variables are most effective. The system presented in this text is designed to augment and clarify the ongoing clinical documentation in a manner in which the therapy is regularly fine-tuned and thus remains on target due to helpful outcome data. Ongoing measures are based on the client's specific problem areas within a DSM-IV standard. The information used in the diagnostic interview and treatment plan is the same information needed for ongoing outcome assessment. In addition, the therapist may choose to periodically administer brief global tests of well-being or pathology in which the client's condition can be compared to normative samples.
The relatively small amount of time required to document clinical effectiveness is easily made up in timesaving measures when the therapist is adequately trained in effective documentation techniques. This text is not designed to specifically teach documentation methods; the reader is referred to The Psychotherapy Documentation Primer (Wiger, 1999a) for such training.
Mental health professionals are under increasing pressure from a variety of sources to document outcome. However, the term outcome has been used to describe very different kinds of research endeavors. Two of these approaches, process research and efficacy research, have tended to utilize traditional experimental methodologies. Process research has investigated variables within the context of therapeutic intervention to determine which factors are associated with more positive outcomes. Some success has been obtained in identifying factors that seem to be common to various approaches to psychotherapy (Hubble et al., 1999). Efficacy research has considered various therapeutic techniques or approaches in an effort to determine whether clients who receive treatment do better than similarly distressed individuals who do not receive treatment. Meta-analysis of efficacy studies has convincingly demonstrated that individuals who receive therapy experience better outcomes than those who do not receive therapy. Efficacy studies have been less successful in demonstrating that certain therapeutic techniques work better with certain diagnostic categories. However, progress is being made in developing a set of empirically supported therapies (Nathan & Gorman, 1998).
The results of both efficacy research and process research have important implications for the practitioner. Competent clinicians should be knowledgeable about both process and efficacy research findings because such findings are relevant to their clinical work. However, practitioners are unlikely to engage in this kind of research in the context of their everyday practice, and this text is not about how to conduct the kinds of studies described earlier as process research or efficacy research. Experimental methodologies utilizing randomly assigned control groups, treatment manuals, and carefully selected subject samples are simply not feasible in most applied settings. Advocates of effectiveness research (Seligman, 1996) have argued that for these kinds of reasons the results of efficacy studies may have limited applicability to the actual clinical setting. Rather, Seligman and others have emphasized the importance of conducting outcome research in the actual setting in which services are delivered.
Most clinicians are now required to document outcomes for individual clients as part of their provider contracts. Individual outcome assessment can be conducted utilizing global measures given before and after intervention. However, the long time periods between testing do not aid in the client's treatment. More current outcome procedures incorporate both normative and individualized measures. Data from multiple sources provide a wide perspective when assessing individual outcome. The DSM-IV provides a reference point that gives the various schools of thought and outcome measurement tools common ground. Because the DSM-IV is the primary criterion for receiving and continuing mental health services, it should also be the crucial element in outcome assessment. This text presents procedures for clinicians to use in assessing individual client outcome.
It may also be possible to combine the results of individual outcome assessments in evaluating the effectiveness of a treatment program or agency. Some suggestions for this kind of analysis are also presented in this text, and procedures for documenting the outcome of treatment programs are provided. Procedures for conducting effectiveness research are also presented.
A fascinating aspect of high-speed data over cable networks is that though the foundations for creating two-way interactive services in the United States began over 30 years ago, in the late sixties and early seventies, it is only recently that consumers began to savor the fruits of widespread cable modem deployment. The promise of high-speed data over cable has been met; it works, and it is affordable.
In the race for supplying multimedia broadband services to the home, cable networks outdistance the competitors of digital subscriber line (DSL), fixed-point wireless, and fiber optic to the home systems. In addition to having a very high aggregate bandwidth delivery capability, cable networks today are already delivering a mixture of analog and digital television, toll-quality telephony, and high-speed data services. Moreover, the cable network itself is evolving into a highly scalable distribution system. The race is ongoing and for the foreseeable future it appears likely that cable will lead the race to the home for supplying television, telephony, and high-speed data services.
This book is about the implementation of today's modern high-speed digital interactive services over cable. It focuses not just on cable modems but on the entire cable network system and its potential for creating the highly pervasive broadband access network of the future. The chapters comprise a systematic discussion of cable network technology, beginning with cable plant topology.
All modern two-way interactive services, specifically modern cable modem systems, share common architectural aspects that were honed from the lessons taught by early cable modem deployments in the 1990s. A cable modem is part of a larger system, which includes the cable operator's network and the cable Internet service provider's (ISP's) network. While cable modems themselves bear much similarity to an Ethernet local area network (LAN), in reality, supplying that service requires substantial backend network services in a cable operator's facility. To make this all work for the benefit of the consumer, the North American cable operators developed a standard for cable modems that is being widely implemented by many manufacturers. Following close on the heels of the establishment of cable modem standards were various approaches for supplying telephone services over cable. The deployment of high-speed data services carries with it many aspects of socialization that are part of the education process of both the consumer and the cable operator. These include various security, service availability, and vulnerability issues. Eventually, the cable services we know today (video, data, and voice) will undergo an evolution.
Before launching the discussion of cable plant topology, it is important to review the past 30 years of cable network development, as well as the impact that government regulation and technology advances have had on the industry.
The Original Motivation for Cable Networks
The popular label for cable television networks is CATV, which stands for Community Antenna Television. CATV's roots in America date back to 1948, when twin-lead wires were strung from roof to roof in the town of Astoria, Oregon. The first use of coaxial cable appeared in 1950, in Lansford, Pennsylvania. Beginning in the late forties and continuing through the fifties, America was in the throes of a television frenzy; everyone wanted to own a TV. The problem was, not everyone could receive TV signals. Over-the-air broadcast was the only means available to distribute network television. The motivation behind the development of CATV was to improve reception, either in areas where the television signals were too weak or where there was reception interference, such as in large cities where the tall buildings deflected signals every which way, causing multipath distortions.
CATV networks grew up around cities and municipalities chiefly because the cable operators needed access to both the utility poles to string their coaxial cables and permission to dig the holes in which to bury cables, and local governments had the power to grant such access. Having right of way is a fundamental necessity for the cable operator. The permission to provide CATV service and obtain right of way is given via a franchise agreement from the local government. Periodically, a franchise agreement needs to be renewed. The large cable operators are called Multiple System Operators (MSOs) because they own the cable operations in many cities. Each city, however, has its own franchise agreement that must be renewed separately. Historically, local municipalities granted a cable franchise to only one operator. This created an environment where MSOs did not compete with each other for the same subscribers. This unique arrangement has persisted, allowing MSOs to openly join together on various activities to improve cable television technology and deployment. It also gave MSOs a monopoly in each franchise; whether that arrangement benefited customers is debatable.
The CATV network in any given town typically runs past dwellings, not businesses, as there is no need to run cable where there is no potential for adequate revenue. Thus, the primary factor used to describe the size of the cable operator is households passed (HHP), also called homes passed. HHP is based on the number of homes physically passed by the CATV network. The other factor, the take rate, is measured per service. For example, if the HHP is 10,000 for a cable operator, a 35 percent take rate for video services would mean there are 3,500 subscribers.
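The subscriber arithmetic above is simple enough to sketch in a few lines of code. The function name and figures below are illustrative only, following the example in the text:

```python
def subscribers(households_passed: int, take_rate: float) -> int:
    """Estimate subscribers for one service from homes passed (HHP)
    and that service's take rate (a fraction between 0 and 1)."""
    return round(households_passed * take_rate)

# Example from the text: 10,000 homes passed, 35% take rate for video.
print(subscribers(10_000, 0.35))  # 3500
```

Note that each service (video, telephony, data) has its own take rate against the same HHP figure, so an operator's size and its subscriber counts are related but distinct numbers.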
Today, CATV networks have evolved far beyond their roots as the community antenna into a multiple service broadband network. Concomitantly, the term CATV has been replaced by the more appropriate term cable network.
Government Regulations
The success and growth of the cable television industry has been both helped and hampered by government restrictions throughout the past 40 years. To address the competitive posturing among the cable operators and the broadcast television industry, including the developing Ultra High-Frequency (UHF) market, the Federal Communications Commission (FCC), Congress, the Supreme Court, and various other legal bodies all became involved in modifying the way cable television was permitted to operate. This section summarizes the more noteworthy events that shaped how cable networks function today.
1966. FCC issued its "Second Report and Order." To address arguments over competing rights from the broadcast off-air (over the air) television industry, the FCC imposed wide-ranging and restrictive regulations on cable television. The commission declared that cable television stations "must carry" local off-air television channels within their markets. On distant station programs, the FCC issued a nonduplication rule that required cable television operators to wait at least a day to rebroadcast a program. And cable television networks in the top 100 markets had to show evidence supporting public interest before importing a distant station. The latter regulation was imposed to protect fledgling UHF stations. In smaller markets, this restriction was not applied.
1968. The Supreme Court ruled that, based on the Copyright Act of 1909, cable operators did not have to pay royalties nor seek consent to retransmit distant television signals because the distribution process was not considered a "performance of the work." As a reaction in part to the Supreme Court ruling, the FCC reshaped its distant signal rules; the new requirements said that, within 35 miles of the top 100 markets, cable operators had to obtain permission of the originating distant station before retransmitting. In addition, cable operators located within 35 miles of an off-air station in smaller markets were required to carry signals from the nearest full network, independent, and education stations in their region or state. Cable operators outside the 35-mile zone of any station were permitted to carry distant signals as long as they did not do so in lieu of carrying a closer station of the same type. The latter rules were called anti-leapfrogging restrictions. These new rulings, however, caused confusion, and thus stalled evidentiary hearings. Awaiting clarification, the FCC declared that new hearings would be held, based on the new 1968 rules. This declaration greatly slowed down the process of retransmission consents, essentially freezing cable television expansion, especially in the major markets. This slowdown came to be called the freeze.
1969. The FCC mandated that cable operators with over 3,500 subscribers had to originate their own programming on at least one channel. This forced these cable operators to build studios, an effort that was a capital-intensive expense absent any real new revenue sources, and to dedicate a precious channel. In this year, the FCC also allowed cable operators to interconnect cable facilities.
1970. The FCC restricted telephone common carriers from providing cable television service to viewers in their operating communities, although a waiver could be issued for sufficient cause. Telephone companies were required to provide pole space if available. In this same year, the FCC also issued restrictions for cable systems that were directly or indirectly owned by national television networks or broadcast stations, and prohibited them from carrying off-air stations.
1972. The FCC issued its "Cable Television Report and Order," which was based on elements from a 1971 FCC-issued "Letter of Intent" and a 1971 FCC- and White House-issued "Consensus Agreement" between copyright holders, broadcast television operators, and cable television operators. Under this report and order, cable systems outside all television markets had to carry all television signals assigned to the cable community, including educational stations within 35 miles and other significantly viewed stations; in smaller markets, they had to carry a total of three full-network stations and one independent station in combination with local and distant stations. The same must-carry rules applied to cable systems in the top 100 markets, in addition to the carriage of additional independent stations. One nonbroadcast station was allowed for every broadcast station carried on the system. Because this was seen as lifting distant signal importation restrictions, the FCC required that cable systems in the top 100 markets have a minimum of 20 channels. At the time, this was pushing the limit of cable television amplifier technology, which forced some operators to deploy a dual cable system. Two-way communications capacity was required. The 1972 action was initially heralded as the thawing of the freeze (referred to as the rules thaw). However, the order continued to restrict the cable operator's ability to offer varied programming, as it protected the interests of broadcasters and copyright holders in the major markets. By this year, the United States had over 2,800 cable operators and over 6 million subscribers. Unfortunately, few of these subscribers were in the top markets, due to the lack of varied programming.
1975. The FCC relaxed requirements on older systems, dropped the requirements for channel capacity and access (local origination) channels, and eliminated the two-way capacity and one-for-one requirement.
1976. FCC eliminated distant-signal anti-leapfrogging requirements. Satellites appeared in this time period. Home Box Office (HBO) launched in 1976; it was the first pay cable product delivered to cable operators via satellite. This was a milestone in the cable television industry. Also in 1976, systems with fewer than 3,500 subscribers were relieved of the two-way capacity requirement. And Congress passed the Copyright Revision Act of 1976. This required cable operators to pay royalties for the retransmission of distant signals based on the system's gross revenues.
1977. A court case involving HBO (petitioner) versus the FCC and U.S. Government (respondents) was decided in favor of HBO, forcing the FCC to abandon its pay cable restrictions. Specifically, the Court of Appeals removed all FCC pay-cable anti-siphoning rules. In addition, the Court laid strict groundwork whereby current and future FCC regulation on cable had to be justified. This was a landmark case for the cable industry; it prompted the expansion of many cable systems, which henceforth began to carry more pay movie and sports channels. Subsequently, subscribership increased. Another result of the HBO decision was that the FCC now had to prove demonstrable harm to local broadcasting stations warranting federal protection. Also, now cable operators in the top 100 markets could offer compelling programming, and their subscribership, too, increased.
1980. Following a series of FCC and Supreme Court rulings, cable operators were permitted to carry as many distant signals as their viewers demanded.
1984. Congress passed the Cable Communications Policy Act ("Cable Act"), amending the Communications Act of 1934 to cover cable. Its purpose was to establish a national cable policy; establish franchise procedures that encourage cable growth and assure that cable systems are responsive to community needs and interests; outline the respective spheres of federal, state, and local authority over cable; assure that cable provides the widest possible diversity of services to the public; establish an orderly process for franchise renewal that protects cable operators against unfair denials of renewal if they meet the standards of the act; and promote competition and minimize unnecessary cable regulation. As a result of this act, franchise procedures, as well as rate-setting authorities, were put into place in local and state governments. In addition, federal procedures were established to cover leased access to cable systems, subscriber privacy, theft of cable service, and cable ownership restrictions. The act also addressed obscenity and other cable programming issues. Finally, the act amended the FCC's basic authority to include authority over cable. Cable operators gave up some rights to freely drop, re-tier, or re-price program offerings in exchange for the franchise renewal standards and the exclusion of telephone companies from offering cable services in their service areas. The act also provided that no cable system would be considered a common carrier, subject to utility-type federal or state regulation, by virtue of its provision of cable service. However, the act did not cover enhanced services such as high-speed data, telephony, or other interactive services.
1985. The Ninth Circuit Court ruled in Preferred Communications, Inc. vs. City of Los Angeles that, consistent with the First Amendment, a city could not arbitrarily restrict cable access to a single cable television company when the utility poles and conduits were physically capable of accepting more than one system. The court also drew broad analogies between newspapers and cable, finding that cable operators exercised "substantial" editorial control over their channels.
1986. The Supreme Court ruled that cable television enjoyed First Amendment rights as a communicator of originated programming and as an editor of programming produced by others. The burden of proof was put on local municipalities to show factual basis for any restrictions on "unfettered" cable operator speech.
1992. Congress passed the Cable Television Consumer Protection and Competition Act in response to complaints from consumers about excessive rate increases. The act reintroduced rate regulation for certain services and equipment provided by most cable television systems. Notably, competitors to cable (Direct Broadcast Satellite (DBS), Multichannel Multipoint Distribution Service (MMDS), broadcasters, and telephone companies) backed consumers. The act required that cable operators create a basic service tier to include all local broadcast signals and all nonsatellite-delivered distant broadcast signals that the system intended to carry, as well as all public, educational, and government access programming. The act further required the FCC to develop reasonable rates for the basic tier. This requirement included the establishment of standards, based on actual cost, for charges for subscriber installation and lease of the equipment for basic service, including a converter box and remote control unit. In addition, the FCC was directed to establish guidelines identifying unreasonable rates for higher tiers of service. The act also authorized the FCC to reduce unreasonable rates and require refunds to subscribers. The act also: required cable operators to offer their programming to future competitors, such as MMDS, Satellite Master Antenna Television (SMATV), and DBS operators, at reasonable prices on a nondiscriminatory basis; barred municipalities from unreasonably refusing to grant competitive franchises; required cable operators to carry local broadcast stations ("must carry") or, at the option of the broadcaster, to compensate the broadcaster for retransmission; and regulated the ownership by cable operators of other media such as MMDS and SMATV.
Finally, the FCC was directed to impose new regulations in many areas that affect customer service, including "cable-ready" televisions and consumer equipment, disposition of home wiring, privacy, rates for leased access channels, and more.
1996. Congress passed the Telecommunications Act of 1996, which enabled telephone companies to become open video systems, thereby coming into direct competition with cable. Telephone companies were allowed to seek a cable franchise or to buy or build wireless MMDS, SMATV, or other wireless facilities to distribute video. The act also allowed the creation of Competitive Local Exchange Carriers (CLECs), which enabled cable operators to provide telephone services in competition with the Incumbent Local Exchange Carrier (ILEC). This prompted a renewed focus on providing telephone services over cable.
Hand-in-hand with regulatory changes, numerous advances in technology were instrumental in shaping the cable industry. These events include improvements in coaxial cable, amplifiers, set-top boxes, headend equipment, satellite communications, microwave communication, use of fiber optics, use of computers, digital compression, pay-per-view, and two-way services. This section summarizes the more noteworthy technology advances and organizational events that have shaped the cable industry and fueled its growth.
1961. TelePrompTer introduced a pay TV system called KeyTV to show the second Floyd Patterson/Ingemar Johansson heavyweight fight. The pay-to-view nature of this event prompted the birth of pay-per-view.
1962. The introduction of aluminum-shielded distribution cable with foam dielectric led to major improvements in the quality of reception on 12-channel systems. Cable bandwidth ranged from 50MHz to 212MHz, with no specification for the FM band.
1965. Solid-state electronics were introduced to cable amplifier and headend equipment. The first patent for a dual heterodyne set-top box was filed by Ronald Mandell and George Brownstein. This was the first set-top box that eliminated off-air reception interference.
1966. TelePrompTer began testing of Amplitude Modulation Link (AML) microwave technology, which enabled cable operators to import distant signals.
1967. The first dual cable system with subscriber A/B switch was deployed in San Jose, California, at Gill Cable. Jerrold introduced its Transistor Main Line amplifier, improving quality and reliability.
1969. The Society of Cable Television Engineers (SCTE) was formed. Approximately 2,500 cable systems existed as the decade ended.
1970. Jerrold introduced the Starline One transistor amplifier. Harmonically Related Carriers (HRC) were introduced in headends, which extended the range of larger cable systems. Cable bandwidth was specified in the continuous range from 5MHz to 300MHz.
1972. SRS demonstrated the first computer-controlled interactive television at the National Cable Television Association (NCTA) convention. Cable manufacturers introduced the first bonded and laminated tape with braid-drop cable, improving handling and cable longevity.
1973. TelePrompTer, Scientific Atlanta, and HBO jointly demonstrated the first satellite delivery system. This landmark event meant that 97 percent of the homes in the United States could be served via satellite systems. HBO began satellite distribution, using 33-foot (10-meter) diameter receiver dishes. Solid-state amplifier equipment in the 50 to 300MHz range became common. This allowed distribution of 35 program channels. TelePrompTer experimented with FM-modulated fiber-optic systems to improve picture quality.
1976. Cable industry momentum built for satellite programming.
1977. Satellite dishes of 4.5-meter diameter were introduced. Cable manufacturers introduced low-loss polyethylene foam dielectric for trunk, distribution, and drop cables, thereby effectively lowering high-frequency cable attenuation by 10 percent.
1978. A patent was issued to James Tanner for "positive tap" premium channel blocking.
1979. TRW introduced 400MHz hybrid technology, which expanded channel capacity to 55. ATC installed data transmission capabilities in four systems, providing companies with direct links between facilities.
1980. Addressable set-top converters were first implemented. The anticipation of the first two-way addressable services fueled cable and computer industry interest in an interactive age. Low-noise gallium arsenide amplifiers became less expensive and more reliable, which helped to stimulate the emerging DBS industry. Backyard dishes could now tune in the satellite programs.
1982. Time Fiber Communications introduced fiber-optic Mini-Hub for Multiple Dwelling Units (MDUs). United Cable deployed fiber optics in Alameda, California.
1984-1985. Many new program channels came online, including Disney Channel, Playboy, Discovery Channel, Home Shopping Network, and others. Cable bandwidth was extended to 500MHz.
1985. Some 6,600 cable systems were serving 42 million homes, at nearly a 50 percent households passed (HHP) take rate. General Electric introduced its first compression technology for channel expansion.
1986. HBO began to scramble its signal full time, thereby preventing backyard dish owners from receiving it without paying.
1987. With the introduction of the 550MHz system, channel capacity rose to 80. Jim Chiddix, Chief Engineer at American Television and Communications Corporation in Denver, and others demonstrated a fiber-optic system to the NCTA engineering committee. Their effort laid the groundwork for future HFC systems. Cable manufacturers extended bandwidth for trunk, feeder, distribution, and drop cables to 600MHz.
1988. Cable Television Laboratories (CableLabs) was established. Stereo audio appeared on some television channels. HFC began using analog modulation, thereby successfully demonstrating compatibility with existing analog TV sets, which numbered more than a billion worldwide.
1990. General Instruments introduced its DigiCipher system at the High Definition Television (HDTV) proceedings at the FCC. The Cumulative Leakage Index (CLI) rules were put into effect by the FCC. They required cable operators to tighten their maintenance standards to protect the aeronautical industry. Pirate set-top boxes began to appear.
1991. Cable manufacturers extended bandwidth for trunk, feeder, distribution, and drop cables to 1000MHz.
1992. Tele-Communications, Inc. (TCI), Time Warner, and Viacom began building HFC systems with a target node size of fewer than 1,000 homes passed.
1993. Time Warner, with help from Silicon Graphics, launched the Full Service Network trial in Orlando, Florida. Cox Communications launched its "ring-in-ring" network, which used double, self-healing rings in the distribution plant. This greatly increased reliability and met the stricter telephone industry standards.
1994. Internationally, MPEG2 became the preferred algorithm for compressing and sending digital TV signals. Regional network designs began to include Synchronous Optical Network (SONET) and Asynchronous Transfer Mode (ATM) technology. The North American cable television industry expanded into the telecommunications industry with the formation of the Teleport Communications Group (TCG), a joint ownership of TCI, Time Warner, Continental, Comcast, and Cox. The Institute of Electrical and Electronic Engineers (IEEE) 802.14 Working Group was formed to focus on high-speed data-over-cable networks. The Digital Audio Video Council (DAVIC) consortium was formed to create video-on-demand specifications.
1995. Time Warner began experimenting with telephony over cable. In December, at the Western Cable Show, John Malone, Chairman of TCI, announced a cable modem specification initiative.
1996. TCI rolled out @Home service in San Francisco. Time Warner launched Road Runner in Akron and Canton, Ohio. By the end of the year, 10 MSOs had launched commercial cable modem services. TCI launched Headend In The Sky (HITS) digital video programming system. The first Data Over Cable Service Interface Specifications (DOCSIS) for cable modems were released in December.
1997. CableLabs launched its OpenCable project, with the objective of developing advanced digital set-top terminals for use in two-way cable networks. General Instrument announced a long-term plan to supply 15 million advanced digital set-top boxes to nine leading MSOs. This heralded the new digital video age.
1998. WebTV and WorldGate competed for Internet over-TV access. CableLabs began the DOCSIS cable modem certification program.
2000. By midyear, CableLabs (see ... , which provides up-to-date certification status) had certified 58 DOCSIS cable modems from more than two dozen vendors and had qualified Cable Modem Termination System (CMTS) products from five vendors.
The Impetus for Two-Way Services
In the original community antenna systems, signals traveled in only one direction, from the antenna to the subscriber. The direction from the cable operator is called the forward path, or the downstream. Recall from the Government Regulations chronology at the beginning of the chapter that in 1972, to end the struggle between the TV broadcasters and the CATV industry, FCC proceedings resulted in what came to be called the "rules thaw." At that time many regulations came into effect that impacted the cable industry. Among those changes was the creation of the return path, or upstream spectrum allocation for two-way cable services. This provided the frequency allocation for subscribers sending data to the headend.
Prior to the 1972 "rules thaw," cable technology typically supported 12 channels of video programming. The 1972 actions of the FCC prompted many franchise agreements to be rewritten, forcing cable operators to carry 21 to 24 channels and requiring capital expenditures to enhance their cable plant. But because the technology at the time didn't support that capacity on a single cable, operators had to install a second cable, thereby creating the dual cable system. (A dual cable system is two separate cable systems running in a parallel topology. It requires double the length of cable, double the number of amplifiers, and double the amount of maintenance when both cable plants are active.) Subscribers were given a switch box, on which they selected either the "A" or "B" cable.
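The capacity constraint described above can be sketched with some simple channel arithmetic. The figures here are illustrative assumptions (6 MHz NTSC channel spacing and a 54-216 MHz VHF forward band are typical values, not taken from the text):

```python
# Rough estimate of analog channel capacity from usable forward
# bandwidth, assuming 6 MHz NTSC channel spacing (an illustrative
# assumption; real channel plans set aside gaps such as the FM band).
NTSC_CHANNEL_MHZ = 6

def channel_slots(low_mhz: float, high_mhz: float) -> int:
    """Number of 6 MHz channel slots that fit in the forward band."""
    return int((high_mhz - low_mhz) // NTSC_CHANNEL_MHZ)

# A classic 12-channel plant used roughly the 54-216 MHz VHF range;
# guard bands and reserved spectrum cut the raw slot count down to
# about 12 usable channels.
print(channel_slots(54, 216))   # raw slots in a 12-channel-era plant
print(channel_slots(54, 300))   # raw slots in a later 300 MHz plant
```

Once gaps are subtracted, a single early-1970s cable simply could not hold 21 to 24 channels, which is what forced the dual cable workaround.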
Later in the 1970s, an improvement in video amplifier technology permitted many more downstream channels than in previous generations. This, coupled with the original two-way vision, led to the development of the topology in which the A cable would be used for video programming and the B cable would be used as a mid-split system for supporting two-way commercial services. This came to be called the A/B mid-split architecture.
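The "mid-split" label refers to where the crossover between upstream and downstream spectrum sits on the B cable. The boundary frequencies below are typical North American values given purely for illustration; actual deployments varied by operator and era:

```python
# Illustrative upstream/downstream frequency splits (typical North
# American boundary values; these are assumptions for illustration,
# not figures from the text).
SPLITS = {
    # name: ((upstream_low_mhz, upstream_high_mhz), downstream_start_mhz)
    "sub-split": ((5, 30), 54),    # early two-way residential plants
    "mid-split": ((5, 108), 162),  # the B-cable commercial design
}

for name, ((up_lo, up_hi), down_start) in SPLITS.items():
    print(f"{name}: upstream {up_lo}-{up_hi} MHz "
          f"({up_hi - up_lo} MHz wide), downstream from {down_start} MHz")
```

The point of the mid-split design was the much wider return band: far more upstream spectrum than a sub-split residential plant, which is what made two-way commercial services plausible on the B cable.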
But in the seventies the technology wasn't up to the new capabilities, so the subscriber base failed to emerge, and the additional revenue cable operators needed to realize the vision of two-way services failed to materialize. Consequently, two-way services went by the wayside for many more years. There were trial deployments of two-way technologies throughout the 1980s, but serious deployments were never realized; the market was just not ready to expand to two-way. From 1989 through 1994, several two-way data services were deployed that began to lay the groundwork and raise awareness that the time for market deployment was nearing. In late 1994, the U.S. cable industry issued several Request for Information (RFI) documents, which helped to focus vendor involvement. In December 1995, the cable industry launched a broad-based initiative that has led to a concentrated effort to expand the services offered over CATV networks, specifically by adding two-way high-speed data and packet-based telephony. This effort is being made in conjunction with a focus on upgrading CATV plants to combine optical fiber and coaxial cable. These are the aforementioned hybrid fiber-coaxial (HFC) plants that allow operators to grow their broadband service offerings.
An excellent overview of the vision of two-way services in the early seventies appeared in a November 1971 IEEE Spectrum article, written by Ronald K. Jurgen. Entitled "Two-way Applications for Cable Television Systems in the '70s," it offers the best description of the motivation of the time:
Now, as the '70s move ahead, there is a new impetus to further growth in cable television systems. This impetus stems from the demonstrated engineering capability to bring many more channels and new communication services to subscribers while giving them the facility to "talk back" to the system and, in some cases, to all other subscribers on the system. This two-way, or bidirectional, capability opens up a whole new realm of application possibilities. The implications of two-way cable facilities are so important to the overall communications policies of the United States that government agencies and other policy-proposing groups are taking a close look at this emerging new technology.
In parallel with high-speed two-way data developments, a worldwide effort in the 1990s was made to move from analog signal transmission to digital signal transmission for CATV systems. The digital formats in use are based on a set of standards drafted by the Moving Pictures Experts Group (MPEG). Digitization of television signals provides compression and the capability to support enhanced quality, including high-definition TV (HDTV) and high-speed data service. But the process of moving from analog to digital CATV transmission will take time. Consequently, in 1999 many cable operators were offering digital as well as analog TV channels, and the MPEG standards also address the encoding and decoding of NTSC, PAL, and SECAM formats in support of the millions of deployed analog TV sets around the world.
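The capacity gain from digitization can be sketched with a quick back-of-the-envelope calculation. The figures below are illustrative assumptions (a 6 MHz slot under ITU-T J.83 Annex B 256-QAM carries roughly 38.8 Mbps of payload, and a standard-definition MPEG-2 program runs at roughly 4 Mbps); they are not taken from the text:

```python
# Back-of-the-envelope digital channel arithmetic (illustrative
# assumptions: ~38.8 Mbps payload per 6 MHz 256-QAM slot, ~4 Mbps
# per standard-definition MPEG-2 program stream).
SLOT_PAYLOAD_MBPS = 38.8
SD_MPEG2_MBPS = 4.0

# How many SD digital programs fit where one analog channel used to?
programs_per_slot = int(SLOT_PAYLOAD_MBPS // SD_MPEG2_MBPS)
print(programs_per_slot)
```

Under these assumptions, roughly nine or ten standard-definition digital programs occupy the spectrum of a single analog channel, which is why operators could offer digital tiers without reclaiming large amounts of analog spectrum.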
The following milestones mark the progress of CATV system enhancements since 1995 to support advanced television services and two-way high-speed data and voice services:
- Operators begin aggressively upgrading all-coaxial cable plants to HFC.
- Systems boast a large number of program channels, 80-plus.
- Cities begin awarding multiple franchises.
- MSOs are permitted to become CLECs and to offer telephone services.
- Operators begin to focus on high-speed data and telephony as additional services.
- Operators begin significant cable modem rollouts in 1995 with proprietary products and later in 1999 with standards-based products.
- AT&T and other cable operators deploy circuit-switched telephony services. CableLabs focuses on packet voice services.
- Digital TV channels based on MPEG2 standards deployed via "digital set-top boxes."
High-Speed Data over Cable Standards
Recently, numerous standards activities around the world have focused on cable modem technology. The first standards group to undertake the task was the IEEE 802.14 CATV working group, which was first chartered in November 1994. Impatient in the face of the slow-moving IEEE process, the North American cable industry began to develop its own set of specifications. Within a year, a set of specifications was in place, called the Data over Cable Service Interface Specifications (DOCSIS), which are detailed in Chapter 5.
On the international front, in September 1994, an industrial consortium called the Digital Audio Video Council (DAVIC) began its standards process, focusing initially on the video-on-demand market; it quickly switched its attention to the intelligent set-top box, digital video based on international standards, and two-way interactive services. In June 1999, DAVIC completed its task and turned its specifications over to the international standards authorities. In September 1999, DAVIC ceased to exist; many of its members moved to a new group called the TV Anytime Forum ... .
The IEEE 802.14 committee continued its work despite the launch of the North American initiative, with the objective of creating an international, rather than national, standard. A great deal of crossover, both of information and personnel, took place between the IEEE and the DOCSIS groups. IEEE 802.14 produced its draft specification in 1998. Also in 1998, in coordination with DOCSIS, the 802.14 effort started a new working group aimed at developing an advanced physical layer modulation scheme that would be suitable for both the IEEE standard and the DOCSIS specifications. But by September 1999 the joint effort had ceased, followed by the disbanding of the IEEE 802.14 working group in November 1999.
The DOCSIS Specifications
The current DOCSIS specifications are the result of a project initiated by the work of a group called Multimedia Cable Network System Partners Limited (MCNS), whose members include TCI, Time Warner, Cox, Comcast, and their partners, Continental, Rogers, and CableLabs. The goal of MCNS was to speed the development of the communications and operations support interface specifications for cable modems and associated equipment. The specifications were designed to be nonvendor-specific, allowing cross-manufacturer compatibility for high-speed data communications services over two-way hybrid fiber-coax (HFC) cable television systems.
MCNS/DOCSIS was the brainchild of John Malone, then Chairman of TCI, in December 1995, in response to broadband access competition, vendor postures, and the lack of progress in the public standards process of the IEEE 802 LAN/MAN (Metropolitan Area Network) committee. It was originally a closed development effort by the six MCNS cable companies and selected vendors (BayNetworks/LANCity, now Nortel; GI; and Broadcom), with CableLabs helping with process management; today, many vendors participate under nondisclosure agreements (NDAs). Once completed, the DOCSIS specifications were made publicly available via ... , with CableLabs in charge of a strict revision and control process for updates.
The DOCSIS specifications are actually a family of coordinated specifications dealing with many aspects of a cable modem access system. The best known is the Radio Frequency Interface (RFI) specification, usually referred to as the DOCSIS Specification. The initial release of the RFI specification was DOCSIS RFI Version 1.0, in December 1996. It was based on an evolved LANCity protocol, targeted at residential, low-cost, off-the-shelf cable modems with certified interoperability between different vendors. The architecture of the DOCSIS system is a single, large Ethernet-based bridged LAN, with a single-ISP service provider architecture. Version 1.0 is primarily a best-effort Internet access system and was not designed for Quality of Service (QoS) support. DOCSIS RFI Version 1.0 was adopted by the Society of Cable Telecommunications Engineers (SCTE) Data Standards Subcommittee (DSS) as its standard in July 1997. In the fall of 1997, it was adopted as the U.S. position in the ITU J.112 recommendation.
In 1999, CableLabs released DOCSIS RFI Version 1.1, based on the needs of the PacketCable packet voice and video project (PacketCable is another CableLabs project). DOCSIS RFI Version 1.1 added substantial protocol support to provide dynamic QoS facilities for packet voice services, in addition to packet data services. Other enhancements include baseline privacy and multicast support, among others. Version 1.1 also has packet recognition support for IEEE 802.1p tagged Ethernet frames; the tagging supports both priority and virtual LAN (VLAN) tagging. Note that while DOCSIS RFI Version 1.1 is substantially different from Version 1.0, a DOCSIS Version 1.1 modem can operate in a fully backwards-compatible DOCSIS Version 1.0 mode, and DOCSIS V1.1 continues to provide a single, large Ethernet-based bridged LAN architecture.
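The raw downstream capacity of a DOCSIS 1.x channel follows directly from its modulation: symbol rate times bits per symbol. The symbol rates below are the standard North American ITU-T J.83 Annex B values, used here only as a back-of-the-envelope sketch; post-FEC payload figures are approximate:

```python
# Rough DOCSIS 1.x downstream capacity: symbol rate times bits per
# symbol (standard North American ITU-T J.83 Annex B parameters;
# post-FEC payload figures in the comments are approximate).
def raw_rate_mbps(msym_per_s: float, bits_per_symbol: int) -> float:
    """Raw (pre-FEC) channel rate in Mbps."""
    return msym_per_s * bits_per_symbol

qam64 = raw_rate_mbps(5.057, 6)   # ~30.3 Mbps raw (~27 Mbps payload)
qam256 = raw_rate_mbps(5.361, 8)  # ~42.9 Mbps raw (~38.8 Mbps payload)
print(round(qam64, 1), round(qam256, 1))
```

Because every modem on a node shares this single bridged channel, these per-channel figures are aggregate capacity, not per-subscriber throughput, which is part of why QoS became important in Version 1.1.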
Today, CableLabs runs an impressive DOCSIS vendor certification process for cable modems. The acceptance of DOCSIS in the North American cable operator community is predicated on a sufficient number of vendors being certified and product being available. In October 2000, more than 90 cable modems were DOCSIS V1.0 certified from approximately 36 manufacturers. DOCSIS V1.1 certified cable modems are expected in the first quarter of 2001.
For Further Information
An excellent review of the advances in cable television from the technology perspective can be found in the book History Between Their Ears: Recollections of Pioneer CATV Engineers by Archer S. Taylor, available from the Cable Center at ... .
Information about CableLabs is available at ... and the DOCSIS project at ... .
Information about the history of DAVIC and its specifications can be found at ....
The CED Communications Engineering and Design magazine reports on the cable and broadband industry. Its Web site is ... .
Table of Contents
Quick Reference Guide to Sample Forms.
DEFINING MENTAL HEALTH OUTCOMES.
Introduction and Historical Overview of Outcome Research and Assessment.
The Need for Outcome Assessment.
Individual and Normative Approaches to Outcome Assessment.
INDIVIDUALIZED OUTCOME ASSESSMENT.
Measuring and Quantifying Ongoing Behaviors.
Assessing Outcome Through the DSM-IV Interview and Intake Information.
Assessing Outcome Using the Treatment Plan.
Assessing Outcome Using Progress Notes and Ongoing Charting.
NORMATIVE OUTCOME ASSESSMENT.
Defining Clinically Significant Change.
Normative Data and Examples for Four Common Outcome Measures.
Selecting Standardized Outcome Measures.
Collecting and Analyzing Normative Outcome Data.
Integrating Individual and Normative Outcome Measures.
About the Authors.
About the Disk.