The most up-to-date resource of comprehensive information for conducting cross-battery assessments
The Cross-Battery assessment approach—also referred to as the XBA approach—is a time-efficient assessment method grounded solidly in contemporary theory and research. The XBA approach systematically integrates data across cognitive, achievement, and neuropsychological batteries, enabling practitioners to expand their traditional assessments to more comprehensively address referral concerns. This approach also includes guidelines for identification of specific learning disabilities and assessment of cognitive strengths and weaknesses in individuals from culturally and linguistically diverse backgrounds.
Like all the volumes in the Essentials of Psychological Assessment series, Essentials of Cross-Battery Assessment, Third Edition is designed to help busy practitioners quickly acquire the knowledge and skills they need to make optimal use of psychological assessment instruments. Each concise chapter features numerous callout boxes highlighting key concepts, bulleted points, and extensive illustrative material, as well as test questions that help you to gauge and reinforce your grasp of the information covered.
Essentials of Cross-Battery Assessment, Third Edition is updated to include the latest editions of cognitive ability test batteries, such as the WISC-IV, WAIS-IV, and WJ III COG, and special purpose cognitive tests including the WMS-IV and TOMAL-II. This book now also covers many neuropsychological batteries such as the NEPSY-II and D-KEFS and provides extensive coverage of achievement batteries and special purpose tests, including the WIAT-III, KM-3, WRMT-3, and TOWL-4. In all, this book includes over 100 psychological batteries and 750 subtests, all of which are classified according to CHC theory (and many according to neuropsychological theory). This useful guide includes a timesaving CD-ROM, Essential Tools for Cross-Battery Assessment (XBA) Applications and Interpretation, which allows users to enter data and review results and interpretive statements that may be included in psychological reports.
Note: CD-ROM/DVD and other supplementary materials are not included as part of eBook file.
Read an Excerpt
Note: The figures and/or tables mentioned in this sample chapter do not appear on the Web.
For the past six decades, cognitive ability tests have made significant contributions to psychology research and practice. Although individually administered intelligence batteries continue to be used widely by clinicians, they do not adequately measure many of the cognitive abilities that contemporary psychometric theory and research specify as important in understanding learning and problem solving. The lack of representation of important cognitive abilities on most current intelligence batteries creates a gap between theories of the structure of intelligence and the traditional practice of measuring these abilities (Flanagan & McGrew, 1997). In order to narrow the theory-practice gap, commonly used intelligence tests need to be modernized so that a broader range of cognitive abilities can be both measured and interpreted in a more valid and defensible manner. The CHC (Cattell-Horn-Carroll) Cross-Battery approach described in this volume was developed specifically by McGrew and Flanagan (1998) as a method to update assessment practice by grounding it solidly within contemporary psychometric theory.
HISTORY AND DEVELOPMENT OF CHC CROSS-BATTERY ASSESSMENT
The process of analyzing and classifying human cognitive abilities "has intrigued scientists for centuries" (Kamphaus, Petoskey, & Morgan, 1997, p. 33). Attempts to define the construct of intelligence and to explain and classify individual differences in cognitive functions have been characterized by significant variability for decades. The differences among theories of intelligence are exemplified by the numerous multiple-intelligences models that have been offered over the years to explain the structure of intelligence. Some of the most popular models include Carroll's Three-Stratum Theory of Cognitive Abilities, Gardner's Theory of Multiple Intelligences, the Cattell-Horn Fluid-Crystallized (Gf-Gc) theory, Feuerstein's Theory of Structural Cognitive Modifiability (SCM), the Luria-Das Model of Information Processing, and Sternberg's Triarchic Theory of Intelligence (see Flanagan, Genshaft, & Harrison, 1997, for a comprehensive description of these theories). Each of these theories represents an attempt to comprehend a class of phenomena and, ultimately, to fulfill the chief goal of science: "to minimize the mental effort needed to understand complex phenomena through classification" (Thurstone, 1935, p. 45; cited in Flanagan, McGrew, & Ortiz, 2000). To achieve this goal, each theory of intelligence provides a taxonomic framework for classifying and analyzing the nature of the cognitive characteristics that account for the variability in observed intellectual performance among and between individuals (Flanagan, McGrew, et al., 2000).
Among the popular theoretical frameworks, psychometric theories are the oldest and most well established. Furthermore, the psychometric approach is the most research-based and has produced the most economically efficient and practical instruments for measuring cognitive abilities in applied settings (Neisser et al., 1996; Taylor, 1994). The reader is referred to Carroll (1993), Gustafsson and Undheim (1996), Ittenbach, Esters, and Wainer (1997), Kamphaus (1993), Sattler (1988), and Thorndike and Lohman (1990) for historical information on the development of psychometric theories of intelligence.
Recently, psychometric theories of intelligence have converged on a more complete multiple cognitive abilities taxonomy ("complete" in a relative sense, because theories are never truly complete), which reflects a review of the extant factor-analytic research conducted over the past 60 years. This taxonomy serves as the organizational framework for both the Carroll and Cattell-Horn models (Carroll, 1983, 1989, 1993, 1997; Gustafsson, 1984, 1988; Horn, 1988, 1991, 1994; Horn & Noll, 1997; Lohman, 1989; Snow, 1986), the two most prominent psychometric theories of intelligence proposed to date (Flanagan, McGrew, et al., 2000; McGrew & Flanagan, 1998; Sternberg & Kaufman, 1998; Woodcock, McGrew, & Mather, 2001).
Recent theory-driven, joint, or "cross-battery" factor analyses of the major intelligence batteries (e.g., Flanagan & McGrew, 1998; Keith, Kranzler, & Flanagan, 2000; McGhee, 1993; McGrew, 1997; Woodcock, 1990; Woodcock et al., 2001) indicated that the majority of current intelligence tests do not adequately assess the complete range of broad cognitive abilities included in either Horn's (1991, 1994) or Carroll's (1993, 1997) model of the structure of intelligence. Of course, this is not surprising because the vast majority of extant intelligence tests were never specifically designed or developed to operationalize CHC theory. Therefore, use of the CHC Cross-Battery approach provides practitioners with two unique advantages: (a) It allows data gathered both within and across test batteries to be organized and interpreted in a theoretically and empirically meaningful way; and (b) it specifies and allows examination of the empirically validated links between specific cognitive abilities and specific areas of academic functioning. These advantages have profound implications for examiners and examinees, given the emergence of research that indicates that many of the cognitive abilities that contribute significantly to the explanation of academic skills are either not measured or not measured well by existing intelligence tests (Keith, 1999; McGrew, Flanagan, Keith, & Vanderwood, 1997; Vanderwood, McGrew, Flanagan, & Keith, 2000). The need to have readily available tests with which to clearly specify and validly measure cognitive abilities, along with the need to examine the known links between specific cognitive abilities and specific academic skills, helped to spark the development of the CHC Cross-Battery approach and provide compelling reasons for its adoption and use (Flanagan & McGrew, 1997; Flanagan, McGrew, & Ortiz, 2000; McGrew & Flanagan, 1998).
RATIONALE FOR THE CHC CROSS-BATTERY APPROACH
In the intellectual assessment literature, Woodcock (1990) was first to advance the notion of crossing intelligence batteries to measure a more complete range of broad abilities included in contemporary psychometric theory, following his compilation of a series of cross-battery factor analyses of several intelligence batteries. Similarly, although Kaufman did not use the term cross-battery, he has long advocated the practice of supplementing intelligence batteries (particularly the Wechsler Scales) to gain a more complete understanding of cognitive functioning (e.g., Kaufman, 1994). Because it is clear that most single intelligence batteries provide limited information when considered within the context of contemporary theory and research, CHC Cross-Battery assessment emerged in response to this limitation and as a way of advancing the science of assessment (Flanagan & McGrew, 1997).
Briefly, the CHC Cross-Battery approach was developed to (a) provide practitioners with a way to conduct more valid and comprehensive assessments of cognitive abilities and processes; (b) circumvent the significant weaknesses in intracognitive discrepancy models for the diagnosis of learning disability; (c) provide researchers with a multiplicity of theory-driven and empirically supported classifications of intelligence tests that can be used to design and improve research studies on human cognitive abilities (Flanagan & McGrew, 1997; McGrew & Flanagan, 1998); and (d) provide test developers with a classification system, a blueprint that can be used to conceptualize new tests and evaluate and modify existing ones. The reader is referred to Flanagan, McGrew, and colleague (2000) for a detailed discussion of the more specific and fundamental reasons for the development of the cross-battery method.
The CHC Cross-Battery approach is designed to spell out how practitioners can conduct assessments that approximate the total range of broad cognitive abilities more adequately than most single intelligence batteries can (Carroll, 1997, p. 129). According to Carroll (1998), this approach "can be used to develop the most appropriate information about an individual in a given testing situation" (p. xi). Likewise, Kaufman (2000) stated that the approach can serve to "elevate [test] interpretation to a higher level, to add theory to psychometrics and thereby to improve the quality of the psychometric assessment of intelligence" (p. xv).
According to McGrew and Flanagan (1998) the CHC Cross-Battery approach is a time-efficient method of cognitive assessment that is grounded in contemporary psychometric theory and research on the structure of intelligence. More specifically, it allows practitioners to measure validly a wider range (or a more selective but in-depth range) of abilities than can be represented by a single intelligence battery. The approach is based on three foundational sources or pillars of information (Flanagan & McGrew, 1997; McGrew & Flanagan, 1998). Together, the three pillars (summarized in Rapid Reference 1.1) provide the knowledge base necessary to organize theory-based, comprehensive, reliable, and valid assessments of cognitive abilities.
Pillar #1: A Well-Validated Theoretical Foundation
The first pillar of the approach is a relatively complete taxonomic framework for describing the structure and nature of intelligence. This taxonomy is reflected by the Cattell-Horn-Carroll theory of cognitive abilities (CHC theory), which represents an integration of Carroll's (1993) three-stratum theory and the Cattell-Horn Gf-Gc theory (Horn, 1994; see also Flanagan, McGrew, et al., 2000; McGrew, 1997; McGrew & Flanagan, 1998; Woodcock et al., 2001; Woodcock, personal communication, July 16, 1999). The reader is referred to Carroll (1993, 1997) and Horn and Noll (1997) for comprehensive descriptions of their respective theories.
Although there are several important differences between the Carroll (1993) and Horn (1991, 1994) models (see McGrew & Flanagan, 1998), in order to realize the practical benefits of the calls for more theory-based interpretation (Kaufman, 1979, 1994; Kamphaus, 1998; Kamphaus et al., 1997), it was considered necessary to settle upon a single, integrated cognitive abilities taxonomy (McGrew, 1997). A first effort to create a single CHC taxonomy for use in the evaluation and interpretation of intelligence batteries was proposed by McGrew (1997). McGrew and Flanagan (1998) and Flanagan, McGrew, and colleague (2000) subsequently presented slightly revised integrated models based on additional research. The integrated CHC model presented in Flanagan and McGrew (2000) is used in the current work and is the validated framework upon which cross-battery assessments are based. The CHC model is presented in Figure 1.1.
In the CHC model, cognitive abilities are classified at three strata that differ in degree of generality (Carroll, 1993). The broadest or most general level of ability in the CHC model is represented by stratum III, located at the apex of the hierarchy. This single cognitive ability, which subsumes both broad (stratum II) and narrow (stratum I) abilities, is interpreted by Carroll as representing a general factor (i.e., g) that is involved in complex higher order cognitive processes (Gustafsson & Undheim, 1996).
The exclusion of g in Figure 1.1 does not mean that the CHC model used in this book does not subscribe to a separate general human ability or that g does not exist. Rather, it was omitted by McGrew (1997), McGrew and Flanagan (1998), and Flanagan, McGrew, and colleague (2000) because it was judged to have little practical relevance to this method of assessment and interpretation. That is, the CHC Cross-Battery approach was designed to improve psychological and psycho-educational assessment practice by describing the unique pattern of broad (stratum II) cognitive abilities of individuals (McGrew & Flanagan, 1998).
The most prominent and recognized abilities in the model are located at stratum II. These broad abilities include Fluid Intelligence (Gf), Crystallized Intelligence (Gc), Visual Processing (Gv), and so forth (see Figure 1.1) and represent "basic constitutional and longstanding characteristics of individuals that can govern or influence a great variety of behaviors in a given domain" (Carroll, 1993, p. 634). The broad CHC abilities vary in their emphasis on process, content, and manner of response. Approximately 70 narrow (stratum I) abilities are subsumed by the broad CHC abilities (see Figure 1.1). Narrow abilities "represent greater specializations of abilities, often in quite specific ways that reflect the effects of experience and learning, or the adoption of particular strategies of performance" (Carroll, 1993, p. 634).
It is important to recognize that the abilities within each level of the hierarchical CHC model typically display positive intercorrelations (Carroll, 1993; Gustafsson & Undheim, 1996). For example, the different stratum I (narrow) abilities that define the various CHC domains are correlated positively to varying degrees. These intercorrelations give rise to and allow for the estimation of the stratum II (broad) ability factors. Likewise, the positive correlations among the stratum II (broad) CHC abilities are sometimes used as justification for the estimation of a stratum III (general) g factor (e.g., Carroll, 1993). The positive factor intercorrelations within each level of the CHC hierarchy indicate that the different CHC abilities do not reflect independent (that is, uncorrelated) traits (Flanagan & McGrew, 2000).
Overall, the CHC conception of intelligence is supported extensively by factor-analytic (i.e., structural) evidence as well as by developmental, neurocognitive, and heritability evidence (see Horn & Noll, 1997, and Messick, 1992, for a summary). In addition, a mounting body of research is available on the relations between the broad CHC abilities and many academic and occupational achievements (see McGrew & Flanagan, 1998, for a review of this literature). Furthermore, studies have shown that the CHC structure of intelligence is invariant across the lifespan (e.g., Bickley, Keith, & Wolfe, 1995), and across ethnic groups and gender (e.g., Carroll, 1993; Gustafsson & Balke, 1993; Keith, 1997, 1999). In general, the CHC theory is based on a more thorough network of validity evidence than are other contemporary multidimensional ability models of intelligence (see Kranzler & Keith, 1999; McGrew & Flanagan, 1998; Messick, 1992; Sternberg & Kaufman, 1998). According to Daniel (1997), the strength of the multiple (CHC) cognitive abilities model is that it was arrived at "by synthesizing hundreds of factor analyses conducted over decades by independent researchers using many different collections of tests. Never before has a psychometric ability model been so firmly grounded in data" (pp. 1042–1043). The broad and narrow abilities of the CHC model are defined briefly as follows.
Broad and Narrow CHC Ability Definitions
The definitions provided here are consistent with those presented in McGrew and Flanagan (1998) and Flanagan, McGrew, and colleague (2000). They were derived from an integration of the writings of Carroll (1993), Gustafsson and Undheim (1996), Horn (1991), McGrew (1997), McGrew, Werder, and Woodcock (1991), and Woodcock (1994).
Fluid Intelligence (Gf) Fluid Intelligence encompasses mental operations that an individual uses when faced with a novel task that cannot be performed automatically. These mental operations include forming and recognizing concepts, perceiving relationships among patterns, drawing inferences, comprehending implications, problem solving, extrapolating, and reorganizing or transforming information. Inductive and deductive reasoning are considered to be the hallmark narrow ability indicators of Gf (Carroll, 1993; McGrew & Flanagan, 1998). Definitions as well as corresponding task demands and select subtests of the Gf narrow abilities are presented in Rapid Reference 1.2.
Crystallized Intelligence (Gc) Crystallized Intelligence refers to the breadth and depth of a person's accumulated knowledge of a culture and the effective use of that knowledge. This store of predominately verbal or language-based knowledge represents those abilities that have been developed largely through the investment of other abilities during educational and general life experiences (Horn & Noll, 1997). Gc abilities are those mentioned most often by lay persons who are asked to describe an "intelligent" person (Horn, 1988). The image of a sage captures to a large extent the essence of Gc (McGrew & Flanagan, 1998). Definitions as well as corresponding task demands and select subtests of the Gc narrow abilities are presented in Rapid Reference 1.3.
Quantitative Knowledge (Gq) Quantitative Knowledge encompasses an individual's store of accumulated quantitative, declarative, and procedural knowledge. The Gq knowledge base is necessary to use quantitative information and manipulate numeric symbols.
The difference between Gq and Quantitative Reasoning (RQ; subsumed by Gf) is noteworthy. While Gq represents an individual's store of accumulated mathematical knowledge, RQ represents the ability to reason inductively and deductively when solving quantitative problems. For example, when a task requires math skills and general math knowledge (e.g., knowing what the multiplication symbol means), Gq would be evident. When a task involves solving for a missing number in a number sequence task (e.g., 1, 2, 4, 8, 16, ____), RQ would be required. The narrow Gq abilities are described in Rapid Reference 1.4.
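The inductive step in the number-sequence example can be sketched in code: the solver abstracts a rule from the observed terms and extrapolates it. This sketch is purely illustrative (it is not part of the XBA materials), and it tests only two hypothetical rules, a constant difference and a constant ratio.

```python
def next_term(seq):
    """Infer the next term of a sequence by testing two simple rules:
    a constant difference (arithmetic) or a constant ratio (geometric).
    This mirrors the inductive aspect of RQ: abstracting a rule from
    observed cases and extrapolating it to a new case."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:        # arithmetic rule holds
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:])]
    if len(set(ratios)) == 1:       # geometric rule holds
        return seq[-1] * int(ratios[0])
    return None                     # no simple rule found

print(next_term([1, 2, 4, 8, 16]))  # 32 (each term doubles the last)
```

By contrast, a task such as computing 7 × 8 draws on stored mathematical knowledge (Gq) rather than on rule induction.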
Reading/Writing Ability (Grw) Reading/Writing Ability, like Gc and Gq, is an accumulated store of knowledge. The Grw knowledge base includes basic reading and writing skills necessary for comprehending written language and expressing thoughts and ideas through writing. It includes both basic abilities (e.g., reading decoding, spelling) and complex abilities (e.g., reading comprehension and writing composition). Currently, Grw is not well defined or researched within the CHC framework. In applied settings, Grw (and Gq) are conceived of as achievement domains and are therefore measured by achievement tests, not by intelligence tests. In Carroll's (1993) three-stratum model, eight narrow reading and writing abilities are subsumed by Gc in addition to other abilities. In the CHC models presented by McGrew (1997), McGrew and Flanagan (1998), and Flanagan, McGrew, and colleague (2000), these eight narrow abilities define the broad Grw ability (see Figure 1.1). Because Grw abilities are measured predominantly by achievement tests, they will not be discussed in detail here.
Short-Term Memory (Gsm) Short-Term Memory refers to the apprehension and holding of information in immediate awareness as well as the ability to use the information within a few seconds. Gsm is described as a limited capacity system because most individuals can retain only seven "chunks" of information (plus or minus two) in this system at one time. Examples of Gsm include the ability to remember a telephone number long enough to dial it, and the ability to retain a sequence of spoken directions long enough to complete the task. Because there is a limit to the amount of information that can be held in short-term memory, information is typically lost after only a few seconds. When a new task requires an individual to use his or her Gsm to store new information, the previous information held in short-term memory is either lost or stored in the acquired knowledge bases of the individual (i.e., Gc, Gq, Grw) through the use of long-term storage and retrieval abilities. The Gsm narrow abilities are described in Rapid Reference 1.5.
Visual Processing (Gv) Visual Processing refers to the generation, perception, analysis, synthesis, storage, retrieval, manipulation, and transformation of visual patterns and stimuli (Lohman, 1994). An individual who can effectively reverse and rotate objects mentally, interpret how objects change as they move through space, perceive and manipulate spatial configurations, and maintain spatial orientation would be regarded as having a strength in Gv abilities (McGrew & Flanagan, 1998). Various narrow abilities subsumed by Gv are described in Rapid Reference 1.6.
Auditory Processing (Ga) At the broadest level, auditory abilities "are cognitive abilities that depend on sound as input and on the functioning of our hearing apparatus" (Stankov, 1994, p. 157) and reflect "the degree to which the individual can cognitively control the perception of auditory stimulus inputs" (Gustafsson & Undheim, 1996, p. 192). Auditory Processing (Ga) requires the perception, analysis, and synthesis of patterns among auditory stimuli as well as the discrimination of subtle differences in patterns of sound (e.g., complex musical structure) and speech when presented under distorted conditions. Although Ga abilities do not require language comprehension (Gc), they appear to be important in the development of language skills (e.g., Morris et al., 1998). Ga subsumes most of those abilities referred to as "phonological awareness/processing" (e.g., Phonetic Coding). However, the Ga domain is very broad and encompasses many specific abilities beyond phonological awareness and processing abilities. Select Ga abilities are described in Rapid Reference 1.7.
Long-Term Storage and Retrieval (Glr) Long-Term Storage and Retrieval is the ability to store new or previously acquired information (e.g., concepts, ideas, items, names) in long-term memory and to retrieve it fluently later through association (Horn, 1991). Glr abilities have been prominent in creativity research, where they have been referred to as idea production, ideational fluency, and associational fluency (Carroll, 1993; McGrew & Flanagan, 1998). Glr has been confused often with a person's stores of acquired knowledge (i.e., Gc, Gq, and Grw). It is important to realize, however, that Gc, Gq, and Grw represent what is stored in long-term memory, while Glr is the efficiency by which this information is stored in and later retrieved from long-term memory (Flanagan, McGrew, et al., 2000).
It is also important to distinguish between the different processes involved in Glr and Gsm. Although the expression long-term carries with it the connotation of days, weeks, months, and years, long-term storage processes can begin within a couple of minutes or hours of performing a task. Therefore, the time between initial task performance and recall of information related to that task is not of critical importance in defining Glr. Rather, the occurrence of an intervening task that engages short-term memory during the interim before the attempted recall of the stored information (e.g., Gc) is the critical or defining characteristic of Glr (Woodcock, 1994). Rapid Reference 1.8 describes several narrow Glr memory and fluency abilities.
Processing Speed (Gs) Processing Speed is akin to mental quickness and is frequently associated with intelligent behavior (Nettelbeck, 1994). Processing speed involves performing cognitive tasks fluently and automatically, particularly when under pressure to maintain focused attention and concentration. The expression attentive speediness appears to capture the essence of Gs. Gs abilities require little complex thinking or mental processing and are usually measured by fixed-interval timed tasks. Three different narrow speed-of-processing abilities are subsumed by Gs in the present CHC model. These narrow abilities are described in Rapid Reference 1.9.
Decision/Reaction Time or Speed (Gt) Both Carroll and Horn in their respective cognitive ability models include a broad speed ability that differs from Gs. The ability proposed by Carroll, Processing Speed (Decision/Reaction Time or Speed; Gt), subsumes narrow abilities that reflect an individual's quickness in reacting (reaction time) and making decisions (decision speed). The ability proposed by Horn, Correct Decision Speed (CDS), is quite similar to Carroll's Gt ability and is typically measured by recording the time an individual needs to provide an answer to a problem on a variety of tests (e.g., letter series, classifications, vocabulary; Horn, 1988, 1991). After a review of the descriptions of Gt and CDS offered by Carroll and Horn, respectively, it appeared that CDS is a much narrower ability than Gt. Therefore, CDS is subsumed by Gt in the CHC model used in this book (for details, see Figure 1.1; Flanagan et al., 2000; McGrew, 1997; and McGrew & Flanagan, 1998).
It is important to understand the difference between Gt and Gs. According to Flanagan and McGrew (2000),
Gt abilities reflect the immediacy with which an individual can react (typically measured in seconds or parts of seconds) to stimuli or a task, while Gs abilities reflect the ability to work quickly over a longer period of time (typically measured in intervals of 2–3 minutes). Being asked to read a passage (on a self-paced scrolling video screen) as quickly as possible and, in the process, touch the word "the" with a stylus pen each time it appears on the screen, is an example of Gs. The individual's Gs score would reflect the number of correct responses (taking into account errors of omission and commission). In contrast, Gt may be measured by requiring a person to read the same text at their normal rate of reading and press the space bar as quickly as possible whenever a light is flashed on the screen. In this latter paradigm, the individual's score is based on the average response latency or the time interval between the onset of the stimulus and the individual's response. (pp. 44–45)
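The scoring contrast in the quotation can be made concrete with a short, hypothetical sketch: a Gs score aggregates correct responses over a fixed interval while penalizing errors, whereas a Gt score is an average response latency. The function names and the numbers below are invented for illustration and do not come from any actual battery.

```python
def gs_score(hits, false_alarms, misses):
    """Gs-style score: correct responses over a fixed interval,
    adjusted for errors of commission (false alarms) and
    omission (misses)."""
    return hits - false_alarms - misses

def gt_score(latencies_ms):
    """Gt-style score: average response latency in milliseconds,
    measured from stimulus onset to the individual's response."""
    return sum(latencies_ms) / len(latencies_ms)

# Hypothetical data: 48 correct touches with 2 false alarms and 3 misses,
# and four reaction-time trials in milliseconds.
print(gs_score(hits=48, false_alarms=2, misses=3))  # 43
print(gt_score([310, 290, 305, 295]))               # 300.0
```

Note that a higher Gs score and a lower Gt score both indicate faster performance, which is why the two measures are not interchangeable.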
Because Gt is not measured by any of the major intelligence batteries, it will not be discussed further in this book.
The reader is referred to Carroll (1993, 1997), Flanagan, McGrew, and colleague (2000), Horn (1991, 1994), Horn and Noll (1997), and McGrew and Flanagan (1998) for a comprehensive description of CHC theory and the abilities it encompasses, as well as for supporting evidence for and limitations of the theory. For a discussion of additional developments and potential future refinements and extensions of the CHC model, see Flanagan, McGrew, and colleague (2000) and Woodcock et al. (2001).
We realize that other structural models and theories of cognitive abilities have made significant contributions to the intelligence knowledge base and have unique features that may lead to modifications in and perhaps illuminate possible shortcomings of CHC theory. Notwithstanding, contemporary CHC theory is presented here because it is currently the most researched, empirically supported, and comprehensive descriptive hierarchical psychometric framework from which to organize thinking about intelligence-test interpretation. According to Gustafsson and Undheim (1996), "the empirical evidence in favor of [this] hierarchical arrangement of abilities is overwhelming" (p. 204). As such, the CHC theory is the taxonomic framework around which cross-battery assessment and interpretation are organized (see also Carroll, 1997, 1998; Flanagan et al., 1997; Genshaft & Gerner, 1998; McGrew, 1997; McGrew & Flanagan, 1998; Woodcock, 1990; Ysseldyke, 1990).
Pillar #2: Broad Cognitive Ability Classifications
The second pillar of the CHC Cross-Battery approach is the CHC broad (stratum II) classifications of cognitive ability tests. Specifically, based on the results of a series of cross-battery confirmatory factor-analysis studies of the major intelligence batteries, McGrew and Flanagan (1998) and Flanagan, McGrew, and colleague (2000) classified all the subtests of these batteries according to the particular CHC broad cognitive abilities they measure. Their CHC classifications based on these analyses are presented in Table 1.1.
The gaps or holes in Table 1.1 exemplify the theory-practice gap that exists in the field of intellectual assessment. The data in the table show that the WPPSI-R, K-ABC, KAIT, and CAS batteries measure only two to three broad CHC abilities adequately. The WPPSI-R measures primarily Gv and Gc. The K-ABC measures primarily Gv and Gsm, and to a much lesser extent, Gf; while the KAIT measures primarily Gf, Gc, and Glr, and to a much lesser extent, Gv and Gsm. The CAS measures primarily Gs, Gsm, and Gv. Finally, while the DAS, SB:IV, WISC-III, and WAIS-III do not provide sufficient coverage to narrow the theory-practice gap, their comprehensive measurement of approximately four CHC abilities, as depicted in Table 1.1, is nonetheless an improvement over the previously mentioned batteries (Flanagan, McGrew, et al., 2000; McGrew & Flanagan, 1998).
The results of the cross-battery factor analyses presented in Table 1.1 demonstrate that the amount of information yielded by most single intelligence batteries is limited; thus, it may be necessary to supplement any one of the major batteries with tests from other batteries to ensure that certain abilities are well represented in an assessment as dictated by referral concerns. In order to supplement an intelligence test in a defensible and valid manner, however, it is necessary to understand what abilities underlie the major cognitive batteries.
Classification of all tests at the broad ability level is necessary to improve upon the validity of cognitive assessment and interpretation (McGrew & Flanagan, 1998). Specifically, broad ability classifications are necessary because they ensure that the CHC constructs that underlie such assessments are minimally affected by construct-irrelevant variance (Messick, 1989, 1995). In other words, knowing what tests measure what abilities enables clinicians to organize tests into clusters that contain only measures that are relevant to the construct or ability of interest.
To clarify, construct-irrelevant variance is present when an "assessment is too broad, containing excess reliable variance associated with other distinct constructs . . . that affects responses in a manner irrelevant to the interpreted constructs" (Messick, 1995, p. 742). For example, the WISC-III Verbal IQ (VIQ) has construct-irrelevant variance because, in addition to its four indicators of Gc (i.e., Information, Similarities, Vocabulary, Comprehension), it has one indicator of Gq (i.e., Arithmetic). Therefore, the VIQ is a mixed measure of two distinct, broad CHC abilities (Gc and Gq); it contains reliable variance (associated with Gq) that is irrelevant to the interpreted construct of Gc (McGrew & Flanagan, 1998). This represents a grouping together of subtests on the basis of face validity (e.g., grouping tests together that appear to measure the same common concept), an inappropriate aggregation of subtests that can actually decrease reliability and validity (Epstein, 1983). The purest Gc composite on the WISC-III is the Verbal Comprehension Index, because it contains only construct-relevant variance.
Construct-irrelevant variance can also operate at the subtest (as opposed to composite) level. For example, the Verbal Analogies test on the WJ-R measures both Gc and Gf. That is, in factor-analytic studies, the Verbal Analogies test had significant loadings on both the Gc and Gf factors. Therefore, this test is considered factorially complex, a situation that complicates interpretation of this measure (e.g., Is poor performance due to low vocabulary knowledge [Gc], to poor reasoning ability [Gf], or to both?).
In short, interpretation is far less complicated when composites are derived from relatively pure measures of the underlying construct (e.g., tests printed in bold type in Table 1.1). "[A]ny test that measures more than one common factor to a substantial degree yields scores that are psychologically ambiguous and very difficult to interpret" (Guilford, 1954, p. 356; cited in Briggs & Cheek, 1986). Therefore, CHC Cross-Battery assessments are designed using only empirically strong or moderate (but not factorially complex or mixed) measures of CHC abilities, following the information presented in Table 1.1 (i.e., tests printed in bold).
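The screening rule described above can be sketched in code. This is an illustrative sketch only, not part of the book's materials: the subtest names and loading labels below are hypothetical stand-ins for the empirical classifications in Table 1.1, and the point is simply that mixed (factorially complex) tests and tests of other broad abilities are excluded before a composite is formed.

```python
# Illustrative sketch (not from the book): screening a candidate test pool so
# that a composite contains only construct-relevant measures. Names and
# loading labels are hypothetical; real classifications come from Table 1.1.

from dataclasses import dataclass

@dataclass
class SubtestClassification:
    name: str
    broad_ability: str  # e.g., "Gc", "Gq"
    loading: str        # "strong", "moderate", or "mixed"

pool = [
    SubtestClassification("Information", "Gc", "strong"),
    SubtestClassification("Similarities", "Gc", "strong"),
    SubtestClassification("Vocabulary", "Gc", "strong"),
    SubtestClassification("Comprehension", "Gc", "moderate"),
    SubtestClassification("Arithmetic", "Gq", "strong"),       # construct-irrelevant for Gc
    SubtestClassification("Verbal Analogies", "Gc", "mixed"),  # factorially complex (Gc/Gf)
]

def construct_relevant(pool, target):
    """Keep only strong or moderate (never mixed) indicators of the target ability."""
    return [t for t in pool
            if t.broad_ability == target and t.loading in ("strong", "moderate")]

gc_tests = construct_relevant(pool, "Gc")
print([t.name for t in gc_tests])
# Arithmetic (a Gq measure) and the factorially complex Verbal Analogies drop out,
# mirroring why the VCI is a purer Gc composite than the VIQ.
```

The filter embodies the chapter's rule of thumb: a composite built only from these survivors contains no reliable variance associated with a distinct construct.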
To date, more than 250 CHC broad ability classifications have been made based on the results of cross-battery factor-analytic studies (such as those presented in Table 1.1) and the logical task analyses of intelligence-test experts (see Flanagan, McGrew, et al., 2000, for a discussion). These classifications of cognitive ability tests guide practitioners in identifying measures that assess various aspects of the broad cognitive abilities (such as Gf and Gc) represented in CHC theory. These classifications have been integrated into the CHC Cross-Battery Worksheets provided in Appendix A.
If constructs are broad and multifaceted, like those represented at stratum II in the CHC model, then each component (i.e., CHC broad ability) "should be specified and measured as cleanly as possible" (Briggs & Cheek, 1986, p. 130, emphasis added). Because the approach is designed to include only empirically strong or moderate (but not mixed) measures of CHC abilities in appropriate (i.e., construct-relevant) composites, CHC Cross-Battery assessment offers a more valid means of measuring the CHC constructs than that offered by most single intelligence batteries (see Flanagan, in press).
Pillar #3: Narrow Cognitive Ability Classifications
The third pillar of the cross-battery approach is the CHC narrow (stratum I) classifications of cognitive ability tests. These classifications were originally reported in McGrew (1997) and later reported, with minor modifications, in McGrew and Flanagan (1998) and Flanagan, McGrew, and colleagues (2000). Classifications of cognitive ability tests according to content, format, and task demand at the narrow (stratum I) ability level were necessary to improve further upon the validity of intellectual assessment and interpretation (see Messick, 1989). Specifically, these narrow ability classifications were necessary to ensure that the CHC constructs that underlie assessments are well represented. According to Messick (1995), construct underrepresentation is present when an "assessment is too narrow and fails to include important dimensions or facets of the construct" (p. 742).
Interpreting the Wechsler Block Design (BD) test as a measure of Visual Processing (i.e., the broad Gv ability) is an example of construct underrepresentation, because the BD test measures only one narrow aspect of Gv (i.e., Spatial Relations). At least one other Gv measure (i.e., subtest) that is qualitatively different from Spatial Relations (measured by BD) must be included in an assessment to ensure adequate representation of the Gv construct. That is, two or more qualitatively different indicators (i.e., measures of two or more narrow abilities subsumed by the broad ability) are needed for appropriate construct representation (see Comrey, 1988; Messick, 1989, 1995). The aggregate of BD (a measure of Spatial Relations at the narrow ability level) and Object Assembly (a measure of Closure Speed at the narrow ability level), for example, would provide a good estimate of the broad Gv ability because these tests are strong measures of Gv (see Table 1.1) and represent qualitatively different aspects of this broad ability.
The Verbal Comprehension Index (VCI) of the WAIS-III is an example of good construct representation, because the VCI includes Vocabulary (VL), Similarities (LD/VL), Comprehension (LD), and Information (K0), which represent qualitatively different aspects of Gc. Despite the fact that the construct of Gc is well represented on the Wechsler Intelligence Scales, there are few composites among the major intelligence batteries that are both relatively pure (i.e., containing only construct-relevant tests) and well represented (i.e., containing qualitatively different measures of the broad ability represented by the composite; see Flanagan, McGrew, et al., 2000, for a review). In fact, most major intelligence batteries yield composites characterized by construct-irrelevant variance and have two or more constructs that are underrepresented (McGrew & Flanagan, 1998).
In addition to interpreting a single subtest as a measure of a broad ability, construct underrepresentation occurs when the aggregate of two or more measures of the same narrow (stratum I) ability is interpreted as measuring a broad (stratum II) CHC ability. For example, the Memory for Names and Visual-Auditory Learning tests of the WJ-R are interpreted as measuring the broad ability of Glr (Woodcock & Mather, 1989), even though they are primarily measures of Associative Memory (MA), a narrow ability subsumed by Glr. Thus, the Glr cluster of the WJ-R is most appropriately interpreted as an estimate of Associative Memory (a narrow ability) rather than as an estimate of Long-Term Storage and Retrieval (a broad ability).
"A scale [or broad CHC ability cluster] will yield far more information-- and, hence, be a more valid measure of a construct-- if it contains more differentiated items [or tests]" (Clarke & Watson, 1995). CHC Cross-Battery assessment circumvents the misinterpretations that can result from underrepresented constructs by specifying the use of two or more qualitatively different indicators to represent each broad CHC ability. In order to ensure that qualitatively different aspects of broad abilities are represented in assessment, classification of cognitive ability tests at the narrow (stratum I) ability level was necessary. This process involved the use of a systematic expert consensus process to classify the more than 250 cognitive ability tests previously mentioned according to the narrow (stratum I) abilities they measure (see Flanagan, McGrew, et al., 2000). These classifications aid in the selection of qualitatively different test indicators for each of the broad abilities represented in CHC Cross-Battery assessments. Thus, construct validity is maximized rather than compromised (McGrew & Flanagan, 1998; Messick, 1995).
The tests of the major intelligence batteries are classified at both the broad and narrow ability levels on the CHC Cross-Battery Worksheets available in Appendix A.
In sum, the latter two cross-battery pillars guard against two ubiquitous sources of invalidity in assessment: construct-irrelevant variance and construct underrepresentation. Taken together, the three pillars underlying the cross-battery approach provide the necessary foundation from which to organize assessments of cognitive abilities that are more theoretically driven, comprehensive, and valid. The subsequent chapters in this book describe how to organize and interpret CHC Cross-Battery assessments and instruct practitioners in the appropriate use of the related worksheets and summary sheets.
Cross-Battery Assessment in Perspective
It is important to realize that the crossing of batteries described in this book is not an entirely new method of intellectual assessment per se; it is common practice in neuropsychological assessment (e.g., Lezak, 1976, 1995; Wilson, 1992) and is carried out routinely by astute practitioners (Brackett & McPherson, 1996). In fact, Kaufman continues to advocate for and instruct practitioners in supplemental testing methods (see Kaufman, 2000). Kaufman, Lichtenberger, and Naglieri's (1999) suggestion for "test integration" is one such example (p. 332). Notwithstanding, a time-efficient method for crossing intelligence batteries was not formally operationalized until recently (Flanagan, McGrew, et al., 2000; McGrew & Flanagan, 1998). The CHC Cross-Battery approach defined here provides a systematic means for clinicians to make valid, up-to-date interpretations of current intelligence batteries in particular, and to augment them in a way that is consistent with the empirically supported CHC theory of cognitive abilities.
Through an understanding of the breadth and depth of broad and narrow CHC cognitive abilities and their relations to outcome criteria (e.g., specific academic skills), it will become clear that the measurement of these abilities, via cross-battery assessment, supersedes global IQ in the evaluation of learning and problem-solving capabilities (Flanagan, 2000; Flanagan, McGrew, et al., 2000). Moving beyond the boundaries of a single test kit by adopting the psychometrically and theoretically defensible cross-battery principles represents a significantly improved method of measuring cognitive abilities (Carroll, 1998; Kaufman, 2000). Furthermore, because the cross-battery approach is theory-focused (rather than test-kit focused), its principles and procedures can be used with any intelligence battery.
THE NEED FOR A CROSS-BATTERY ASSESSMENT APPROACH
The need for cross-battery assessment techniques that broaden the assessment of cognitive functioning beyond the confines of a single battery is apparent not only in school and clinical psychology (e.g., Brackett & McPherson, 1996; Flanagan & McGrew, 2000; Kaufman, 1994; Kaufman et al., 1999; Woodcock, 1990), but also in neuropsychology (e.g., Lezak, 1976, 1995; Wilson, 1992). In fact, as previously stated, neuropsychological assessment has for years been characterized by the crossing of various standardized tests in an attempt to measure a broader range of brain functions than that offered by a single instrument (Lezak, 1976, 1995). Unlike Flanagan and McGrew's approach, however, the omnipresent techniques of crossing batteries within the field of neuropsychological assessment do not appear to be grounded in a systematic process that is both psychometrically and theoretically defensible. Thus, as Wilson (1992) cogently pointed out, the field of neuropsychological assessment needs an eclectic approach that can guide practitioners through the selection of measures that would result in more specific and delineated patterns of function and dysfunction--an approach that provides more clinically useful information than one that is "wedded to the utilization of subscale scores and IQs" (p. 382).
Indeed, all fields involved in the assessment of cognitive functioning need an approach that can aid practitioners in their attempts to "touch all of the major cognitive areas, with emphasis on those most suspect on the basis of history, observation, and on-going test findings" (Wilson, 1992, p. 382; see also Brackett & McPherson, 1996). Although the theories and conceptual models that underlie neuropsychological assessment may differ from those underlying other types of assessment (e.g., psychoeducational), the principles and procedures that define the CHC Cross-Battery approach to assessment can be adopted for use within any field.
APPLICATION OF THE CHC CROSS-BATTERY APPROACH
In order to ensure that CHC Cross-Battery assessment procedures are psychometrically and theoretically defensible, it is recommended that practitioners adhere to three guiding principles. These principles are presented in Rapid Reference 1.10 and are described briefly in the following pages. (It is important to note that the CHC Cross-Battery Worksheets that comprise Appendix A incorporate these principles in order to facilitate the application of this method of assessment.)
Guiding Principle 1
When constructing broad (stratum II) ability composites or clusters, one should include only relatively pure CHC indicators (i.e., those tests that had either strong or moderate [but not mixed] loadings on their respective CHC factors in cross-battery factor analyses). There is one exception to this principle: A test that was classified logically at the broad (stratum II) level may be used in cross-battery assessments if there is a clear, established relation between it and the format of a test that was classified empirically. For example, although the WMS-III Digit Span test has not been included in adequately designed CHC cross-battery factor analyses to date, it is most likely a good indicator of Memory Span (MS), a narrow ability of Gsm. This is because it is very similar in testing format (e.g., administration procedure, task demand, nature of stimuli) to the Wechsler Intelligence Scales' Digit Span tests, which have had consistently strong loadings on Gsm factors in CHC theory-driven cross-battery factor analyses (e.g., Woodcock, 1990; Woodcock et al., 2001). As a general rule of thumb, empirically classified tests should be selected over logically classified tests whenever feasible. This will ensure that only construct-relevant tests are included in cross-battery assessments. (Empirically and logically classified tests are clearly marked on the CHC Cross-Battery Worksheets presented in Appendix A.)
Guiding Principle 2
When constructing broad (stratum II) ability composites, include two or more qualitatively different narrow (stratum I) ability indicators for each CHC domain to ensure appropriate construct representation. Without sufficient empirically or logically classified tests available to represent constructs adequately, inferences about an individual's broad (stratum II) ability cannot be made. For example, when a composite is derived from two measures of Vocabulary (VL; for example, WJ-R Oral Vocabulary and Picture Vocabulary), it is inappropriate to generalize about an individual's broad Gc ability because the Gc construct is underrepresented. In this case the composite (i.e., the WJ-R Comprehension-Knowledge [or Gc] Cluster) is best interpreted as a measure of Lexical Knowledge (a narrow stratum I ability) rather than as Gc (a broad stratum II ability). Alternatively, inferences can be made about an individual's broad Gc ability based on a composite that is derived from one measure of Lexical Knowledge and one measure of General Information (i.e., two qualitatively different indicators of Gc; see the Gc worksheet in Appendix A). Of course, as stated earlier, the more broadly an ability is represented (i.e., through the derivation of composites based on multiple qualitatively different narrow ability indicators), the more confidence one has in drawing inferences about that broad ability based on the composite score. A minimum of two qualitatively different indicators per CHC composite is recommended for practical reasons (viz., time-efficient assessment; McGrew & Flanagan, 1998; Woodcock et al., 2001).
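Guiding Principle 2 reduces to a simple decision rule that can be sketched in code. The sketch below is illustrative only (the subtest/narrow-ability pairings are the WJ-R and Wechsler examples from the text, entered by hand): a candidate composite supports broad-ability inferences only when it spans at least two distinct narrow abilities.

```python
# Hedged sketch of Guiding Principle 2 (not an official XBA tool): a composite
# represents a broad CHC ability only if it aggregates two or more qualitatively
# different narrow-ability indicators.

def representation_level(subtests):
    """subtests: list of (subtest_name, narrow_ability_code) pairs for one
    candidate composite. Returns 'broad' if at least two distinct narrow
    abilities are covered, otherwise 'narrow'."""
    distinct_narrow = {ability for _, ability in subtests}
    return "broad" if len(distinct_narrow) >= 2 else "narrow"

# Two Vocabulary (VL) measures: Gc is underrepresented, so the cluster should
# be interpreted at the narrow level (Lexical Knowledge), not as Gc.
wj_r_gc_cluster = [("Oral Vocabulary", "VL"), ("Picture Vocabulary", "VL")]
print(representation_level(wj_r_gc_cluster))

# Lexical Knowledge (VL) plus General Information (K0): two qualitatively
# different indicators, so broad Gc inferences are defensible.
mixed_gc_composite = [("Vocabulary", "VL"), ("Information", "K0")]
print(representation_level(mixed_gc_composite))
```

The same check generalizes to any broad ability: adding further distinct narrow-ability indicators only strengthens the inference, which is why the principle states a minimum of two rather than an exact count.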
Guiding Principle 3
When conducting CHC Cross-Battery assessments, it is important to select tests from the smallest number of batteries in order to minimize the effect of spurious differences among test scores that may be attributable to differences in the characteristics of independent norm samples (McGrew, 1994). For example, the Flynn effect (Flynn, 1984) indicates that, on average, the scores of any two tests standardized 10 years apart differ by about three standard-score points. Using the WJ-R to augment the WISC-III and DAS, or the WJ III to augment the CAS or WAIS-III, following the steps outlined in this book, will ensure a valid and comprehensive assessment of most CHC broad abilities (see Keith, Flanagan, et al., 2000). Because the WISC-III and WJ-R, for example, were normed within 2 years of one another and both were found to have exemplary standardization sample characteristics (Kamphaus, 1993; Kaufman, 1990; Salvia & Ysseldyke, 1991), this combination of batteries would be appropriate for CHC Cross-Battery assessments (see Hanel, 2001; Mascolo, 2001).
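The Flynn-effect figure cited above implies a simple back-of-envelope calculation: roughly 0.3 standard-score points of norm inflation per year, so batteries normed a decade apart differ by about 3 points on average. The sketch below is illustrative arithmetic only; the specific norming years are hypothetical examples, not exact publication dates.

```python
# Back-of-envelope arithmetic implied by the Flynn effect (Flynn, 1984): norms
# inflate roughly 0.3 standard-score points per year, so two batteries normed
# 10 years apart differ by about 3 points on average. Norm years below are
# illustrative, not exact.

FLYNN_RATE = 0.3  # standard-score points per year

def expected_norm_gap(norm_year_a, norm_year_b, rate=FLYNN_RATE):
    """Average standard-score difference attributable solely to outdated norms."""
    return abs(norm_year_a - norm_year_b) * rate

# A 10-year norming gap yields roughly the 3-point difference cited in the text.
print(expected_norm_gap(1989, 1999))

# Batteries normed within 2 years of one another (the WISC-III/WJ-R situation)
# introduce well under a point of norm-related difference.
print(expected_norm_gap(1989, 1991))
```

This is why the principle favors the fewest batteries with the closest norming dates: the norm-related error term stays small relative to the measurement error already present in any composite.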
There are times, however, when crossing more than two batteries is necessary to gain enough information to test hypotheses about cognitive strengths or weaknesses or to answer specific referral questions. For example, since Glr is not measured (or at least, not adequately) by most intelligence batteries, it is often necessary to supplement tests, such as the Wechslers, SB: IV, KAIT, K-ABC, and CAS, with tests from more than one additional battery to gain enough qualitatively different measures of Glr to constitute broad representation of this ability in assessment. Although crossing more than two batteries may not seem desirable from a psychometric standpoint, it is important to realize that "when cross-battery assessments are implemented systematically and adhere to the recommendations for development, use, and interpretation, the potential error introduced due to crossing norm groups is likely negligible and has far fewer implications than the error associated with the improper use and interpretation of cognitive ability performance associated with the traditional assessment approach (e.g., subtest analysis)" (Flanagan, McGrew, et al., 2000, p. 223, emphasis in original).
In summary, the pillars and guiding principles underlying the CHC Cross-Battery approach provide the necessary foundation from which to conduct comprehensive assessments of the broad CHC abilities that define the structure of intelligence in current psychometric theory and research. Preliminary studies have shown that assessments organized around CHC theory, following cross-battery principles and procedures (see Chapter 2), are valid and explain certain academic skills (e. g., reading decoding, reading comprehension) better than do assessments organized around traditional models (e. g., Wechsler models). These studies are summarized in the next section.
RESEARCH FOUNDATION OF CHC CROSS-BATTERY ASSESSMENT
The entire CHC Cross-Battery approach was built on research. Kaufman (2000) commented that this new approach to assessment and interpretation "is based on an impressive compilation and integration of research investigations" (p. xv). Specifically, the approach ensures that assessments are organized and interpreted according to the well-researched CHC theory of cognitive abilities. In addition, the classifications necessary to organize assessments according to CHC theory are either empirically based or the result of an expert consensus process. Thus, the CHC Cross-Battery approach rests on a solid research foundation. Notwithstanding, the validity of the assessment method was not fully evaluated until very recently.
Because the CHC Cross-Battery approach was formally introduced to the field only 2 years ago, little research on its utility is available. The few investigations that have been conducted using cross-battery data sets, however, are promising. Summaries of these investigations are organized around two important questions about the approach.
Does use of the CHC classifications and procedures of the CHC Cross-Battery approach result in valid measurement of CHC constructs? Preliminary studies suggest that the CHC clusters derived following cross-battery classifications and procedures are valid. For example, confirmatory factor analysis of WISC-R/WJ-R cross-battery data demonstrated that the data fit a seven-factor CHC model well, and significantly better than a traditional three-factor WISC-R model. In a similar investigation of the cross-battery principles and procedures for organizing tests in assessment, Mascolo (2001) demonstrated the configural invariance of Flanagan's seven-factor structural model in an independent cross-battery data set. Likewise, Keith, Kranzler, and Flanagan (in press) showed the configural invariance of the same seven-factor CHC model in an independent evaluation of CAS/WJ III data. Thus, when the WISC-R, WISC-III, and CAS were supplemented with select tests from the WJ-R/WJ III in a systematic manner, following the steps of the CHC Cross-Battery approach, the resultant CHC structural model underlying these data sets was supported by, and indeed consistent with, the extant factor-analytic cognitive abilities research. In addition, these cross-battery data fit a contemporary seven-factor CHC model better than competing traditional models of the structure of intelligence. To demonstrate fully the utility of the CHC Cross-Battery approach, it will be necessary to cross-validate these findings and to conduct similar research with intelligence batteries other than the WJ-R/WJ III, following cross-battery principles and procedures.
Do CHC cross-battery assessments provide a better understanding of academic skills than traditional (Wechsler Scale) assessments? Preliminary research suggests that assessments organized around cross-battery principles and procedures lead to better prediction of academic skills as well as to a more accurate description of the specific cognitive abilities that contribute to the explanation of specific academic achievements. For example, Flanagan (in press) found that the general ability (or g) factor underlying a WISC-R-based cross-battery CHC model accounted for substantially more variance in reading achievement (approximately 25%) than did the g factor underlying a more traditional three-factor Wechsler model (Verbal Comprehension [VC], Perceptual Organization [PO], Freedom From Distractibility [FFD]). Similarly, Hanel (2001) found that the g factor underlying a WISC-III-based cross-battery model accounted for substantially more variance in reading achievement than did the more frequently interpreted four-factor WISC-III model (i.e., VC, PO, FFD, and Processing Speed). In addition, both Flanagan and Hanel found that when assessments were organized around the strong CHC model, specific cognitive abilities such as Gc, Ga, and Gs explained a significant portion of the variance in reading achievement beyond that accounted for by g. Their findings are consistent with the g/specific-abilities literature (e.g., Keith, 1999; McGrew et al., 1997; Vanderwood et al., 2000) and suggest that these abilities may be particularly important to assess, in addition to general ability, in young children referred for reading problems. Following the cross-battery principles and procedures will ensure that these abilities are represented adequately in assessment.
These initial validity studies demonstrated that applying the CHC Cross-Battery approach to the WISC-R, WISC-III, and CAS resulted in structurally valid CHC measures. Additional validity support for the resultant cross-battery CHC constructs was demonstrated through their significant (and expected) relations to external measures (viz., general and specific reading abilities; see Flanagan, in press, for details). Although these studies so far offer only limited evidence regarding the validity of the CHC Cross-Battery approach, their findings are nonetheless quite promising. Clearly, much more validity evidence will be necessary to substantiate fully the benefits and utility of the approach. Given its strong research and theoretical foundations and the results of these initial studies, however, there is every reason to expect that future research will be consistent with what has already been found. As supporting validity evidence continues to emerge, the unique benefits the approach provides (i.e., valid measurement of constructs using data from crossed batteries and better prediction of academic skills through accurate measurement of related cognitive abilities) are likely to make it a valuable, if not indispensable, tool in cognitive assessment.
COMPREHENSIVE REFERENCES ON THE CHC CROSS-BATTERY APPROACH
The Intelligence Test Desk Reference (ITDR): Gf-Gc Cross-Battery Assessment (McGrew & Flanagan, 1998) and The Wechsler Scales and Gf-Gc Theory: A Contemporary Approach to Interpretation (Flanagan, McGrew, et al., 2000) currently provide the most detailed information on the development and implementation of the CHC Cross-Battery approach. In addition, these books provide a detailed description of the specific steps necessary to organize a cognitive ability evaluation according to current theory and research. While the former book provides the most comprehensive description of the psychometric, theoretical, content, and interpretive features of all current intelligence tests as well as numerous special-purpose tests (information necessary to make informed decisions about supplementing a given intelligence battery), the latter book provides a comprehensive and defensible organizational framework for interpreting cross-battery data using the Wechsler Intelligence Scales.
Table of Contents
Series Preface xiii
One Overview 1
Two How to Organize a Cross-Battery Assessment Using Cognitive, Achievement, and Neuropsychological Batteries 45
Three How to Interpret Test Data 121
Four Cross-Battery Assessment for SLD Identification: The Dual Discrepancy/Consistency Pattern of Strengths and Weaknesses in the Context of an Operational Definition 227
Five Cross-Battery Assessment of Individuals From Culturally and Linguistically Diverse Backgrounds 287
Six Strengths and Weaknesses of the Cross-Battery Assessment Approach 351
Seven Cross-Battery Assessment Case Report 365
Appendix A CHC Narrow Ability Definitions and Task Examples 389
Appendix B CHC Broad and Narrow Ability Classification Tables for Tests Published Between 2001 and 2012 399
Appendix C Descriptions of Cognitive, Achievement, and Neuropsychological Subtests by CHC Domain 417
Appendix D Critical Values for Statistical Significance and Base Rate for Composites on Comprehensive Cognitive and Achievement Batteries 425
Appendix E Variation in Task Demands and Task Characteristics of Subtests on Cognitive and Neuropsychological Batteries 431
Marlene Sotelo-Dynega and Tara Cuskley
Appendix F Variation in Task Demands and Task Characteristics of Subtests on Achievement Batteries by IDEA Academic Area 439
Jennifer T. Mascolo
Appendix G Neuropsychological Domain Classifications 445
Appendix H Understanding and Using the XBA PSW-A v1.0 Software Program Tab by Tab 457
Appendix I Cognitive and Neuropsychological Battery-Specific Culture-Language Matrices 485
Appendix J Cross-Battery Assessment Case Reports 503
Jim Hanson, John Garruto, and Karen Apgar
Appendix K Eugene, Oregon, School District Integrated Model for Specific Learning Disability Identification 505
Appendix L Summary of the Expert Consensus Study for Determining CHC Broad and Narrow Ability Classifications for Subtests New to This Edition 517
Appendix M Criteria Used in XBA DMIA v2.0 for Follow-Up on a Two-Subtest Composite 527
Author Index 533
Subject Index 537
About the Authors 553
About the CD 555
The CD-ROM contains the full versions of all Appendices; three software programs that analyze data (Cross-Battery Assessment Data Management and Interpretive Assistant, Pattern of Strengths and Weaknesses Analyzer, and Culture-Language Interpretive Matrix); and a form (Evaluation and Consideration of Exclusionary Factors for SLD Identification).