Assessment for Reading Instruction, Second Edition (Paperback)
Combining essential background knowledge with hands-on tools, this practical resource and text provides a detailed roadmap for conducting multidimensional reading assessment. The authors' research expertise and extensive classroom experience are reflected on every page. Presented are effective ways to evaluate K-6 students' spelling, word recognition, fluency, comprehension, strategic knowledge, and more. Aided by lively case examples, preservice and inservice teachers and reading specialists learn to tailor assessment to the needs of each child and use results strategically to inform instruction. The concluding chapter offers useful information on preparing a reading clinic report. Special features of this accessible 8 1/2" x 11" volume include numerous figures, tables, and sample assessment instruments, many with permission to photocopy.
Publisher: Guilford Publications, Inc.
Series: Solving Problems in the Teaching of Literacy Series
Edition description: Second Edition
Product dimensions: 6.00(w) x 1.25(h) x 9.00(d)
Read an Excerpt
Assessment for Reading Instruction
By Michael C. McKenna and Steven A. Stahl
The Guilford Press. Copyright © 2003 The Guilford Press.
All rights reserved.
There has always been an intense interest in reading comprehension assessment. After all, comprehension could be called the "bottom line" of reading. Measuring it provides an indicator of how well all of the subprocesses of reading are working together. Comprehension assessment is a somewhat controversial topic, in that no general agreement exists on how best to do it. An extreme point of view was voiced by Frank Smith (1988) in the fourth edition of his book Understanding Reading. In it he says, "Comprehension cannot be measured at all ... because it is not a quantity of anything" (p. 53). Most reading experts would certainly acknowledge that comprehension assessment raises important issues, but most would also agree that useful estimates can be reached of (1) a child's overall ability to comprehend, and (2) how well a child has comprehended a particular selection.
APPROACHES TO COMPREHENSION ASSESSMENT
Let's consider the major approaches to assessing comprehension and examine the strengths and limitations of each.
The most traditional method of testing reading comprehension is by asking questions. The use of questions offers great administrative flexibility, ranging from formal testing situations to class discussions. Questions allow teachers to focus on particular facts, conclusions, and judgments in which the teachers have an interest. By posing questions at various levels of thinking, a teacher can get a glimpse of how the child has processed a reading selection.
Types of Questions
There are many ways of categorizing questions. Bloom's (1969) taxonomy is sometimes used, for example. A far simpler approach is to think of questions in terms of levels of comprehension. We conventionally speak of three levels.
1. Literal questions require a student to recall a specific fact that has been explicitly stated in the reading selection. Such questions are easy to ask and answer, but they may reflect a very superficial understanding of content. Edgar Dale (1946), an eminent reading authority during the first half of the 20th century, once referred to literal comprehension as "reading the lines" (p. 1).
2. Inferential questions, like literal questions, have factual answers. However, the answers cannot be located in the selection. Instead the reader must make logical connections among facts in order to arrive at an answer. The answer to a question calling for a prediction, for example, is always inferential in nature even though we are uncertain of the answer, for the reader must nevertheless use available facts in an effort to arrive at a fact that is not stated.
Answers to inferential questions are sometimes beyond dispute and sometimes quite speculative. Let's say that a class has just read a selection on New Zealand. They have read that New Zealand is south of the equator and was colonized by Great Britain. If a teacher were to ask whether Auckland, New Zealand's largest city, is south of the equator, the answer would require inferential thinking. There is no dispute about the answer, but the selection does not specifically mention Auckland, and the students must infer its location in relation to the equator.
On the other hand, were the teacher to ask the students if English is spoken in New Zealand, the answer would be inferential as well as speculative. The mere fact that Britain colonized New Zealand does not guarantee that English is spoken there today. For all inferential questions, the reader must use facts that are stated to reach a conclusion about a fact that is not stated. For this reason, Dale (1946) described inferential comprehension as "reading between the lines" (p. 1).
3. Critical questions call upon students to form value judgments about the selection. Such judgments can never be characterized as right or wrong, accurate or inaccurate because these types of answers are not facts. They are evaluations arrived at on the basis of an individual's value system. Critical questions might target whether the selection is well written, whether certain topics should have been included, whether the arguments an author makes are valid, and whether the writing is biased or objective. Understandably, Dale (1946) equated critical comprehension with "reading beyond the lines" (p. 1). (Hint: A short-cut to asking a critical-level question is to insert the word should. Doing so always elevates the question to the critical level. Of course, there are other ways to pose critical questions, but this method is surefire).
A teacher's judgment of how well a child comprehends may depend, in part, on the types of questions asked. Thus the choice of question type can affect a child's performance during a postreading assessment. A student may well do better if asked questions that are entirely literal than if asked questions at a variety of levels. The issue of which type(s) of questions to include in any postreading assessment is therefore an important one. Perhaps the best advice is to ask the type(s) of questions that you would expect a child to be able to answer during the course of day-to-day classroom instruction.
A final issue concerning types of comprehension questions is whether to subdivide each of the three levels into specific skills. For example, the literal level is often seen as composed of skills involving sequences, cause-and-effect relationships, comparisons, character traits, and the like. Does it make sense to ask questions corresponding to each of these skills? Yes, as long as very little is made of the results. Skill-related questions can assure us that a range of comprehension skills is being developed, but when we attempt to compute scores for each skill, we often run into trouble. The problem is twofold. First, there are seldom enough questions for reliable measurement. Second, scores on specific skill tests tend to be almost perfectly correlated, suggesting that the skills are difficult to separate for assessment. In other words, a student who scores high on a test of literal sequences almost always scores high on a test of literal cause-and-effect relationships. It's hardly worth the effort to splinter comprehension to this extent.
Questions Based on Reading Dependency
Reading dependency (also called passage dependency) is the need to have read a selection in order to answer a particular comprehension question. Consider an example. The children have just read a story about a girl who brings a frog to school for Show and Tell. The frog jumps out of her hands and causes merry havoc in the classroom. The teacher then asks two comprehension questions:
1. "What did the girl bring to Show and Tell?"
2. "What color was the frog?"
These two questions are both literal. That is, they both require the students to respond with facts explicitly stated in the story. They differ considerably, however, in terms of their reading dependency. Even children with extensive experience participating in Show and Tell would be unlikely to predict the answer to the first question simply on the basis of experience. In short, it is necessary to have read the story in order to answer the question. The second question is another matter. Most children know that nearly all American frogs are green, and the children would not need to have read the passage in order to respond correctly. On the other hand, if the girl had brought some rare South American species that was, say, red and yellow, the same question would have been reading-dependent.
Which of these two questions tells the teacher more about whether the students comprehended the text? Clearly, questions that can be answered without having adequately comprehended a selection fail to assess reading comprehension. They may well be justified in the interest of conducting a worthwhile discussion, but teachers should not be misled into assuming that such questions would help them monitor their students' comprehension.
We have noted that even the questions in commercial test instruments tend to have problems regarding reading dependency. Many studies (e.g., Tuinman, 1971) have shown that when given only the questions and not the selections on which they are based, students do far better than chance would have predicted. This outcome implies that they are using prior knowledge to correctly answer some of the questions. The challenge in devising assessment instruments, or even informal approaches, is that it is difficult to determine what children are likely to know before they read a selection. This problem is especially troublesome with nonfiction text because of its factual content. Children already familiar with the content may find themselves with an unintended advantage in responding to postreading questions.
To better understand the concept of reading dependency, read the simple nonfiction example, "Crows," in Figure 7.1. The four comprehension questions that follow the passage represent four basic possibilities in regard to the degree of reading dependence reflected in the question. For most adults, the answer to question 1 lies not only in the passage but in their prior knowledge. This means that the question is not reading-dependent and is not a good indicator of reading comprehension. The answer to question 2, on the other hand, lies entirely within the adult's prior knowledge (in this case, prior experience) but not within the passage. The answer to question 3 lies in the passage but not in a typical adult's prior knowledge. This makes question 3 a reading-dependent one, and a much better indicator of comprehension. Finally, the answer to question 4 lies neither in prior knowledge (typically) nor in the passage. The issue of reading dependence boils down to a single guideline: If your intent is to assess reading comprehension, then your comprehension questions should target information that lies within the passage but that is not likely to lie within the student's prior knowledge. The Venn diagram used in Figure 7.1 may help conceptualize the four types of questions in relation to effective comprehension assessment.
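The four possibilities represented in Figure 7.1 amount to a simple two-by-two decision rule: where does the answer lie? The short sketch below makes that rule explicit. The function name and labels are illustrative, not taken from the book.

```python
def question_type(in_passage, in_prior_knowledge):
    """Classify a comprehension question by where its answer lies,
    following the reading-dependency guideline: a good comprehension
    item is answerable from the passage but not from what the
    student is likely to know already."""
    if in_passage and not in_prior_knowledge:
        return "reading-dependent: a good comprehension item"
    if in_passage and in_prior_knowledge:
        return "answerable from prior knowledge alone: weak item"
    if in_prior_knowledge:
        return "prior knowledge only, not in passage: not a comprehension item"
    return "answerable from neither: unfair item"
```

Note that the classification depends on the reader as much as the question: the same item can fall in different cells for a novice and an expert.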
Keep in mind that reading dependence is related to how much the reader knows in advance. What if question 3 were addressed to an ornithologist? Because an expert on birds could probably answer the question without needing to read the passage, the very same question is no longer a reading-dependent one. It all depends on whom you're asking. This is why the problem of reading dependence sometimes gets the better of commercial test developers. It's hard to predict what students, in general, may or may not know!
Readability of Questions
A third aspect of reading comprehension questions involves their readability. It is possible for the question itself to be harder to comprehend than the selection on which it is based. Written questions should be kept as simple as possible. Their difficulty level should certainly be no higher than that of the selections on which they are based, and ideally it should be simpler. Consider, in particular, the vocabulary you use in framing a question and also the complexity of your sentence structures. The KISS method (Keep It Simple, Stupid!) has much to recommend it when formulating comprehension questions.
Cloze Testing

Cloze testing involves deleting words from a prose selection and asking students to replace them on the basis of the remaining context. The ability to provide logical replacement words is thought to indicate the extent to which a student is able to comprehend the material. Cloze testing has three important advantages. First, it can be administered in a group setting, once students have been introduced to its rather unusual format. Second, it does not require comprehension questions. This means that issues such as reading dependence and question readability do not arise. Third, cloze scores correlate highly with more conventional methods of assessing comprehension, such as asking questions.
On the other hand, cloze testing has significant limitations. Its strange format can confuse some students. Spelling and fine motor limitations can prevent students from displaying what they actually comprehended. (In fact, cloze testing is rarely administered below fourth grade for this reason.) Finally, research indicates that cloze assessments, as unusual as the format may appear, tend to assess comprehension at only a very low level. A student's ability to integrate information across sentences and paragraphs is not readily tapped by cloze items.
Figure 7.2 provides streamlined suggestions for constructing, administering, and scoring a cloze test. The sample cloze test in Form 7.1 (p. 183) serves as a model of what a cloze test should look like in its conventional format. You might try your hand at taking this test and then scoring it using the answer key in Figure 7.3. In scoring the test, be sure to give yourself credit only for verbatim responses, that is, the exact word that was deleted in each case. You may be tempted to award credit for synonyms and other reasonable responses, but this is a temptation that must be resisted. There are four reasons for accepting only verbatim replacements:
1. Verbatim scoring is more objective than the policy of awarding credit for synonyms. Otherwise, different scorers would tend to produce different scores.
2. Verbatim scoring leads to tests that are far easier to grade. Imagine how long it would take if you had to stop and carefully consider the semantic acceptability of every wrong answer.
3. Research has shown convincingly that verbatim scoring correlates very highly with scores based on accepting synonyms and other reasonable responses. The only thing accomplished by awarding credit for synonyms is to inflate scores. The rank ordering of students in a classroom is not likely to be changed.
4. Scoring criteria are based on verbatim scoring. If you give credit for synonyms and other logical responses, it will be nearly impossible to interpret the results. This reason alone is sufficient to justify giving credit for verbatim replacements only. The multitude of studies establishing the scoring criteria given in Figure 7.2 has assessed a variety of populations, including elementary students, middle and secondary students, college students, vocational-technical students, and even various special-education categories. If your score on the sample cloze test was 60% or higher, it is reasonable to conclude that the passage is at your independent reading level.
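The workflow just described, constructing a cloze passage by regular deletion and then scoring replacements verbatim against the 60% criterion, can be sketched in code. This is an illustrative sketch, not the procedure from Figure 7.2: the every-fifth-word deletion interval and the intact first sentence are conventional choices, and the only cutoff applied is the one stated above (60% or higher indicates the independent level).

```python
def make_cloze(text, interval=5, blank="________"):
    """Build a cloze passage by deleting every `interval`-th word
    after an intact first sentence. Returns the passage and the
    answer key of deleted words, in order."""
    first, _, rest = text.partition(". ")
    intro = first + ". "
    passage, key = [], []
    for i, word in enumerate(rest.split(), start=1):
        if i % interval == 0:
            key.append(word)
            passage.append(blank)
        else:
            passage.append(word)
    return intro + " ".join(passage), key


def score_cloze(responses, answer_key):
    """Score with verbatim scoring only: a response earns credit
    only if it exactly matches the deleted word (ignoring case and
    surrounding whitespace); synonyms receive no credit. Returns
    the percentage correct and whether it meets the 60%
    independent-level criterion."""
    correct = sum(
        r.strip().lower() == k.strip().lower()
        for r, k in zip(responses, answer_key)
    )
    pct = 100 * correct / len(answer_key)
    return pct, pct >= 60
```

For example, a reader who replaces three of five deletions verbatim scores 60% and would be judged to be at the independent level for that passage; a synonym for one of the remaining two deletions would not raise that score.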
Retellings

Comprehension is sometimes assessed by asking a student to retell orally the content of the reading selection. The degree of detail provided and the general coherence of the retelling are used to gauge comprehension.
Excerpted from Assessment for Reading Instruction by Michael C. McKenna and Steven A. Stahl. Copyright © 2003 by The Guilford Press. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
1. Introduction to Reading Assessment
2. General Concepts of Assessment
3. Informal Reading Inventories and Other Measures of Oral Reading
4. Emergent Literacy
5. Word Recognition and Spelling
8. Strategic Knowledge
9. Affective Factors
10. Preparing a Reading Clinic Report
Appendix. Case Studies
K–8 classroom teachers and reading specialists; upper-level undergraduates and graduate students in education. Serves as a text in such courses as Reading Assessment, Reading Diagnosis and Instruction, Assessment of Reading Problems, and Reading Difficulties.