Reclaiming Accountability: Improving Writing Programs through Accreditation and Large-Scale Assessments

Reclaiming Accountability brings together a series of critical case studies of writing programs that have planned, implemented, and/or assessed the impact of large-scale accreditation-supported initiatives. The book reimagines accreditation as a way to leverage institutional or programmatic change.

Contributions to the volume are divided into three parts. Part 1 considers how specialists in composition and rhetoric can work most productively with accrediting bodies to design assessments and initiatives that meet requirements while also helping those agencies to better understand how writing develops and how it can most effectively be assessed. Parts 2 and 3 present case studies of how institutions have used ongoing accreditation and assessment imperatives to meet student learning needs through programmatic changes and faculty development. They provide concrete examples of productive curricular (part 2) and instructional (part 3) changes that can follow from accreditation mandates while offering guidance for navigating challenges and pitfalls that WPAs may encounter within shifting and often volatile local, regional, and national contexts.

In addition to providing examples of how others in the profession might approach such work, Reclaiming Accountability addresses assessment requirements beyond those in the writing program itself. It will be of interest to department heads, administrators, writing program directors, and those involved with writing teacher education, among others.

Contributors: Linda Adler-Kassner, William P. Banks, Remica Bingham-Risher, Melanie Burdick, Polina Chemishanova, Malkiel Choseed, Kyle Christiansen, Angela Crow, Maggie Debelius, Michelle F. Eble, Jonathan Elmore, Lorna Gonzalez, Angela Green, Jim Henry, Ryan Hoover, Rebecca Ingalls, Cynthia Miecznikowski, Susan Miller-Cochran, Cindy Moore, Tracy Ann Morse, Joyce Magnotto Neff, Karen Nulton, Peggy O’Neill, Jessica Parker, Mary Rist, Rochelle Rodrigo, Tulora Roeckers, Shirley K. Rose, Iris M. Saltiel, Wendy Sharer, Terri Van Sickle, Jane Chapman Vigil, David M. Weed



Product Details

ISBN-13: 9781607324355
Publisher: Utah State University Press
Publication date: 04/06/2016
Sold by: Barnes & Noble
Format: eBook
Pages: 341
File size: 2 MB

About the Author

Wendy Sharer, Tracy Ann Morse, Michelle F. Eble, and William P. Banks are writing faculty at East Carolina University. When their program faced reaccreditation in 2013, they chose to address the process as an opportunity to garner institutional support for revisions to their composition and writing across the curriculum programs.

Read an Excerpt

Reclaiming Accountability

Improving Writing Programs Through Accreditation and Large-Scale Assessments


By Wendy Sharer, Tracy Ann Morse, Michelle F. Eble, William P. Banks

University Press of Colorado

Copyright © 2016 University Press of Colorado
All rights reserved.
ISBN: 978-1-60732-435-5



CHAPTER 1

Assessing for Learning in an Age of Comparability

Remembering the Importance of Context


CINDY MOORE, PEGGY O'NEILL, AND ANGELA CROW


As compositionists attempt to navigate the ever-changing landscape of contemporary higher education accreditation, we can draw on our rich history of using external assessment mandates to our advantage. Given recent calls for more standardized assessment methods, and developments in processing, sharing, and analyzing big data sets based on those standardized methods, it is imperative that we work with accreditors, employing our expertise, as we have in the past, to develop approaches to assessing students and teachers that improve learning and are consistent with our disciplinary theories about writing, teaching, and assessment. Below, we review compositionists' responses to earlier assessment mandates and illustrate how a knowledge of such work can help current program administrators and faculty negotiate an accreditation context increasingly influenced by public calls for higher ed accountability and the new technologies that offer a means to achieve it.


Learning from the Past: Writing Assessment and Accountability in Context

Because of the consequences of assessment for both teaching and learning, writing administrators and faculty have, for decades, seen externally inspired writing assessment initiatives as opportunities to document student achievement, gather information about program strengths and challenges, and use that information to improve curriculum and instruction. The literature in our field is replete with articles and books that illustrate our willingness — and ability — to make assessment mandates serve our more specific interests while also satisfying requirements imposed by state legislators and accreditors.

During the mid- to late twentieth century, as the college-bound student population diversified, calls for writing placement and proficiency tests increased, and composition faculty became more involved in writing assessment. Understanding "the sociopolitical implications of these tests for students and teachers" (Greenberg 1982, 743), many compositionists worked throughout the 1970s and 1980s to ensure that externally imposed writing assessments were informed by current theory and research and served the needs of their particular students (Cooper and Odell 1977; Gere 1980; Greenberg 1982; White 1984). For example, one study investigated "the effect of different kinds of testing upon the distribution of scores for racial minorities" (White and Thomas 1981, 276), examining the consequences of a standardized multiple-choice test of usage and a local exam designed by California State University faculty in conjunction with testing experts. The study showed that scores for white students were similar for the standardized exam and the essay portion of the CSU exam, but the results for black and Latino students were quite different: the standardized multiple-choice exam "rendered a much more negative judgment of these students' use of English than did the evaluators of their writing" (1981, 281). White and Thomas argued that the results cast "some real question upon the validity of usage testing as an indicator of writing ability" (1981, 280).

Through research such as this, writing scholars realized the power of assessments to influence teaching and learning and wanted to minimize the chances that "writing teachers [would] find themselves administering writing proficiency tests that [bore] little relationship to their perception of college-level writing ability" and that administrators would use "the results of a test they consider inadequate or inappropriate" for placement or promotion (Greenberg 1982, 367). While such efforts were not directly linked to accreditation demands, they highlighted the importance of balancing the needs of a particular local context with outside interests, for example, university administrators or policymakers (Greenberg 1982).

In the 1980s, one of the most influential responses to an external assessment mandate was the portfolio assessment system developed by Pat Belanoff and Peter Elbow to replace a university-mandated proficiency exam. Belanoff and Elbow (1986) argued that the portfolio assessment, which involved teachers working in groups for norming during the semester and then at the end for the formal portfolio evaluation, promoted better teaching and more learning because, in part, the teachers developed shared evaluation criteria and the group discussions informed their work with students in their classrooms. The portfolio program Belanoff and Elbow designed kick-started the portfolio movement in college composition, a movement that shaped writing assessment, teaching, and research. Though Belanoff and Elbow did not frame the portfolio assessment in terms of accreditation, it was part of a university assessment mandate that undoubtedly would have been reported to accreditors.

While many compositionists focused on research and development of individual placement and proficiency tests, some compositionists created more comprehensive writing programs in response to university assessment mandates. The best-known example of this type of work was done by Richard Haswell (2001) and his colleagues at Washington State University. Haswell led the effort to build a multi-tiered, integrated writing assessment program in response to a university general education program that mandated assessment of the composition program. Their "adventure into writing assessment" resulted in a program that encompassed placement testing upon entrance, portfolio exit assessments for first-year composition, a WAC requirement, and a rising junior portfolio that combined an impromptu essay, a reflective piece, and a collection of papers produced for courses across the curriculum. At all levels writing teachers were involved in the assessments, and a comprehensive writing center provided support for students, including required small-group sessions for those who did not pass the junior portfolio (Haswell 2001).

The WSU writing program, which received a commendation from its accrediting agency in 1999, illustrates some of the basic principles of writing assessment documented in the composition scholarship: (1) assessing and teaching writing are closely linked, and (2) the goal of writing assessment is to facilitate students' development as writers and improve students' writing. Such principles not only influenced how compositionists responded to earlier assessment mandates, but they offered theoretically grounded guidance for negotiating an accreditation landscape that became informed by similar principles. Since the mid-1990s, accreditation agencies have been focusing more on how university resources, including those dedicated to instruction and assessment, are being used to promote student learning. Though this shift from a concern with "inputs" (such as faculty expertise and per-student expenditure) to the "outcomes" of such investments reflected educational theory and research, it was also a response to growing public pressure to ensure that students were acquiring the knowledge and skills promised by higher ed institutions. Because the only way to really show that learning has occurred is through work completed by students, accreditors began asking schools to demonstrate that students had met learning outcomes with direct assessment evidence. Writing, as compositionists have long understood, is one of the best, most direct methods for making student learning visible. Thus, as accrediting agencies turned their attention to what and how much students were learning, results from writing assessments began to figure more prominently in accreditation processes.

As they did when faced with earlier placement and proficiency mandates, writing specialists have used changing accreditation requirements to their advantage, documenting student learning and program effectiveness for their own purposes while also helping their institutions meet accreditation standards (Carter 2003; Walvoord 2004). One of the best examples of such work has been documented by John Bean and his colleagues at Seattle University who developed discourse-based assessments to respond in part "to pressures from the Northwest Association of Schools and Colleges" (Bean, Carrithers, and Earenfight 2005, 6). Bean used his expertise in WAC to work with disciplinary faculty across the campus to (1) identify the knowledge and skills that graduating students in their fields should have developed, (2) develop an embedded assignment to collect direct evidence of the students' performance of particular skills or knowledge, and (3) use this information to improve teaching and learning, so students would meet the desired learning outcomes. While Bean's work was a direct outgrowth of an accreditation visit, other recent work has reflected a more proactive approach to accreditation concerns (Carter 2003; Broad et al. 2009; Adler-Kassner and O'Neill 2010).

Composition, then, had a history of using assessment to improve student learning before it was emphasized so much by accreditors. Likewise, we understood the link between learning assessment and teaching improvement before accreditors made the connection explicit. For example, White (1985) noted 30 years ago that "the assessment of writing and the teaching of writing were intimately related" (xv). Advocates of using teachers to score student writing samples argued that "it brings together English teachers to talk about the goals of writing instruction" and that the teachers "take away from the experience much that is valuable to their teaching" (White 1984, 408). With the proliferation of portfolio evaluation in the 1980s and 1990s, the link between student assessment and faculty development — and, to some extent, faculty evaluation — became even more prominent (Hamp-Lyons and Condon 2000, xv). Therefore, when accreditors began in the 1990s to formalize the link between learning and teaching by adding faculty-performance criteria focused on student outcomes and recommending faculty-development workshops as a way to use data about learning to "close the loop" of assessment, compositionists were ready. For example, in the early 2000s Bean began offering discipline-specific workshops that helped faculty examine their courses, identify learning outcomes, develop effective teaching practices, and design course-embedded assessments in response to the accreditors' expectations (Carrithers, Ling, and Bean 2008). And while accreditors have tended not to enforce standards that emphasize the importance of aligning instruction with "the learning goals of academic programs" and ensuring that faculty evaluation results in "reliable" data that are "used to improve instruction" (Western Association of Schools and Colleges 2011, 5.16 and 5.18), when they begin to do so, we have an impressive history to draw upon.
Our field has long embraced, for example, the multiple-method faculty evaluation approaches endorsed by a growing number of accreditors concerned with validity and reliability of faculty assessment. We have argued for years that "teachers should never be evaluated only by student perceptions," nor by a single "class visit" (White 1989, 168), and that the teaching portfolio — or collections of materials akin to it — is the best method for both evaluating instruction and inspiring real improvement (Anson 1994; Minter and Goodburn 2002).

One reason compositionists have been able to see the relationship with accreditation productively is that our discipline has always prioritized student learning. Further, as accreditors shifted their attention to documentation of student learning, their principles and criteria began to match with our disciplinary values. For example, our approach to student-learning assessment as "context-sensitive," "rhetorically based," "accessible," and "theoretically consistent" (O'Neill, Moore, and Huot 2009, 56) closely aligns with accreditation directives to engage in "ongoing systematic collection and analysis of meaningful, assessable, and verifiable data" (NWCCU), collected through "multiple direct and indirect measures" (HLC). Like our accrediting agencies, we see assessment as a means to "fulfill" our school's "mission" and "improve instruction" on our campuses (NEASC). In addition, at least since the 1990s our understanding of important assessment constructs such as validity and reliability has mirrored those of our accreditors. Consistent with writing assessment theories (Smith 1993; Huot 2002; Broad 2003; Conference on College Composition and Communication 2009), accrediting agencies agree that establishing validity of assessment results requires collecting evidence about content, process, consequences, and local and disciplinary contexts. Likewise, agencies view reliability as not simply a matter of consistency and correlations, but rather the result of an argument that requires, among other things, attention to purpose, context, and ultimate use, a view consistent with that of writing assessment experts (O'Neill 2011).

However, there is evidence to suggest that our ability to work productively with accreditors is changing. Though regional accrediting agencies have traditionally positioned themselves as peer collaborators whose priority is to help institutions articulate the quality of their programs, they are feeling pressure to respond to a public that has grown cynical not only about what we teach and why but about the basic value of a college degree. What these developments mean for accreditors is that they should be prepared for Congress to "further regulate accreditation and ... assert further oversight" (CHEA 2012) in a way that satisfies a public desire for some degree of standardization across higher ed institutions. Pulled between responsibility to schools and responsiveness to an increasingly hostile public, accreditors are starting to say and do things that appear at odds with their stated commitment to honor individual institutional histories and missions. Within this context, it is reasonable for compositionists to wonder what our local assessments will mean, how the results of those assessments will be used, and whether we can maintain our focus on what the students in our classrooms need so they can learn and thrive.


From Accountability to Comparability: The Threat to Context-Based Assessment

Historically, public criticisms of education and calls for accountability tended to be directed toward K–12 educators with little attention to higher education. Since the mid-1980s, though, such criticisms have expanded to include post-secondary educators, as reflected in debates over the periodic reauthorizations of the Higher Education Act, which have articulated goals for student access to college and responsible stewardship of the tuition and taxpayer dollars being spent to meet those goals. This more general sentiment that colleges and universities should be held publicly accountable gained significant traction in 2006, with the release of A Test of Leadership, a report compiled by a special commission appointed by then US Secretary of Education, Margaret Spellings (Miller et al. 2006). Known as the Spellings Report, the document questioned not only the soaring costs of education but the quality provided to students and taxpayers who, "despite increased attention to student learning" by schools and accreditors, have no way of knowing "how much students learn in colleges or whether they learn more at one college or another" (Miller et al. 2006, 13). Among the commission's recommendations were the introduction of "innovative means to control costs" and "creation of a consumer-friendly information database with reliable information," including college costs, admissions, completion and graduation rates, and "eventually" student achievement of learning outcomes as measured by standardized achievement assessments like the Collegiate Learning Assessment (CLA) (26–37).

More recently, this drive for accountability through collection and publication of comparable data was captured by President Obama's State of the Union address, in which he criticized colleges and universities for not providing "our citizens with the skills they need to work harder, learn more, and reach higher" and remarked on the "soaring cost of higher education," which "taxpayers cannot continue to subsidize" (Obama 2013). In order to help schools "do their part to keep costs down," he explained, the Department of Education had designed "a new 'College Scorecard,'" to allow families to compare the costs, graduation rates, loan default rates, median borrowing, and employment rates of colleges and universities, so they can determine which schools offer "the most bang for [their] educational buck" (Obama 2013).


(Continues...)

Excerpted from Reclaiming Accountability by Wendy Sharer, Tracy Ann Morse, Michelle F. Eble, William P. Banks. Copyright © 2016 University Press of Colorado. Excerpted by permission of University Press of Colorado.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Introduction: Accreditation and Assessment as Opportunity - Wendy Sharer, Tracy Ann Morse, Michelle F. Eble, and William P. Banks

Part One: Laying the Foundations—Educating and Learning from Accrediting Bodies
1. Assessing for Learning in an Age of Comparability: Remembering the Importance of Context - Cindy Moore, Peggy O’Neill, and Angela Crow
2. QEP Evaluation as Opportunity: Teaching and Learning through the Accreditation Process - Susan Miller-Cochran and Rochelle Rodrigo
3. Understanding Accreditation’s History and Role in Higher Education: How It Matters to College Writing Programs - Shirley K. Rose

Part Two: Curriculum and Program Development through Assessment and Accreditation
4. Going All In: Creating a Community College Writing Program through the QEP and Reaccreditation Process - Jonathan Elmore and Teressa Van Sickle
5. Moving Forward: What General Studies Assessment Taught Us about Writing, Instruction, and Student Learning - Jessica Parker and Jane Chapman Vigil
6. Making Peace with a “Regrettable Necessity”: Composition Instructors Negotiate Curricular Standardization - David Weed, Tulora Roeckers, and Melanie Burdick
7. A Tool for Program Building: Programmatic Assessment and the English Department at Onondaga Community College - Malkiel Choseed
8. Centering and De-Centering Assessment: Accountability, Accreditation, and Expertise - Karen Nulton and Rebecca Ingalls
9. Using Accountability to Garner Writing Program Resources, Support Emerging Writing Researchers, and Enhance Program Visibility: Implementing the UH Writing Mentors during WASC Reaccreditation - Jim Henry
10. SEUFolios: A Tool for Using ePortfolios as Both Departmental Assessment and Multimodal Pedagogy - Ryan S. Hoover and Mary Rist

Part Three: Faculty Development through Assessment and Accreditation
11. Write to the Top: How One Regional University Made Writing Everybody’s Business - Polina Chemishanova and Cynthia Miecznikowski
12. “Everybody Writes”: Accreditation-Based Assessment as Professional Development at a Research Intensive University - Linda Adler-Kassner and Lorna Gonzalez
13. A Funny Thing Happened on the Way to Assessment: Lessons from a Thresholds-Based Approach - Maggie Debelius
14. Faculty Learning Outcomes: The Impact of QEP Workshops on Faculty Beliefs and Practices - Joyce Neff and Remica Bingham-Risher
15. From the Outside In: Creating a Culture of Writing through a QEP - Angela Green, Iris Saltiel, and Kyle Christiansen

About the Authors
Index