Measuring College Learning Responsibly: Accountability in a New Era
By Richard J. Shavelson
Stanford University Press Copyright © 2010 Board of Trustees of the Leland Stanford Junior University
All rights reserved. ISBN: 978-0-8047-6120-8
Chapter One Assessment and Accountability Policy Context
ONE MEASURE OF THE IMPACT of a national commission report is that it stirs debate and changes behavior. Most such reports, however, arrive with great fanfare and exit almost immediately, leaving hardly a trace. The report of former U.S. Secretary of Education Margaret Spellings' Commission on the Future of Higher Education, A Test of Leadership: Charting the Future of U.S. Higher Education, is an exception to this rule (www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf). It spurred and continues to spur debate; it has demonstrably changed behavior.
This chapter sets the policy context for the quest to assess undergraduates' learning and hold higher education accountable. What follows is a characterization of the Spellings Commission's recommendations and those of professional associations for a new era of accountability, along with academics' critiques of the proposals. The chapter then sketches some of the major issues underlying assessment and accountability and concludes with a vision of a new era in which learning is assessed responsibly within the context of an accountability system focused on teaching and learning improvement, while at the same time informing higher education's various audiences.
Spellings Commission Findings and Recommendations
While praising the accomplishments of American higher education, the Spellings Commission said that the "system" had become complacent. "To meet the challenges of the 21st century, higher education must change from a system primarily based on reputation to one based on performance. We urge the creation of a robust culture of accountability and transparency throughout higher education" (p. 21). The Commission considered "improved accountability" (p. 4) the best instrument for change, with colleges and universities becoming "more transparent about cost, price and student success outcomes" and "willingly shar[ing] this information with students and families" (p. 4).
The Commission found fault with higher education in six areas; the three most pertinent here are:
Learning: "The quality of student learning at U.S. colleges and universities is inadequate and, in some cases, declining" (p. 3).
Transparency and accountability: There is "a remarkable shortage of clear, accessible information about crucial aspects of American colleges and universities, from financial aid to graduation rates" (p. 4).
Innovation: "Numerous barriers to investment in innovation risk hampering the ability of postsecondary institutions to address national workforce needs and compete in the global marketplace" (p. 4).
Student learning was at the heart of the Commission's vision of a transparent, consumer-oriented, comparative accountability system. Such a system would put faculty "at the forefront of defining educational objectives ... and developing meaningful, evidence-based measures" (p. 40) of the value added by a college education. The goal was to provide information to students, parents, and policy makers so they could judge quality among colleges and universities. In the Commission's words (p. 4):
Student achievement, which is inextricably connected to institutional success, must be measured by institutions on a "value-added" basis that takes into account students' academic baseline when assessing their results. This information should be made available to students, and reported publicly in aggregate form to provide consumers and policymakers an accessible, understandable way to measure the relative effectiveness of different colleges and universities.
The Commission was particularly tough on the current method of holding higher education accountable: accreditation. "Accreditation agencies should make performance outcomes, including completion rates and student learning, the core of their assessment as a priority over inputs or processes" (p. 41). The Commission recommended that accreditation agencies (1) provide comparisons among institutions on learning outcomes, (2) encourage progress and continual improvement, (3) increase quality relative to specific institutional missions, and (4) make this information readily available to the public.
Higher Education Responds to the Commission's Report
At about the same time that the Commission released its report, higher-education associations, anticipating the Commission's findings and recommendations and wanting to maintain control of their constituent institutions' destinies, announced their own take on the challenges confronting higher education. In a "Letter to Our Members: Next Steps," the American Council on Education (ACE), American Association of State Colleges and Universities (AASCU), American Association of Community Colleges (AACC), Association of American Universities (AAU), National Association of Independent Colleges and Universities (NAICU), and the National Association of State Universities and Land-Grant Colleges (NASULGC) enumerated seven challenges confronting higher education (www.acenet.edu/AM/Template.cfm?Section=Home&CONTENTID=18309&TEMPLATE=/CM/ContentDisplay.cfm):
Expanding college access to low-income and minority students
Keeping college affordable
Improving learning by utilizing new knowledge and instructional techniques
Preparing secondary students for higher education
Increasing accountability for educational outcomes
Internationalizing the student experience
Increasing opportunities for lifelong education and workforce training
Perhaps the most astonishing "behavior change" came from AASCU and NASULGC. These organizations announced the creation of the Voluntary System of Accountability (VSA). Agreeing with the Spellings Commission on the matter of transparency, they created the VSA to communicate information on the undergraduate student experience through a common web reporting template or indicator system, the College Portrait. The VSA, a voluntary system focused on four-year public colleges and universities (www.voluntarysystem.org/index.cfm), is designed to do the following:
Demonstrate accountability and stewardship to the public
Measure educational outcomes to identify effective educational practices
Assemble information that is accessible, understandable, and comparable
Of course, not all responses to the Commission's report and the associations' letter were positive or reflected behavior change. Both the report and the letter were roundly criticized. Critics rightly pointed out that the proposals did not directly address the improvement of teaching and learning but focused almost exclusively on the external or summative function of accountability.
The recommendation for what appeared to be a one-size-fits-all standardized assessment of student learning by external agencies drew particular ire (but see Graff & Birkenstein, 2008). To academics, any measure that assessed the learning of all undergraduates was simply not feasible, or would merely tap general ability, and the SAT and GRE were already available to do that. Moreover, it was not possible to reliably measure a campus's value added. Finally, cross-institutional comparisons amounted to comparing apples and oranges; such comparisons were nonsensical and useless for improving teaching and learning.
The critics, moreover, pointed out that learning outcomes in academic majors varied, and measures were needed at the department level. If outcomes in the majors were to be measured, these measures should be constructed internally by faculty to reflect the campus's curriculum. And a sole focus on so-called cognitive outcomes would leave out important personal and social responsibility outcomes such as identity, moral development, resilience, interpersonal and intercultural relations, and civic engagement.
The report had failed, in the critics' view, to recognize the diversity of higher-education missions and students served. It had not recognized, but rather intruded upon, the culture of academe, in which faculty members are responsible for curriculum, assessment, teaching, and learning. The higher-education system was just too complex for simple accountability fixes. Horse-race comparisons of institutions at best would mislead the public and policy makers, and at worst would have perverse effects on teaching and learning at diverse American college and university campuses.
Assessment and Accountability in Higher Education
The Commission report and the multiple and continuing responses to it set the stage for examining assessment and accountability in higher education in this text. The focus here is on accountability, and in particular on the assessment of student learning in accountability. This is not to trivialize the other challenges identified by the Commission or by the professional higher-education organizations. Rather, the intent is to tackle one of the three bottom lines of higher education: student learning, the hardest outcome of all to get a good handle on. (The other two are research and service.)
As we saw, there is a tug-of-war going on today, as in the past, among three forces: policy makers, "clients," and colleges and universities. The tug-of-war reflects a conflict among these "cultures." The academic culture traditionally focuses on assessment and accountability for organizational and instructional improvement through accreditation, eschewing external scrutiny. "Clients," meaning students and their parents, governmental agencies, and businesses, rely on colleges and universities for education, training, and research. They want comparative information about the relative strengths and weaknesses among institutions in order to decide where to invest their time and economic resources. And policy makers are held responsible by their constituencies for ensuring high-quality education. Consequently, policy makers need to know how well campuses are meeting their stated missions in order to assure the public. Reputation, input, and process information is no longer adequate for this purpose. As the Commission noted, "Higher education must change from a system primarily based on reputation to one based on performance" (p. 21).
All of this raises questions such as, "What do we mean by student learning?" "What kinds of student learning should higher education be held accountable for?" "How should that learning be measured?" "Who should measure it?" And "How should it be reported, by whom, to whom, and with what consequences?"
The Commission's report and its respondents also raised questions about the nature of accountability. The Commission took a client-centered perspective: transparency of performance indicators, with intercampus comparative information for students and parents. Four-year public colleges and universities have, in their most far-reaching response, the VSA, embraced this perspective.
The Commission's vision is shared by the policy community. The policy community's compact with higher education has been rocked by rising costs, decreasing graduation rates, and a lack of transparency about student learning and value added. No longer are policy makers willing to provide resources to colleges and universities on a "trust me" or reputational basis; increased transparency of outcomes and accountability are demanded.
In contrast, most higher-education professional organizations view accountability as the responsibility of colleges and universities and their accrediting agencies. External comparisons are eschewed (with exceptions noted above); internal diagnostic information for the improvement of the organization and teaching and learning is sought. This is not to say colleges and universities do not recognize the challenges presented to them in the 21st century, as we saw in the open letter issued by the major higher-education organizations in the United States. They do, and they want to control accountability rather than be controlled by it.
These varying views of accountability lead back to first principles and questions. "What is accountability?" "What should campus leaders be held accountable for? Valued educational processes? Valued outcomes? Both?" "How should accountability be carried out?" "Who should carry it out?" "Who should get to report findings?" "What sanctions should be meted out if campuses fail to measure up?" "Should there be sanctions at all and, if not, what?" "What are states currently doing to hold their colleges and universities accountable?" "How do other nations hold their higher-education systems accountable?" "What seems to be a reasonable and effective approach to accountability for the United States going forward into the 21st century?"
A Vision of Higher-Education Assessment and Accountability in a New Era
The vision of assessment and accountability presented in this text is one of continuous improvement of teaching and learning by campuses evolving into learning organizations, with progress based on an iterative cycle of evidence, experimentation, action, and reflection. The vision, in part, is one of direct assessment of student learning on cognitive outcomes in the major and in general or liberal education (measured by the Collegiate Learning Assessment). However, the vision of learning outcomes goes beyond the cognitive to individual and social responsibility outcomes, including, for example, the development of one's identity, emotional competence, perspective taking (moral, civic, interpersonal, intercultural), and resilience.
Colleges and universities would be held accountable by regional agencies governed by boards composed of higher-education leaders, policy makers, and clients. These agencies would be accountable to a national agency of similar composition. Agencies would conduct academic audits and report findings publicly, in readily accessible form, to various interested audiences.
The audit would focus on the processes a campus has in place to ensure teaching and learning quality and improvement. To do this, the audit would rely on and evaluate the campus's assessment program. The campus assessment program would be expected to collect, analyze, and interpret data, and to feed findings back into campus structures whose function is to take action in the form of experiments aimed at testing ideas about how to improve teaching and learning. Over time, subsequent assessments would monitor progress made in the majors, in general or liberal education, and by individual students. In addition to providing data on student learning outcomes, the audit program would include other indicators of quality, for example, admission, retention, and graduation rates and consumer quality surveys.
The audit findings, not the learning assessment findings per se, would be made public. The report, based on data from the campus assessment program and a report by an external expert visiting panel, would include appraisals of how rigorous the institution's goals were, how rigorous the assessment of those goals was, how well the institution had embedded quality assurance mechanisms throughout the organization (including delving deeply into a sample of departments and their quality assurance processes), and how well the institution was progressing toward those goals. The report would also include a summary of the general strengths and weaknesses of the campus and its quality assurance mechanisms. In this way such published academic audits would "have teeth" and would inform both educators within the institution and policy makers and clients outside.