Multisite evaluation settings differ from the single settings common to research on evaluation use. In addition to the primary intended users, there is another important group of potential evaluation users in settings where government agencies or large national or international foundations fund multisite projects: project leaders and local evaluators. If each project site is expected to take part in or support the overall program evaluation, then these individuals frequently serve as links between their projects and the larger cross-project evaluation of the funded program.
The field has not, until now, addressed the topic of how being asked or required to participate in such evaluations affects these people who play a critical role in multisite evaluations. This issue does so in two ways.
The first six chapters present data and related analyses from research on four multisite evaluations, documenting the patterns of involvement in these evaluation projects and the extent to which different levels of involvement in program evaluations resulted in different patterns of evaluation use and influence. The remaining chapters offer reflections on the results of the cases or their implications, some by people who were part of the original research and some by those who were not. The goal is to encourage readers to think actively about ways to improve multisite evaluation practice.
This is the 129th volume of the Jossey-Bass quarterly report series New Directions for Evaluation, an official publication of the American Evaluation Association.
About the Authors
Jean A. King is a professor and director of graduate studies in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota.
Frances Lawrenz is the Wallace Professor of Teaching and Learning in the Department of Educational Psychology and the associate vice president for research at the University of Minnesota.
Table of Contents
EDITORS’ NOTES (Jean A. King, Frances Lawrenz).
1. The Upside of an Annual Survey in Light of Involvement and Use: Evaluating the Advanced Technological Education Program (Stacie A. Toal, Arlen R. Gullickson).
The first of four case descriptions highlights a large-scale evaluation directed by external program evaluators and the surprising effect of a required annual survey on project staff who completed it.
2. Compulsory Project-Level Involvement and the Use of Program-Level Evaluations: Evaluating the Local Systemic Change for Teacher Enhancement Program (Kelli Johnson, Iris R. Weiss).
The second case description, in which the program evaluation mandated project-level staff to participate in specific ways, details the relationship between project-level involvement in the core evaluation and the use of that evaluation by project leaders and evaluators.
3. Tensions and Trade-Offs in Voluntary Involvement: Evaluating the Collaboratives for Excellence in Teacher Preparation (Lija O. Greenseid, Frances Lawrenz).
The third case description examines the tensions and trade-offs that arose from attempting to balance voluntary involvement in the evaluation by project principal investigators and evaluators with the need to collect complete and comparable data across sites.
4. The Effect of Technical Assistance on Involvement and Use: The Case of a Research, Evaluation, and Technical Assistance Project (Denise Roseland, Boris B. Volkov, Catherine Callow-Heusser).
In contrast to the other case descriptions, the fourth documents the effects of direct technical assistance and professional development and their results in terms of involvement and use.
5. Documenting the Impact of Multisite Evaluations on the Science, Technology, Engineering, and Mathematics Field (Denise Roseland, Lija O. Greenseid, Boris B. Volkov, Frances Lawrenz).
With the four case evaluation projects used as examples, this chapter discusses the impact of specific evaluations on the broader field of science, technology, engineering, and mathematics education and evaluation.
6. The Role of Involvement and Use in Multisite Evaluations (Frances Lawrenz, Jean A. King, Ann Ooms).
This cross-case analysis of the four case studies identifies both unique details and common themes related to promoting the use and influence of multisite evaluations.
7. Reflecting on Multisite Evaluation Practice (Jean A. King, Patricia A. Ross, Catherine Callow-Heusser, Arlen R. Gullickson, Frances Lawrenz, Iris R. Weiss).
The four lead evaluators for the large-scale evaluations included as case descriptions discuss their experiences and what they have learned about multisite evaluation practice.
8. Culture and Influence in Multisite Evaluation (Karen E. Kirkhart).
This chapter explores the basic premise that evaluation influence must be understood and studied as a cultural phenomenon, especially in the complex environments that characterize multisite evaluation.
9. Reflection on Four Multisite Evaluation Case Studies (Paul R. Brandon).
What do the findings of the four evaluation case studies suggest to an evaluation scholar who was not part of the research team that created them? This chapter reviews the cases and summarizes their comparative findings.
10. Building a Community of Evaluation Practice Within a Multisite Program (Leslie K. Goodyear).
Using a programmatic example, this chapter articulates how the provision of evaluation technical assistance to a large, multisite program and its funded projects can contribute to evaluation use.
11. Toward Better Research On—and Thinking About—Evaluation Influence, Especially in Multisite Evaluations (Melvin M. Mark).
The final chapter provides a review of the concepts of evaluation use, influence, and influence pathways, then discusses approaches and challenges to studying evaluation influence and influence pathways, including the special challenges of multisite settings.