Program Evaluation: Alternative Approaches and Practical Guidelines / Edition 4

by Jody Fitzpatrick

ISBN-10: 0205579353
ISBN-13: 9780205579358
Pub. Date: 09/27/2010

Current price is $170.67. Original price is $310.32. You save 45%.

Overview

The most comprehensive text on the market, Program Evaluation: Alternative Approaches and Practical Guidelines, Fourth Edition, provides an overview of a wide variety of approaches to evaluation and extensive practical guidelines for carrying out evaluation studies successfully. The text helps students and professionals who are new to evaluation understand how the field has evolved, what different approaches an evaluator can take in conducting evaluations, and how to plan and conduct an evaluation. It makes extensive use of checklists, examples, and a comprehensive case study. Throughout the book, students are introduced to current trends and controversial issues in evaluation and to ways of conducting evaluations in an ethical and professional manner.

Product Details

ISBN-13: 9780205579358
Publication date: 09/27/2010
Pages: 568
Product dimensions: 6.50(w) x 1.50(h) x 9.50(d)

About the Authors

Jody Fitzpatrick is an emeritus faculty member in public administration at the University of Colorado Denver, where she taught research methods and evaluation. She has conducted evaluations in many school and human service settings and has written extensively about successful evaluation practice. She served on the board and as president of the American Evaluation Association (AEA) and on the editorial boards of the American Journal of Evaluation and New Directions for Evaluation. She also chaired AEA's Teaching of Evaluation topical interest group and won a teaching award at the University of Colorado Denver. In her book Evaluation in Action: Interviews with Expert Evaluators, she interviewed expert evaluators about the decisions they face as they plan and conduct evaluations and the factors that influence their choices. She has evaluated the changing roles of counselors in middle schools and high schools and a program to help immigrant middle-school girls achieve and stay in school. Her international work includes research on evaluation in Spain and elsewhere in Europe, and she has spoken on evaluation issues to policymakers and evaluators in France, Spain, Denmark, Mexico, and Chile.

James Sanders is professor emeritus of educational studies and former associate director of The Evaluation Center at Western Michigan University, where he taught, published, consulted, and conducted evaluations. A graduate of Bucknell University and the University of Colorado, he served on the board and as president of the American Evaluation Association (AEA). He chaired the steering committee that created the Evaluation Network, a predecessor to AEA. His publications include books on school, student, and program evaluation. He has worked extensively with schools, foundations, and government and nonprofit agencies to develop their evaluation practices. As chair of the Joint Committee on Standards for Educational Evaluation, he led the development of the second edition of the Program Evaluation Standards. He was also involved in developing the concepts of applied performance testing for student assessments, cluster evaluation for program evaluations by foundations and government agencies, and mainstreaming evaluation for organizational development. His international evaluation work has concentrated on Canada, Europe, and Latin America. He received distinguished service awards from Western Michigan University, where he helped establish a Ph.D. program in evaluation, and from the Michigan Association for Evaluation.

Blaine Worthen is psychology professor emeritus at Utah State University, where he founded and directed the evaluation methodology Ph.D. program and the Western Institute for Research and Evaluation, which conducted more than 350 evaluations for local and national clients in the United States and Canada. He received his Ph.D. from The Ohio State University. He is a former editor of Evaluation Practice and the founding editor of the American Journal of Evaluation. He served on the American Evaluation Association (AEA) board of directors and received AEA's Alva and Gunnar Myrdal Evaluation Practice Award and the American Education Research Association's Best Evaluation Study Award. He taught university evaluation courses for three decades, managed federally mandated evaluations in 17 states, advised numerous government and private agencies, and has given more than 150 keynote addresses and evaluation workshops in the United States, England, Australia, Israel, Greece, Ecuador, and other countries. He has written extensively on evaluation, measurement, and assessment and is the author of 135 articles and six books. His Phi Delta Kappan article, "Critical Issues That Will Determine the Future of Alternative Assessment," was distributed to 500 distinguished invitees at the White House's Goals 2000 conference. He is recognized as a national and international leader in the field.

Lori Wingate is the executive director of The Evaluation Center at Western Michigan University (WMU), where she has worked since 1997. She has a Ph.D. in interdisciplinary evaluation from WMU and has worked in the field of program evaluation since 1993. From 2009 to 2019, she directed EvaluATE, the evaluation hub for the National Science Foundation's Advanced Technological Education program. From 2011 to 2019, she served as a subject matter expert in evaluation for the U.S. Centers for Disease Control and Prevention (CDC). She has led more than 75 webinars and workshops on evaluation in various contexts, including CDC University, the American Evaluation Association Summer Evaluation Institute, and EvaluATE. She leads the Evaluation Checklist Project at WMU and has developed numerous resource materials to support evaluation practice, including checklists, templates, and guides. She has written book chapters on evaluating humanitarian response to emergencies, evaluation checklists, and metaevaluation. She was the book review section editor for the American Journal of Evaluation from 2005 to 2011 and has led a range of evaluation projects in the areas of STEM education, public health, and higher education.

Table of Contents

Preface

Part 1: Introduction to Evaluation

1. Evaluation's Basic Purpose, Uses, and Conceptual Distinctions
  A Brief Definition of Evaluation
  Informal versus Formal Evaluation
  Distinguishing between Evaluation's Purposes and Evaluators' Roles and Activities
  Some Basic Types of Evaluation
  Evaluation's Importance--and Its Limitations

2. Origins and Current Trends in Modern Program Evaluation
  The History and Influence of Evaluation in Society
  Recent Trends Influencing Program Evaluation

Part 2: Alternative Approaches to Program Evaluation

3. Alternative Views of Evaluation
  Diverse Conceptions of Program Evaluation
  Origins of Alternative Views of Evaluation
  Themes among the Variations
  A Classification Schema for Evaluation Approaches

4. Objectives-Oriented Evaluation Approaches
  Developers of the Objectives-Oriented Evaluation Approach and Their Contributions
  How the Objectives-Oriented Evaluation Approach Has Been Used
  Strengths and Limitations of the Objectives-Oriented Evaluation Approach

5. Management-Oriented Evaluation Approaches
  Developers of the Management-Oriented Evaluation Approach and Their Contributions
  How the Management-Oriented Evaluation Approach Has Been Used
  Strengths and Limitations of the Management-Oriented Evaluation Approach

6. Consumer-Oriented Evaluation Approaches
  Developers of the Consumer-Oriented Evaluation Approach and Their Contributions
  How the Consumer-Oriented Evaluation Approach Has Been Used
  Strengths and Limitations of the Consumer-Oriented Evaluation Approach

7. Expertise-Oriented Evaluation Approaches
  Developers of the Expertise-Oriented Evaluation Approach and Their Contributions
  How the Expertise-Oriented Evaluation Approach Has Been Used
  Strengths and Limitations of the Expertise-Oriented Evaluation Approach

8. Participant-Oriented Evaluation Approaches
  Evolution of Participant-Oriented Evaluation Approaches
  Developers of the Participant-Oriented Evaluation Approach and Their Contributions
  How Participant-Oriented Evaluation Approaches Have Been Used
  Strengths and Limitations of Participant-Oriented Evaluation Approaches

9. Alternative Evaluation Approaches: A Summary and Comparative Analysis
  Cautions about the Alternative Evaluation Approaches
  Contributions of the Alternative Evaluation Approaches
  Comparative Analysis of Characteristics of Alternative Evaluation Approaches
  Eclectic Uses of the Alternative Evaluation Approaches
  Drawing Practical Implications from the Alternative Evaluation Approaches

Part 3: Practical Guidelines for Planning Evaluations

Introduction of Case Study

10. Clarifying the Evaluation Request and Responsibilities
  Understanding the Reasons for Initiating the Evaluation
  Conditions under which Evaluation Studies Are Inappropriate
  Determining When an Evaluation Is Appropriate: Evaluability Assessment
  Using an Internal or External Evaluator
  Hiring an Evaluator
  How Different Evaluation Approaches Clarify the Evaluation Request and Responsibilities

11. Setting Boundaries and Analyzing the Evaluation Context
  Identifying Intended Audiences for an Evaluation
  Describing What Is to Be Evaluated: Setting the Boundaries
  Analyzing the Resources and Capabilities That Can Be Committed to the Evaluation
  Analyzing the Political Context for the Evaluation
  Variations Caused by the Evaluation Approach Used
  Determining Whether to Proceed with the Evaluation

12. Identifying and Selecting the Evaluation Questions and Criteria
  Identifying Appropriate Sources of Questions and Criteria: The Divergent Phase
  Selecting the Questions, Criteria, and Issues to Be Addressed: The Convergent Phase
  Remaining Flexible during the Evaluation: Allowing New Questions, Criteria, and Standards to Emerge

13. Planning How to Conduct the Evaluation
  Identifying Design and Data Collection Methods
  Specifying How the Evaluation Will Be Conducted: The Management Plan
  Establishing Evaluation Agreements and Contracts

Part 4: Practical Guidelines for Conducting and Using Evaluations

14. Collecting Evaluation Information: Design, Sampling, and Cost Choices
  Using Mixed Methods
  Designs for Collecting Causal and Descriptive Information
  Sampling
  Cost Analysis

15. Collecting Evaluation Information: Data Sources and Methods, Analysis, and Interpretation
  Common Sources and Methods for Collecting Information
  Planning and Organizing the Collection of Information
  Analysis of Data and Interpretation of Findings

16. Reporting and Using Evaluation Information
  Purposes of Evaluation Reports
  Important Factors in Planning Evaluation Reports
  Key Components of a Written Report
  Suggestions for Presenting Information in Written Reports
  Alternative Methods for Reporting: The Adversary Approach
  Human and Humane Considerations in Reporting Evaluation Findings
  Suggestions for Effective Oral Reporting
  A Checklist for Good Evaluation Reports
  How Evaluation Information Is Used

17. Dealing with Political, Ethical, and Interpersonal Aspects of Evaluation
  Establishing and Maintaining Good Communications among Evaluators and Stakeholders
  Understanding Potential Bias Resulting from the Evaluator's Personal Values and Interpersonal, Financial, and Organizational Relationships with Others
  Maintaining Ethical Standards: Considerations, Issues, and Responsibilities for Evaluators and Clients
  Political Pressures and Problems in Evaluation

18. Evaluating Evaluations
  The Concept and Evolution of Metaevaluation
  The Joint Committee's Standards for Program Evaluation
  Summary of the Program Evaluation Standards
  AEA Guiding Principles for Evaluators
  The Role of Metaevaluator
  Some General Guidelines for Conducting Metaevaluations
  A Need for More Metaevaluation

Part 5: Emerging and Future Settings for Program Evaluation

19. Conducting Multiple-Site Evaluation Studies
  Purposes and Characteristics of Multiple-Site Evaluations
  Multisite Evaluation (MSE)
  On-Site Evaluation at Multiple Sites
  Cluster Evaluation
  Other Approaches to Multiple-Site Evaluation

20. Conducting Evaluation of Organizations' Renewal and Training in Corporate and Nonprofit Settings
  Evaluation in the Nonprofit Sector
  Evaluating Corporate Training Programs
  Personnel Evaluation
  Other Methods of Organizational Assessment

21. The Future of Evaluation
  The Future of Evaluation
  Predictions concerning the Profession of Evaluation
  Predictions concerning the Practice of Evaluation
  A Vision for Evaluation
  Conclusion
  Suggested Readings

Appendix: Evaluation-Related Web Sites
References
Author Index
Subject Index