Table of Contents
Figures and Tables ix
Preface xi
Acknowledgments xiii
The Author xv
One: Introduction 1
Learning Objectives 1
The Evaluation Framework 3
Summary 7
Key Terms 7
Discussion Questions 7
Two: Describing the Program 9
Learning Objectives 9
Motivations for Describing the Program 11
Common Mistakes Evaluators Make When Describing the Program 12
Conducting the Initial Informal Interviews 12
Pitfalls in Describing Programs 13
The Program Is Alive, and So Is Its Description 14
Program Theory 15
The Program Logic Model 20
Challenges of Programs with Multiple Sites 29
Program Implementation Model 30
Program Theory and Program Logic Model Examples 30
Summary 53
Key Terms 54
Discussion Questions 54
Three: Laying the Evaluation Groundwork 55
Learning Objectives 55
Evaluation Approaches 56
Framing Evaluation Questions 57
Insincere Reasons for Evaluation 60
Who Will Do the Evaluation? 60
External Evaluators 61
Internal Evaluators 62
Confidentiality, Ownership, and Evaluation Ethics 63
Building a Knowledge Base from Evaluations 64
High-Stakes Testing 65
The Evaluation Report 66
Summary 68
Key Terms 69
Discussion Questions 69
Four: Causation 71
Learning Objectives 71
Necessary and Sufficient 72
Types of Effects 81
Lagged Effects 81
Permanency of Effects 81
Functional Form of Impact 81
Summary 83
Key Terms 83
Discussion Questions 84
Five: The Prisms of Validity 85
Learning Objectives 85
Statistical Conclusion Validity 87
Small Sample Sizes 88
Measurement Error 90
Unclear Questions 91
Unreliable Treatment Implementation 91
Fishing 92
Internal Validity 92
Threat of History 93
Threat of Maturation 94
Selection 94
Mortality 95
Testing 96
Statistical Regression 97
Instrumentation 98
Diffusion of Treatments 99
Compensatory Equalization of Treatments 99
Compensatory Rivalry and Resentful Demoralization 100
Construct Validity 100
Mono-Operation Bias 102
Mono-Method Bias 102
External Validity 103
Summary 105
Key Terms 105
Discussion Questions 106
Six: Attributing Outcomes to the Program: Quasi-Experimental Design 107
Learning Objectives 107
Quasi-Experimental Notation 108
Frequently Used Designs That Do Not Show Causation 109
One-Group Posttest-Only 109
Posttest-Only with Nonequivalent Groups 110
Participants’ Pretest-Posttest 111
Designs That Generally Permit Causal Inferences 112
Untreated Control Group Design with Pretest and Posttest 112
Delayed Treatment Control Group 118
Different Samples Design 120
Nonequivalent Observations Drawn from One Group 121
Nonequivalent Groups Using Switched Measures 122
Cohort Designs 123
Time Series Designs 125
Archival Data 127
Summary 128
Key Terms 128
Discussion Questions 129
Seven: Collecting Data 131
Learning Objectives 131
Informal Interviews 132
Focus Groups 132
Survey Design 136
Sampling 140
Ways to Collect Survey Data 143
Anonymity and Confidentiality 144
Summary 146
Key Terms 147
Discussion Questions 147
Eight: Conclusions 149
Learning Objectives 149
Using Evaluation Tools to Develop Grant Proposals 150
Hiring an Evaluation Consultant 152
Summary 152
Key Terms 153
Discussion Questions 153
Appendix A: American Community Survey 155
Glossary 157
References 163
Index 165