Evaluation Essentials: Methods For Conducting Sound Research / Edition 1

by Beth Osborne Daponte
ISBN-10: 0787984396
ISBN-13: 9780787984397
Pub. Date: 07/15/2008
Publisher: Wiley

Paperback

$86.75

Overview

Evaluation Essentials is an indispensable introduction to program evaluation. Program examples drawn from a variety of sectors, including public policy, public health, nonprofit management, social work, arts management, education, international assistance, and labor, illustrate the book's step-by-step approach to the process and methods of evaluation. Ideal for students as well as new evaluators, Evaluation Essentials provides a comprehensive foundation in the field's core concepts, theories, and methods.

Product Details

ISBN-13: 9780787984397
Publisher: Wiley
Publication date: 07/15/2008
Series: Research Methods for the Social Sciences, #23
Pages: 192
Product dimensions: 7.10 in. (w) x 9.25 in. (h) x 0.41 in. (d)

About the Author

Beth Osborne Daponte, Ph.D., is a senior research scholar at the Institution for Social and Policy Studies and lecturer in the School of Management at Yale University. Currently, she is also working with a large community foundation, helping it address its evaluation challenges at both the organizational and programmatic levels.

Table of Contents

Figures and Tables ix

Preface xi

Acknowledgments xiii

The Author xv

One: Introduction 1

Learning Objectives 1

The Evaluation Framework 3

Summary 7

Key Terms 7

Discussion Questions 7

Two: Describing the Program 9

Learning Objectives 9

Motivations for Describing the Program 11

Common Mistakes Evaluators Make When Describing the Program 12

Conducting the Initial Informal Interviews 12

Pitfalls in Describing Programs 13

The Program Is Alive, and So Is Its Description 14

Program Theory 15

The Program Logic Model 20

Challenges of Programs with Multiple Sites 29

Program Implementation Model 30

Program Theory and Program Logic Model Examples 30

Summary 53

Key Terms 54

Discussion Questions 54

Three: Laying the Evaluation Groundwork 55

Learning Objectives 55

Evaluation Approaches 56

Framing Evaluation Questions 57

Insincere Reasons for Evaluation 60

Who Will Do the Evaluation? 60

External Evaluators 61

Internal Evaluators 62

Confidentiality and Ownership of Evaluation Ethics 63

Building a Knowledge Base from Evaluations 64

High-Stakes Testing 65

The Evaluation Report 66

Summary 68

Key Terms 69

Discussion Questions 69

Four: Causation 71

Learning Objectives 71

Necessary and Sufficient 72

Types of Effects 81

Lagged Effects 81

Permanency of Effects 81

Functional Form of Impact 81

Summary 83

Key Terms 83

Discussion Questions 84

Five: The Prisms of Validity 85

Learning Objectives 85

Statistical Conclusion Validity 87

Small Sample Sizes 88

Measurement Error 90

Unclear Questions 91

Unreliable Treatment Implementation 91

Fishing 92

Internal Validity 92

Threat of History 93

Threat of Maturation 94

Selection 94

Mortality 95

Testing 96

Statistical Regression 97

Instrumentation 98

Diffusion of Treatments 99

Compensatory Equalization of Treatments 99

Compensatory Rivalry and Resentful Demoralization 100

Construct Validity 100

Mono-Operation Bias 102

Mono-Method Bias 102

External Validity 103

Summary 105

Key Terms 105

Discussion Questions 106

Six: Attributing Outcomes to the Program: Quasi-Experimental Design 107

Learning Objectives 107

Quasi-Experimental Notation 108

Frequently Used Designs That Do Not Show Causation 109

One-Group Posttest-Only 109

Posttest-Only with Nonequivalent Groups 110

Participants’ Pretest-Posttest 111

Designs That Generally Permit Causal Inferences 112

Untreated Control Group Design with Pretest and Posttest 112

Delayed Treatment Control Group 118

Different Samples Design 120

Nonequivalent Observations Drawn from One Group 121

Nonequivalent Groups Using Switched Measures 122

Cohort Designs 123

Time Series Designs 125

Archival Data 127

Summary 128

Key Terms 128

Discussion Questions 129

Seven: Collecting Data 131

Learning Objectives 131

Informal Interviews 132

Focus Groups 132

Survey Design 136

Sampling 140

Ways to Collect Survey Data 143

Anonymity and Confidentiality 144

Summary 146

Key Terms 147

Discussion Questions 147

Eight: Conclusions 149

Learning Objectives 149

Using Evaluation Tools to Develop Grant Proposals 150

Hiring an Evaluation Consultant 152

Summary 152

Key Terms 153

Discussion Questions 153

Appendix A: American Community Survey 155

Glossary 157

References 163

Index 165
