Methods Matter: Improving Causal Inference in Educational and Social Science Research


Overview

Educational policy-makers around the world constantly make decisions about how to use scarce resources to improve the education of children. Unfortunately, their decisions are rarely informed by evidence on the consequences of similar initiatives in other settings, nor are they typically accompanied by well-formulated plans for evaluating their causal impacts. As a result, knowledge about what works in different situations has been very slow to accumulate.

Over the last several decades, advances in research methodology, administrative record keeping, and statistical software have dramatically increased the potential for researchers to conduct compelling evaluations of the causal impacts of educational interventions, and the number of well-designed studies is growing. Written in clear, concise prose, Methods Matter: Improving Causal Inference in Educational and Social Science Research offers essential guidance for those who evaluate educational policies. Using numerous examples of high-quality studies that have evaluated the causal impacts of important educational interventions, the authors go beyond the simple presentation of new analytical methods to discuss the controversies surrounding each study, and provide heuristic explanations that are also broadly accessible. Murnane and Willett offer strong methodological insights on causal inference, while also examining the consequences of a wide variety of educational policies implemented in the U.S. and abroad. Representing a unique contribution to the literature surrounding educational research, this landmark text will be invaluable for students and researchers in education and public policy, as well as those interested in social science.


Editorial Reviews

From the Publisher
"Policy discussions today routinely demand that proposals be evidence-based — without really understanding that the reliability and validity of what passes as evidence varies widely. Murnane and Willett have done a remarkable job of helping both producers and consumers to understand what is good evidence and how it can be produced. Methods Matter explains lucidly how the causal impact of educational and social interventions can be estimated from quantitative data, using a panoply of innovative empirical approaches." —Eric A. Hanushek, Senior Fellow, Hoover Institution, Stanford University

"Methods Matter is about research designs and statistical analyses for drawing valid and reliable causal inferences from data about real-world problems. The book's most telling feature is the wide range of education research examples that it uses to illustrate each point made. By presenting powerful research methods in the context of important research questions the authors are able to draw readers quickly and deeply into the material covered. New and experienced researchers from many fields will learn a lot from reading Methods Matter and will enjoy doing so."—Howard S. Bloom, Chief Social Scientist, MDRC

"Richard J. Murnane and John B. Willett provide a broadly accessible account of causal inference in educational research. They consider basic principles- how to define causal effects, frame causal questions, and design experiments- while also gently introducing important topics that have previously been obscure to non-specialists: randomization by group, natural experiments, instrumental variables, regression discontinuity, and propensity scores. Using a wide range or examples, the authors teach their readers to identify and challenge key assumptions underlying claims about what works in education. This book will improve educational research by challenging researchers and policy-makers to think more rigorously about the evidence and assumptions underlying their work." — Stephen W. Raudenbush, Lewis Sebring Distinguished Service Professor, Department of Sociology, University of Chicago

"I strongly recommend Methods Matter to anyone who intends to conduct research on the causal impact of education programs and policies. Henceforth, a graduate course in education research methods that doesn't rely on it should be considered suspect. Methods Matter should also be essential reading for those who want to be critical consumers of advanced education research. Methods Matter very much, and so does this book. It is a very good book that signals a coming of age of the field."
—Grover Whitehurst, Director, Brown Center on Education Policy, Brookings Institution

"To be useful for development policy, educational research has to shed more light on how resources for education can produce more learning, more knowledge, more skills. In this book, Professors Richard Murnane and John Willett discuss a range of empirical methods for estimating causal relationships and review their applications in educational research. They translate complex statistical concepts into clear, accessible language and provide the kind of analytical guidance that a graduate student or young researcher might obtain only after years of experience with these methods. This volume is a very readable companion to any statistics textbook or statistical program on evaluation methods."
—Elizabeth M. King, Director, Education, The World Bank


Product Details

  • ISBN-13: 9780199753864
  • Publisher: Oxford University Press, USA
  • Publication date: 9/17/2010
  • Edition description: New Edition
  • Pages: 416
  • Sales rank: 443,179
  • Product dimensions: 6.40 (w) x 9.30 (h) x 1.10 (d) inches

Meet the Author

Richard J. Murnane, Juliana W. and William Foss Thompson Professor of Education and Society at Harvard University, is an economist who focuses his research on the relationships between education and the economy, teacher labor markets, the determinants of children's achievement, and strategies for making schools more effective.

John B. Willett, Charles William Eliot Professor of Education at Harvard University, is a quantitative methodologist who has devoted his career to improving the research design and data-analytic methods used in education and the social sciences, with a particular emphasis on the design of longitudinal research and the analysis of longitudinal data.


Table of Contents

1 The Challenge for Educational Research
1.1 The Long Quest
1.2 The Quest is World-Wide
1.3 What this Book is About
1.4 What to Read Next

2 The Importance of Theory
2.1 What is Theory?
2.2 Theory in Education
2.3 Voucher Theory
2.4 What Kind of Theories?
2.5 What to Read Next

3 Designing Research to Address Causal Questions
3.1 Conditions to Strive for in All Research
3.2 Making Causal Inferences
3.3 Past Approaches To Answering Causal Questions in Education
3.4 The Key Challenge of Causal Research
3.5 What to Read Next

4 Investigator-Designed Randomized Experiments
4.1 Conducting Randomized Experiments
4.1.1 An Example of a "Two-Group" Experiment
4.2 Analyzing Data from Randomized Experiments
4.2.1 The Better Your Research Design, the Simpler Your Data-Analysis
4.2.2 Bias and Precision in the Estimation of Experimental Effects
4.3 What to Read Next
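
To give a concrete flavor of the analysis Chapter 4 describes, here is a minimal sketch of a two-group randomized experiment analyzed by regressing the outcome on a treatment indicator. The data are simulated and the variable names are illustrative; they are not drawn from the book.

```python
# Minimal sketch: with random assignment, the coefficient on the treatment
# indicator in a simple OLS regression estimates the average treatment effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, size=n)                    # 0 = control, 1 = treatment
outcome = 50 + 5 * treated + rng.normal(0, 10, size=n)  # true effect = 5 points

X = sm.add_constant(treated)                            # intercept + treatment indicator
fit = sm.OLS(outcome, X).fit()
print(fit.params)                                       # slope is close to 5
print(fit.conf_int())                                   # 95% confidence intervals
```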

5 Challenges in Designing, Implementing, and Learning from Randomized Experiments
5.1 Critical Decisions in the Design of Experiments
5.1.1 Defining the Treatment Being Evaluated
5.1.2 Defining the Population from Which Participants Will Be Sampled
5.1.3 Deciding Which Outcomes to Measure
5.1.4 Deciding How Long To Track Participants
5.2 Threats to Validity of Randomized Experiments
5.2.1 Contamination of the Treatment-Control Contrast
5.2.2 Cross-Overs
5.2.3 Attrition from the Sample
5.2.4 Participation in an Experiment Itself Affects Participants' Behavior
5.3 Gaining Support for Conducting Randomized Experiments: Examples from India
5.3.1 Evaluating an Innovative Input Approach
5.3.2 Evaluating an Innovative Incentive Policy
5.4 What to Read Next

6 Statistical Power and Sample Size
6.1 Statistical Power
6.1.1 Reviewing the Process of Statistical Inference
6.1.2 Defining Statistical Power
6.2 Factors Affecting Statistical Power
6.2.1 The Strengths and Limitations of Parametric Tests
6.2.2 The Benefits of Covariates
6.2.3 The Reliability of the Outcome Measure Matters
6.2.4 The Choice between One-Tailed and Two-Tailed Tests
6.3 What to Read Next
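
As a rough illustration of the power calculations Chapter 6 discusses, the sketch below solves for the per-group sample size needed to detect a small standardized effect, and shows how a prognostic covariate shrinks that requirement. The effect size, alpha, and R-squared values are illustrative assumptions, not figures from the book.

```python
# Sample-size sketch for a two-group comparison, using statsmodels' power tools.
import numpy as np
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# Per-group n to detect a standardized effect of 0.20 with 80% power, alpha = .05.
n_plain = power.solve_power(effect_size=0.20, alpha=0.05, power=0.80,
                            alternative='two-sided')

# A covariate explaining half the outcome variance shrinks the residual SD by
# sqrt(1 - R^2), which inflates the detectable standardized effect.
r2 = 0.50
n_covariate = power.solve_power(effect_size=0.20 / np.sqrt(1 - r2), alpha=0.05,
                                power=0.80, alternative='two-sided')

print(round(n_plain), round(n_covariate))  # the covariate roughly halves the n
```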

7 Experimental Research When Participants Are Clustered within Intact Groups
7.1 Using the Random-Intercepts Multilevel Model to Estimate Effect Size When Intact Groups of Participants Were Randomized to Experimental Conditions
7.2 Statistical Power When Intact Groups of Participants Were Randomized to Experimental Conditions
7.2.1 Statistical Power of the Cluster-Randomized Design and Intraclass Correlation
7.3 Using Fixed-Effects Multilevel Models to Estimate Effect Size When Intact Groups of Participants Are Randomized to Experimental Conditions
7.3.1 Specifying a "Fixed-Effects" Multilevel Model
7.3.2 Choosing Between Random- and Fixed-Effects Specifications
7.4 What to Read Next
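
Chapter 7's random-intercepts model can be sketched with simulated data in which whole schools, not individual students, are randomized. The school-level random intercept absorbs the within-school clustering that would otherwise overstate precision. All names and numbers below are illustrative assumptions.

```python
# Cluster-randomized sketch: a random intercept per school models the clustering.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
treated = np.repeat(rng.integers(0, 2, size=n_schools), n_students)  # whole schools randomized
u = np.repeat(rng.normal(0, 4, size=n_schools), n_students)          # school-level effects
score = 50 + 3 * treated + u + rng.normal(0, 10, size=school.size)   # true effect = 3

df = pd.DataFrame({"score": score, "treated": treated, "school": school})
fit = smf.mixedlm("score ~ treated", df, groups=df["school"]).fit()
print(fit.summary())  # the 'treated' coefficient is the estimated effect size
```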

8 Using Natural Experiments To Provide "Arguably Exogenous" Treatment Variability
8.1 Natural- and Investigator-Designed Experiments: Similarities and Differences
8.2 Two Examples of Natural Experiments
8.2.1 The Vietnam Era Draft Lottery
8.2.2 The Impact of an Offer of Financial Aid for College
8.3 Sources of Natural Experiments
8.4 Choosing the Width of the Analytic Window
8.5 Threats to Validity in Natural Experiments with a Discontinuity Design
8.5.1 Accounting for the Relationship between the Forcing Variable and the Outcome in a Discontinuity Design
8.5.2 Actions by Participants Can Undermine Exogenous Assignment to Experimental Conditions in a Natural Experiment with a Discontinuity Design
8.6 What to Read Next

9 Estimating Causal Effects Using a Regression-Discontinuity Approach
9.1 Maimonides' Rule and the Impact of Class Size on Student Achievement
9.1.1 A Simple "First Difference" Analysis
9.1.2 A "Difference-in-Differences" Analysis
9.1.3 A Basic "Regression-Discontinuity" Analysis
9.1.4 Choosing an Appropriate "Window" or "Bandwidth"
9.2 Generalizing the Relationship between Outcome and Forcing Variable
9.2.1 Specification Checks Using Pseudo-Outcomes and Pseudo-Cutoffs
9.2.2 RD Designs and Statistical Power
9.3 Additional Threats to Validity in an RD Design
9.4 What to Read Next
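
The basic regression-discontinuity analysis of Chapter 9 can be sketched as a local regression of the outcome on a treatment indicator, the centered forcing variable, and their interaction, within a bandwidth around the cutoff. The data-generating numbers and the bandwidth below are illustrative assumptions.

```python
# Regression-discontinuity sketch: estimate the jump in the outcome at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
forcing = rng.uniform(-1, 1, size=n)           # forcing variable, centered at the cutoff
treated = (forcing >= 0).astype(float)         # treatment switches on at the cutoff
outcome = 2.0 * forcing + 1.5 * treated + rng.normal(0, 1, size=n)  # true jump = 1.5

h = 0.5                                        # bandwidth (the analytic "window")
win = np.abs(forcing) <= h
X = sm.add_constant(np.column_stack([treated[win], forcing[win],
                                     treated[win] * forcing[win]]))
print(sm.OLS(outcome[win], X).fit().params)    # coefficient on `treated` is ~1.5
```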

10 Introducing Instrumental Variables Estimation
10.1 Introducing Instrumental Variables Estimation
10.1.1 Investigating the Relationship Between an Outcome and a Potentially-Endogenous Question Predictor Using OLS Regression Analysis
10.1.2 Instrumental Variables Estimation
10.2 Two Critical Assumptions That Underpin Instrumental Variables Estimation
10.3 Alternative Ways of Obtaining the IV Estimate
10.3.1 Obtaining an IV Estimate by the Method of Two-Stage Least-Squares
10.3.2 Obtaining an IVE by Simultaneous Equations Estimation
10.4 Extensions of the Basic IVE Approach
10.4.1 Incorporating Exogenous Covariates into IV Estimation
10.4.2 Incorporating Multiple Instruments into the First-Stage Model
10.4.3 Examining the Impact of Interactions between the Endogenous Question Predictor and Exogenous Covariates in the Second-Stage Model
10.4.4 Choosing Appropriate Functional Forms for the Outcome/Predictor Relationships in the First- and Second-Stage Models
10.5 Finding and Defending Instruments
10.5.1 Proximity of Educational Institutions
10.5.2 Institutional Rules and Personal Characteristics
10.5.3 Deviations from Cohort Trends
10.5.4 The Search Continues
10.6 What to Read Next
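
Chapter 10's two-stage least-squares logic can be sketched by hand: regress the endogenous predictor on the instrument, then regress the outcome on the first-stage fitted values. The instrument, confounder, and coefficients below are simulated assumptions; note too that hand-rolled second-stage standard errors are wrong, which is why dedicated 2SLS routines exist.

```python
# Two-stage least-squares sketch with a simulated lottery-style instrument.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
z = rng.integers(0, 2, size=n)                         # instrument (e.g., a lottery offer)
ability = rng.normal(size=n)                           # unobserved confounder
schooling = 10 + 2 * z + ability + rng.normal(size=n)  # endogenous predictor
wage = 5 + 0.5 * schooling + 2 * ability + rng.normal(size=n)  # true effect = 0.5

# Stage 1: predict the endogenous predictor from the instrument.
stage1 = sm.OLS(schooling, sm.add_constant(z)).fit()

# Stage 2: regress the outcome on the stage-1 fitted values.
stage2 = sm.OLS(wage, sm.add_constant(stage1.fittedvalues)).fit()
naive = sm.OLS(wage, sm.add_constant(schooling)).fit()
print(naive.params[1], stage2.params[1])               # ~1.2 (biased) vs. ~0.5 (IV)
```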

11 Using IVE to Recover the Treatment Effect in a Quasi-Experiment
11.1 The Notion of a "Quasi-Experiment"
11.2 Using IVE to Estimate the Causal Impact of a Treatment in a Quasi-Experiment
11.3 Further Insight into the IVE (LATE) Estimate, in the Context of Quasi-Experimental Data
11.4 Using IVE to Resolve "Fuzziness" in a Regression Discontinuity Design
11.5 What to Read Next

12 Dealing with Bias in Treatment Effects Estimated from Non-Experimental Data
12.1 Reducing Observed Bias by the Method of Stratification
12.1.1 Stratifying on a Single Covariate
12.1.2 Stratifying on Covariates
12.2 Reducing Observed Bias by Direct Control for Covariates Using Regression Analysis
12.3 Reducing Observed Bias Using a "Propensity Score" Approach
12.3.1 Estimation of the Treatment Effect by Stratifying on Propensity Scores
12.3.2 Estimation of the Treatment Effect by Matching on Propensity Scores
12.3.3 Estimation of the Treatment Effect by Weighting by the Inverse of the Propensity Scores
12.4 A Return to the Substantive Question
12.5 What to Read Next
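
Of the three propensity-score strategies in Section 12.3, inverse-probability weighting is the quickest to sketch: fit a logit of treatment on observed covariates, then weight each case by the inverse probability of the treatment state it actually received. Everything below (the covariate, selection strength, true effect) is a simulated assumption.

```python
# Inverse-propensity-weighting sketch for non-experimental (selected) data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                                  # observed covariate
treated = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))   # selection depends on x
outcome = 1.0 * treated + 2.0 * x + rng.normal(size=n)  # true effect = 1.0

# Propensity scores from a logit of treatment on the covariate.
ps = sm.Logit(treated, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))

# Weight by the inverse probability of the treatment state actually received.
y1 = np.average(outcome, weights=treated / ps)
y0 = np.average(outcome, weights=(1 - treated) / (1 - ps))
print(y1 - y0)   # close to 1.0; the raw difference in means is biased upward
```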

13 Substantive Lessons from High-Quality Evaluations of Educational Interventions
13.1 Increasing School Enrollments
13.1.1 Reduce Commuting Time
13.1.2 Reduce Out-of-Pocket Educational Costs
13.1.3 Reduce Opportunity Costs
13.1.4 Does Increasing School Enrollment Necessarily Lead to Improved Long-Term Outcomes?
13.2 Improving School Quality
13.2.1 Provide More or Better Educational Inputs
13.2.1.1 Provide More Books
13.2.1.2 Teach Children in Smaller Classes
13.2.1.3 Recruit Skilled Teachers or Provide Training to Enhance Teachers' Effectiveness
13.2.2 Improve Incentives for Teachers
13.2.3 Improve Incentives for Students
13.2.4 Increase Families' Schooling Choices
13.3 Summing Up

14 Methodological Lessons from the Long Quest
14.1 Be Clear About Your Theory of Action
14.2 Learn about Culture, Rules, and Institutions in the Research Setting
14.3 Understand the Counterfactual
14.4 Worry about Selection Bias
14.5 Measure All Possible Important Outcomes
14.6 Be On the Lookout for Longer-Term Effects
14.7 Develop a Plan for Examining Impacts on Subgroups
14.8 Interpret Your Research Results Correctly
14.9 Pay Attention to Anomalous Results
14.10 Recognize That Good Research Always Raises New Questions
14.11 Final Words
