Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing

Paperback

$40.99 
Overview

Getting numbers is easy; getting numbers you can trust is hard. This practical guide by experimentation leaders at Google, LinkedIn, and Microsoft will teach you how to accelerate innovation using trustworthy online controlled experiments, or A/B tests. Based on practical experiences at companies that each run more than 20,000 controlled experiments a year, the authors share examples, pitfalls, and advice for students and industry professionals getting started with experiments, plus deeper dives into advanced topics for practitioners who want to improve the way they make data-driven decisions.

Learn how to:
  • Use the scientific method to evaluate hypotheses using controlled experiments
  • Define key metrics and ideally an Overall Evaluation Criterion
  • Test for trustworthiness of the results and alert experimenters to violated assumptions
  • Build a scalable platform that lowers the marginal cost of experiments close to zero
  • Avoid pitfalls like carryover effects and Twyman's law
  • Understand how statistical issues play out in practice
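To give a flavor of the statistical questions the book addresses, here is a minimal, illustrative sketch (in Python, not taken from the book) of a two-sample significance check comparing conversion rates between a control and a treatment group; the sample sizes and conversion counts below are hypothetical.

# Illustrative only: a minimal two-proportion z-test for an A/B experiment,
# using hypothetical conversion counts (not an example from the book).
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided normal approximation
    return z, p_value

# Hypothetical results: control converts 2,500 of 50,000 users, treatment 2,650 of 50,000.
z, p = two_proportion_z_test(2500, 50_000, 2650, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")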

Product Details

ISBN-13: 9781108724265
Publisher: Cambridge University Press
Publication date: 04/02/2020
Pages: 288
Sales rank: 381,234
Product dimensions (inches): 5.98 (w) x 8.90 (h) x 0.55 (d)

About the Author

Ron Kohavi is a Technical Fellow and Corporate Vice President of Analysis and Experimentation at Microsoft, and was previously Director of Data Mining and Personalization at Amazon. He received his Ph.D. in Computer Science from Stanford University. His papers have over 40,000 citations, and three of them are among the top 1,000 most-cited papers in Computer Science.

Diane Tang is a Google Fellow, with expertise in large-scale data analysis and infrastructure, online controlled experiments, and ads systems. She has an A.B. from Harvard and an M.S./Ph.D. from Stanford University, with patents and publications in mobile networking, information visualization, experiment methodology, data infrastructure, data mining, and large data.

Ya Xu heads Data Science and Experimentation at LinkedIn. She has published several papers on experimentation and is a frequent speaker at top-tier conferences and universities. She previously worked at Microsoft and received her Ph.D. in Statistics from Stanford University.

Table of Contents

Preface – how to read this book
Part I: 1. Introduction and motivation; 2. Running and analyzing experiments: an end-to-end example; 3. Twyman's law and experimentation trustworthiness; 4. Experimentation platform and culture
Part II: 5. Speed matters: an end-to-end case study; 6. Organizational metrics; 7. Metrics for experimentation and the Overall Evaluation Criterion (OEC); 8. Institutional memory and meta-analysis; 9. Ethics in controlled experiments
Part III: 10. Complementary techniques; 11. Observational causal studies
Part IV: 12. Client-side experiments; 13. Instrumentation; 14. Choosing a randomization unit; 15. Ramping experiment exposure: trading off speed, quality, and risk; 16. Scaling experiment analyses
Part V: 17. The statistics behind online controlled experiments; 18. Variance estimation and improved sensitivity: pitfalls and solutions; 19. The A/A test; 20. Triggering for improved sensitivity; 21. Guardrail metrics; 22. Leakage and interference between variants; 23. Measuring long-term treatment effects