Quantifying the User Experience: Practical Statistics for User Research

Paperback (Print)
Buy New from BN.com: $41.17 (Save 29%)
Used and New from Other Sellers: from $35.20
Usually ships in 1-2 business days
Other sellers (Paperback)
  • All (12) from $35.20
  • New (11) from $35.20
  • Used (1) from $54.79

Overview

You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you may hesitate to analyze the data statistically: it can be hard to know which statistical tests to use, and harder still to defend the small sample sizes typical of usability studies.

This book is a practical guide to using statistics to solve the common quantitative problems that arise in user research. It addresses the questions you face every day, such as: Is the current product more usable than our competition? Can we be sure at least 70% of users can complete the task on the first attempt? How long will it take users to purchase products on the website? The book shows you which test to use and provides a foundation in both the statistical theory and best practices for applying it. The authors draw on decades of statistical literature from human factors, industrial engineering, and psychology, as well as their own published research, to provide the best solutions. They offer concrete solutions (Excel formulas, links to their own web calculators) along with an engaging discussion of why the tests work and how to communicate the results effectively.

  • Provides practical guidance on solving usability testing problems with statistics for any project, including those using Six Sigma practices
  • Shows practitioners which test to use, why it works, and best practices in application, along with easy-to-use Excel formulas and web calculators for analyzing data
  • Recommends ways for practitioners to communicate results to stakeholders in plain English

Editorial Reviews

From the Publisher
"Quantifying the User Experience will make a terrific textbook for any series of UX research courses…I highly recommend this book to anyone who wants to integrate quantitative data into their UX practice."—Technical Communication, May 2013

"…as a whole, it provides a pragmatic approach to quantifying UX, without oversimplifying or claiming too much. It delivers what it promises. This book is valuable for both practitioners and students, in virtually any discipline. It can help psychologists transfer their statistical knowledge to UX practice, practitioners quickly assess their envisioned design and analysis, engineers demystify UX, and students appreciate UX’s merits."—ComputingReviews.com, March 19, 2013

"The most unique contributions of this book are the logic and practicality used to describe the appropriate application of those measures…Sauro and Lewis strike a perfect balance between the complexity of statistical theory and the simplicity of applying statistics practically. Whether you wish to delve deeper into the enduring controversies in statistics, or simply wish to understand the difference between a t-test and Chi-square, you will find your answer in this book. Quantifying the User Experience is an invaluable resource for those who are conducting user research in industry."—User Experience, Vol. 13, Issue 1, 1st Quarter

"Written in a conversational style for those who measure behavior and attitudes of people as they interact with technology interfaces, this guide walks readers through common questions and problems encountered when conducting, analyzing, and reporting on user research projects using statistics, such as problems related to estimates and confidence intervals, sample sizes, and standardized usability questionnaires. For readers with varied backgrounds in statistics, the book includes discussion of concepts as necessary and gives examples from real user research studies. The book begins with a background chapter overviewing common ways to quantify user research and a review of fundamental statistical concepts. The material provides enough detail in its formulas and examples to let readers do all computations in Excel, and a website offers an Excel calculator for purchase created by the authors, which performs all the computations covered in the book. An appendix offers a crash course on fundamental statistical concepts."—Reference and Research Book News, August 2012, pages 186-187

Product Details

  • ISBN-13: 9780123849687
  • Publisher: Elsevier Science
  • Publication date: 3/30/2012
  • Pages: 312
  • Sales rank: 386,430
  • Product dimensions: 7.48 (w) x 9.02 (h) x 0.74 (d) inches

Meet the Author

Jeff Sauro is a Six Sigma-trained statistical analyst and founding principal of Measuring Usability LLC. For fifteen years he has conducted usability and statistical analysis for clients such as PayPal, Walmart, Autodesk, and Kelley Blue Book, and as an employee of companies such as Oracle, Intuit, and General Electric.
Jeff has published over fifteen peer-reviewed research articles and is on the editorial board of the Journal of Usability Studies. He is a regular presenter and instructor at the Computer-Human Interaction (CHI) and Usability Professionals Association (UPA) conferences.
Jeff received his Masters in Learning, Design and Technology from Stanford University with a concentration in statistical concepts. Prior to Stanford, he received his B.S. in Information Management & Technology and B.S. in Television, Radio and Film from Syracuse University. He lives with his wife and three children in Denver, CO.

Dr. James R. (Jim) Lewis is a senior human factors engineer (at IBM since 1981) with a current focus on the design and evaluation of speech applications and is the author of Practical Speech User Interface Design. He is a Certified Human Factors Professional with a Ph.D. in Experimental Psychology (Psycholinguistics), an M.A. in Engineering Psychology, and an M.M. in Music Theory and Composition. Jim is an internationally recognized expert in usability testing and measurement, contributing (by invitation) the chapter on usability testing for the 3rd and 4th editions of the Handbook of Human Factors and Ergonomics and presenting tutorials on usability testing and metrics at various professional conferences.
Jim is an IBM Master Inventor with 77 patents issued to date by the US Patent Office. He currently serves on the editorial boards of the International Journal of Human-Computer Interaction and the Journal of Usability Studies, and is on the scientific advisory board of the Center for Research and Education on Aging and Technology Enhancement (CREATE). He is a member of the Usability Professionals Association (UPA), the Human Factors and Ergonomics Society (HFES), the Association for Psychological Science (APS) and the American Psychological Association (APA), and is a 5th degree black belt and certified instructor with the American Taekwondo Association (ATA).


Read an Excerpt

Quantifying the User Experience

Practical Statistics for User Research
By Jeff Sauro James R. Lewis

Morgan Kaufmann

Copyright © 2012 Jeff Sauro and James R. Lewis
All rights reserved.

ISBN: 978-0-12-384969-4


Chapter One

Introduction and How to Use This Book

INTRODUCTION

The last thing many designers and researchers in the field of user experience think of is statistics. In fact, we know many practitioners who find the field appealing because it largely avoids those impersonal numbers. The thinking goes that if usability and design are qualitative activities, it's safe to skip the formulas and numbers.

Although design and several usability activities are certainly qualitative, the impact of good and bad designs can be easily quantified in conversions, completion rates, completion times, perceived satisfaction, recommendations, and sales. Increasingly, usability practitioners and user researchers are expected to quantify the benefits of their efforts. If they don't, someone else will—unfortunately that someone else might not use the right metrics or methods.

THE ORGANIZATION OF THIS BOOK

This book is intended for those who measure the behavior and attitudes of people as they interact with interfaces. This book is not about abstract mathematical theories for which you may someday find a partial use. Instead, this book is about working backwards from the most common questions and problems you'll encounter as you conduct, analyze, and report on user research projects. In general, these activities fall into three areas:

1. Summarizing data and computing margins of error (Chapter 3).

2. Determining if there is a statistically significant difference, either in comparison to a benchmark (Chapter 4) or between groups (Chapter 5).

3. Finding the appropriate sample size for a study (Chapters 6 and 7).

We also provide:

• Background chapters with an overview of common ways to quantify user research (Chapter 2) and a quick introduction/review of many fundamental statistical concepts (Appendix).

• A comprehensive discussion of standardized usability questionnaires (Chapter 8).

• A discussion of enduring statistical controversies of which user researchers should be aware and able to articulate in defense of their analyses (Chapter 9).

• A wrap-up chapter with pointers to more information on statistics for user research (Chapter 10).

Each chapter ends with a list of key points and references. Most chapters also include a set of problems and answers to those problems so you can check your understanding of the content.

HOW TO USE THIS BOOK

Although a significant proportion of user research practitioners hold advanced degrees (about 10% have PhDs; UPA, 2011), for most people in the social sciences, statistics is the only quantitative course they have to take. For many, statistics is a subject they know they should understand, but it often brings back bad memories of high school math, poor teachers, and an abstract and difficult topic.

While we'd like to take all the pain out of learning and using statistics, there are still formulas, math, and some abstract concepts that we just can't avoid. Some people want to see how the statistics work, and for them we provide the math. If you're not terribly interested in the computational mechanics, then you can skip over the formulas and focus more on how to apply the procedures.

Readers who are familiar with many statistical procedures and formulas may find that some of the formulas we use differ from what you learned in your college statistics courses. Part of this is from recent advances in statistics (especially for dealing with binary data). Another part is due to our selecting the best procedures for practical user research, focusing on procedures that work well for the types of data and sample sizes you'll likely encounter.

Based on teaching many courses at industry conferences and at companies, we know the statistics background of the readers of this book will vary substantially. Some of you may have never taken a statistics course whereas others probably took several in graduate school. As much as possible, we've incorporated relevant discussions around the concepts as they appear in each chapter with plenty of examples using actual data from real user research studies.

In our experience, one of the hardest things to remember in applying statistics is what statistical test to perform when. To help with this problem, we've provided decision maps (see Figures 1.1 to 1.4) to help you get to the right statistical test and the sections of the book that discuss it.

What Test Should I Use?

The first decision point comes from the type of data you have. See the Appendix for a discussion of the distinction between discrete and continuous data. In general, for deciding which test to use, you need to know if your data are discrete-binary (e.g., pass/fail data coded as 1's and 0's) or more continuous (e.g., task-time or rating-scale data).

The next major decision is whether you're comparing data or just getting an estimate of precision. To get an estimate of precision you compute a confidence interval around your sample metrics (e.g., what is the margin of error around a completion rate of 70%; see Chapter 3). By comparing data we mean comparing data from two or more groups (e.g., task completion times for Products A and B; see Chapter 5) or comparing your data to a benchmark (e.g., is the completion rate for Product A significantly above 70%; see Chapter 4).
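The book's computations are done in Excel and via web calculators; purely as an unofficial illustration (not the authors' code), here is a minimal Python sketch of the adjusted-Wald interval mentioned above, which adds roughly two successes and two failures before computing the usual Wald interval:

```python
import math

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Adjusted-Wald confidence interval for a completion rate.

    Adds z^2/2 successes and z^2/2 failures (about 2 and 2 at 95%
    confidence) before computing a standard Wald interval, which
    improves coverage for the small samples typical of usability tests.
    """
    # Two-sided critical z values; a small hard-coded table keeps this
    # sketch dependency-free.
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 7 of 10 users completed the task: an observed 70% completion rate
low, high = adjusted_wald_ci(7, 10)
print(f"95% CI: {low:.2f} to {high:.2f}")
```

With only 10 users, the observed 70% completion rate carries a wide interval (roughly 0.39 to 0.90), which is exactly the kind of margin of error Chapter 3 teaches you to report.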

If you're comparing data, the next decision is whether the groups of data come from the same or different users. Continuing on that path, the final decision depends on whether there are two groups to compare or more than two groups.

To find the appropriate section in each chapter for the methods depicted in Figures 1.1 and 1.2, consult Tables 1.1 and 1.2. Note that methods discussed in Chapter 10 are outside the scope of this book, and receive just a brief description in their sections.

For example, let's say you want to know which statistical test to use if you are comparing completion rates on an older version of a product and a new version where a different set of people participated in each test.

1. Because completion rates are discrete-binary data (1 = pass and 0 = fail), we should use the decision map in Figure 1.2.

2. Start at the first box, "Comparing Data?," and select "Y" because we are comparing a data set from an older product with a data set from a new product.

3. This takes us to the "Different Users in Each Group" box—we have different users in each group so we select "Y."

4. Now we're at the "3 or More Groups" box—we have only two groups of users (before and after) so we select "N."

5. We stop at the "N - 1 Two-Proportion Test and Fisher Exact Test" (Chapter 5).
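As an illustrative sketch of step 5 (with hypothetical data, not an example from the book), the N-1 two-proportion test can be computed in Python like this; the only difference from the classic two-proportion z-test is the sqrt((N-1)/N) scaling:

```python
import math

def n_minus_1_two_proportion(x1, n1, x2, n2):
    """Two-sided N-1 two-proportion test: returns (z, p-value).

    Identical to the classic pooled two-proportion z-test except the
    statistic is multiplied by sqrt((N-1)/N), which improves accuracy
    for the small sample sizes common in usability studies.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pooled = (x1 + x2) / (n1 + n2)
    n_total = n1 + n2
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) * math.sqrt((n_total - 1) / n_total) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 11 of 12 completed on the new version, 5 of 9 on the old
z, p = n_minus_1_two_proportion(11, 12, 5, 9)
print(f"z = {z:.2f}, p = {p:.3f}")
```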

What Sample Size Do I Need?

Often the first collision a user researcher has with statistics is in planning sample sizes. Although there are many "rules of thumb" on how many users you should test or how many customer responses you need to achieve your goals, there really are precise ways of finding the answer. The first step is to identify the type of test for which you're collecting data. In general, there are three ways of determining your sample size:

1. Estimating a parameter with a specified precision (e.g., if your goal is to estimate completion rates with a margin of error of no more than 5%, or completion times with a margin of error of no more than 15 seconds).

2. Comparing two or more groups or comparing one group to a benchmark.

3. Problem discovery, specifically the number of users you need in a usability test to find a specified percentage of usability problems with a specified probability of occurrence.
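To make option 1 concrete, here is a Python sketch using the textbook normal-approximation formula for estimating a proportion within a given margin of error; this is a simplification for illustration, and the book's own procedures in Chapter 6 include refinements beyond it:

```python
import math

def sample_size_for_proportion(margin, p=0.5, confidence=0.95):
    """Approximate sample size to estimate a proportion within +/- margin.

    Standard normal approximation n = z^2 * p * (1 - p) / margin^2,
    with p = 0.5 as the most conservative planning value (it maximizes
    the variance p * (1 - p)).
    """
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

# Margin of error of no more than 5 percentage points at 95% confidence
print(sample_size_for_proportion(0.05))  # → 385
```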

To find the appropriate section in each chapter for the methods depicted in Figures 1.3 and 1.4, consult Table 1.3.

For example, let's say you want to compute the appropriate sample size if the same users will rate the usability of two products using a standardized questionnaire that provides a mean score.

1. Because the goal is to compare data, start with the sample size decision map in Figure 1.3.

2. At the "Comparing Groups?" box, select "Y" because there will be two groups of data, one for each product.

3. At the "Different Users in Each Group?" box, select "N" because each group will have the same users.

4. Because rating-scale data are not binary, select "N" at the "Binary Data?" box.

5. We stop at the "Paired Means" procedure (Chapter 6).
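As a rough companion to this walkthrough, the following Python sketch uses the standard normal-approximation formula for a paired-means sample size. It is a simplification with made-up planning numbers (Chapter 6's actual procedure iterates with t critical values), so treat the result as a starting point:

```python
import math

def paired_means_sample_size(diff, sd_diff, alpha=0.05, power=0.80):
    """Rough sample size for a paired comparison of two means.

    Normal-approximation formula n = ((z_a + z_b) * sd / d)^2, where d
    is the smallest mean difference worth detecting and sd is the
    standard deviation of the paired differences. A proper plan would
    iterate with t critical values, so this is a lower-bound estimate.
    """
    z_alpha = {0.05: 1.96, 0.01: 2.576}[alpha]  # two-sided
    z_beta = {0.80: 0.842, 0.90: 1.282}[power]
    return math.ceil(((z_alpha + z_beta) * sd_diff / diff) ** 2)

# Detect a 5-point questionnaire-score difference when the paired
# differences have a standard deviation of about 10 (invented values)
print(paired_means_sample_size(diff=5, sd_diff=10))
```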

You Don't Have to Do the Computations by Hand

We've provided sufficient detail in the formulas and examples that you should be able to do all computations in Microsoft Excel. If you have an existing statistical package like SPSS, Minitab, or SAS, you may find some of the results will differ (e.g., confidence intervals and sample size computations) or they don't include some of the statistical tests we recommend, so be sure to check the notes associated with the procedures.

We've created an Excel calculator that performs all the computations covered in this book. It includes both standard statistical output (p-values and confidence intervals) and some more user-friendly output that, for example, reminds you how to interpret that ubiquitous p-value and that you can paste right into reports. It is available for purchase online at www.measuringusability.com/products/expandedStats. For detailed information on how to use the Excel calculator (or a custom set of functions written in the R statistical programming language) to solve the over 100 quantitative examples and exercises that appear in this book, see Lewis and Sauro (2012).

KEY POINTS FROM THE CHAPTER

• The primary purpose of this book is to provide a statistical resource for those who measure the behavior and attitudes of people as they interact with interfaces.

• Our focus is on methods applicable to practical user research, based on our experience, investigations, and reviews of the latest statistical literature.

• As an aid to the persistent problem of remembering what method to use under what circumstances, this chapter contains four decision maps to guide researchers to the appropriate method and its chapter in this book.

CHAPTER REVIEW QUESTIONS

1. Suppose you need to analyze a sample of task-time data against a specified benchmark. For example, you want to know if the average task time is less than two minutes. What procedure should you use?

2. Suppose you have some conversion-rate data and you just want to understand how precise the estimate is. For example, in examining the server log data you see 10,000 page views and 55 clicks on a registration button. What procedure should you use?

3. Suppose you're planning to conduct a study in which the primary goal is to compare task completion times for two products, with two independent groups of participants providing the times. Which sample size estimation method should you use?

4. Suppose you're planning to run a formative usability study—one where you're going to watch people use the product you're developing and see what problems they encounter. Which sample size estimation method should you use?

Answers

1. Task-time data are continuous (not binary-discrete), so start with the decision map in Figure 1.1. Because you're testing against a benchmark rather than comparing groups of data, follow the "N" path from "Comparing Data?" At "Testing Against a Benchmark?," select the "Y" path. Finally, at "Task Time?," take the "Y" path, which leads you to "1-Sample t (Log)." As shown in Table 1.1, you'll find that method discussed in Chapter 4 in the "Comparing a Task Time to a Benchmark" section on p. 54.
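To illustrate the idea behind the 1-Sample t (Log) procedure (with invented task times, not data from the book), the sketch below log-transforms the times before testing against the benchmark, since raw task times tend to be positively skewed:

```python
import math

def log_t_vs_benchmark(times, benchmark):
    """One-sample t statistic for task times against a benchmark,
    computed on log-transformed times. Returns (t, degrees of freedom);
    a negative t supports mean times below the benchmark. Looking up
    the p-value from a t table is left out of this sketch.
    """
    logs = [math.log(t) for t in times]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)  # sample variance
    t = (mean - math.log(benchmark)) / math.sqrt(var / n)
    return t, n - 1

# Hypothetical task times in seconds, tested against a 120-second benchmark
t_stat, df = log_t_vs_benchmark([65, 80, 95, 100, 110, 130, 155, 170], 120)
print(f"t = {t_stat:.2f} on {df} df")
```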

2. Conversion-rate data are binary-discrete, so start with the decision map in Figure 1.2. You're just estimating the rate rather than comparing a set of rates, so at "Comparing Data?," take the "N" path. At "Testing Against a Benchmark?," also take the "N" path. This leads you to "Adjusted Wald Confidence Interval," which, according to Table 1.2, is discussed in Chapter 3 in the "Adjusted-Wald Interval: Add Two Successes and Two Failures" section on p. 22.

3. Because you're planning a comparison of two independent sets of task times, start with the decision map in Figure 1.3. At "Comparing Groups?," select the "Y" path. At "Different Users in Each Group?," select the "Y" path. At "Binary Data?," select the "N" path. This takes you to "2 Means," which, according to Table 1.3, is discussed in Chapter 6 in the "Comparing Values" section. See Example 6 on p. 116.

4. For this type of problem discovery evaluation, you're not planning any type of comparison, so start with the decision map in Figure 1.4. You're not planning to estimate any parameters, such as task times or problem occurrence rates, so at "Estimating a Parameter?," take the "N" path. This leads you to "Problem Discovery Sample Size," which, according to Table 1.3, is discussed in Chapter 7 in the "Using a Probabilistic Model of Problem Discovery to Estimate Sample Sizes for Formative User Research" section on p. 143.
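The probabilistic model behind problem discovery sample sizes is 1 - (1 - p)^n: the chance of observing, at least once, a problem that affects proportion p of users when n participants are tested. Solving for n gives a quick planning sketch (an illustration of the model only, not the book's full Chapter 7 procedure):

```python
import math

def discovery_sample_size(p, goal=0.85):
    """Smallest n such that 1 - (1 - p)^n >= goal.

    Solves the discovery model for n: the number of participants needed
    to have probability `goal` of seeing, at least once, a problem that
    affects proportion p of users.
    """
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# Probability 0.80 of seeing a problem that affects 31% of users
print(discovery_sample_size(0.31, goal=0.80))  # → 5
```

With n = 5 and p = 0.31, the model gives 1 - 0.69^5, about 0.84, which is why small formative studies can be surprisingly effective at surfacing common problems.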

(Continues...)



Excerpted from Quantifying the User Experience by Jeff Sauro James R. Lewis Copyright © 2012 by Jeff Sauro and James R. Lewis. Excerpted by permission of Morgan Kaufmann. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


Table of Contents

Dedication
Acknowledgements
About the Authors
Chapter 1: Introduction and How to Use This Book
Chapter 2: Quantifying User Research
Chapter 3: How Precise Are Our Estimates? Confidence Intervals
Chapter 4: Did We Meet or Exceed Our Goal?
Chapter 5: Is There a Statistical Difference between Designs?
Chapter 6: What Sample Sizes Do We Need? Part 1: Summative Studies
Chapter 7: What Sample Sizes Do We Need? Part 2: Formative Studies
Chapter 8: Standard Usability Questionnaires
Chapter 9: Six Enduring Controversies in Measurement and Statistics
Chapter 10: Wrapping Up
Appendix
Index


Customer Reviews

Average Rating: 5 (1 review)
Rating Distribution
  • 5 Star (1)
  • 4 Star (0)
  • 3 Star (0)
  • 2 Star (0)
  • 1 Star (0)

  • Posted May 23, 2012

    VERY VERY HIGHLY RECOMMENDED!!

    Do you measure the behavior and attitudes of people as they interact with interfaces? If you do, then this book is for you! Authors Jeff Sauro and James R. Lewis have done an outstanding job of writing a book about working backwards from the most common questions and problems you'll encounter as you conduct, analyze, and report on user research projects. Sauro and Lewis begin by showing you how to quantify data from small sample sizes and use statistics to draw conclusions. In addition, the authors show you how to use confidence intervals around all point estimates to understand the most likely range of the unknown population mean or proportion. They then help you use the mid-probability from the binomial distribution to determine whether a certain percentage of users can complete a task, for small and large sample sizes. Next, the authors help you determine which statistical test to use, based on whether your outcome measure is binary or continuous and whether you have the same users in each group or a different set of users. They continue by showing you how to obtain a sample size estimation formula by taking the formula for the appropriate test and solving for n. In addition, the authors describe why the limited data available indicate that, even with the overestimation problem, the discrepancies between observed and expected numbers of problems are not large. They then describe 24 standardized questionnaires designed to assess perceptions of usability or related constructs. The authors also show you why you should use two-tailed testing for most user research. Finally, they discuss the most common issues that arise in user research. The primary purpose of this most excellent book is to provide a statistical resource for those who measure the behavior and attitudes of people as they interact with interfaces. Perhaps more importantly, as an aid to the persistent problem of remembering what method to use under what circumstances, this book contains decision maps to guide researchers to the appropriate method.
