ISBN-10: 0123748925
ISBN-13: 9780123748928
Pub. Date: 01/15/2010
Publisher: Elsevier Science
Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies


Original price: $58.95

Temporarily Out of Stock Online

Please check back later for updated availability.

Product Details

ISBN-13: 9780123748928
Publisher: Elsevier Science
Publication date: 01/15/2010
Edition description: New Edition
Pages: 328
Sales rank: 1,225,384
Product dimensions: 7.40(w) x 9.10(h) x 0.90(d)

About the Author

Bill Albert is Director of the Design and Usability Center at Bentley University. Prior to joining Bentley, Bill was Director of User Experience at Fidelity Investments, Senior User Interface Researcher at Lycos, and Post-Doctoral Research Scientist at Nissan Cambridge Basic Research. Bill is an Adjunct Professor in Human Factors in Information Design at Bentley University and a frequent instructor at the International Usability Professionals’ Association Annual Conference. Bill has published and presented his research at more than thirty national and international conferences. He is coauthor (with Tom Tullis) of Measuring the User Experience and Beyond the Usability Lab. He is on the editorial board of the Journal of Usability Studies.

Tom Tullis is Vice President of Usability and User Insight at Fidelity Investments and Adjunct Professor at Bentley University in the Human Factors in Information Design program. He joined Fidelity in 1993 and was instrumental in the development of the company’s usability department, including a state-of-the-art Usability Lab. Prior to joining Fidelity, he held positions at Canon Information Systems, McDonnell Douglas, Unisys Corporation, and Bell Laboratories. He and Fidelity’s usability team have been featured in a number of publications, including Newsweek, Business 2.0, Money, The Boston Globe, The Wall Street Journal, and The New York Times.

Donna Tedesco is a Senior User Experience Specialist with over ten years of user research experience. She has published and presented at local, national, and international conferences, and is co-author, with Bill Albert and Tom Tullis, of the book "Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies." Donna received a BS in Engineering Psychology/Human Factors from Tufts University School of Engineering and an MS in Human Factors in Information Design from Bentley University.

Table of Contents

1. Introduction to online usability methods
a. What is online usability, and how it differs from traditional usability methods
b. Examples of different types of online usability studies
c. Pros and cons of online and non-online methods
d. When to use (and not use) online methods
e. Combining online studies with lab testing

Chapter 1 provides an overview of online usability testing. Special attention will be paid to how it differs from traditional usability methods (including remote testing). There will be an in-depth discussion of the pros and cons of online testing, and when to use and not use online methods. We will provide real-world examples to highlight the value of this method, and we will also discuss ways to complement traditional usability testing with online testing. Our intention is that the reader will be in a position to determine whether an online usability study is appropriate for their organization.

2. Planning your study
a. Study goals
b. Budgets and timeline
c. Technology options
d. Participant recruiting and panels
e. Sample size
f. Panel options
g. Sampling strategy
h. Study duration
i. Participant compensation

Chapter 2 focuses on all the activities and decisions that need to be made prior to actually putting the survey together. The first three activities (goals, budgets/timelines, and technology options) are all essential to accurately scoping an online study. The next part of this chapter focuses on finding the right number of targeted participants, including a discussion of research panels, sample size determination, and sampling strategies. The chapter will conclude with a discussion of estimating study duration and participant compensation.
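As a rough illustration of the kind of sample-size arithmetic involved (a common confidence-interval approach for a proportion such as task success rate, not necessarily the method the book recommends), a quick estimate might look like this in JavaScript:

// Rough sample-size estimate for a proportion (e.g., expected task success rate).
// z: z-score for the desired confidence level (1.96 for roughly 95%)
// p: expected proportion (0.5 is the most conservative guess)
// margin: acceptable margin of error (e.g., 0.05 for +/- 5 percentage points)
function sampleSizeForProportion(z, p, margin) {
  return Math.ceil((z * z * p * (1 - p)) / (margin * margin));
}

console.log(sampleSizeForProportion(1.96, 0.5, 0.05)); // 385 participants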

3. Designing your study
a. Introducing the survey
b. Screener questions
c. Starter questions
d. Constructing tasks
e. Post-task questions and metrics
f. Post-session questions and metrics
g. Branching
h. Progress indicators and navigation
i. Speed traps
j. Question types

Chapter 3 is devoted to developing the study design. The first part of the chapter covers the sections that are typically included in an online usability study; for each section, we will review best practices and common pitfalls, with the aim of giving the reader the confidence to put together an effective online study. The last part of the chapter deals with common techniques that are used in various parts of a study, including branching, progress indicators and navigation, speed traps, and question types.
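To make the branching idea concrete, here is a minimal, hypothetical JavaScript sketch of routing participants based on screener answers; the field and page names are invented for illustration, and commercial tools (or the do-it-yourself approach in Chapter 7) handle this differently:

// Minimal sketch of screener branching; names are hypothetical, not from the book.
function nextPage(screenerAnswers) {
  // Disqualify participants who have never used the product being tested.
  if (screenerAnswers.usedProductBefore === "never") {
    return "thankYouDisqualified";
  }
  // Route frequent users to an advanced task set, everyone else to the basic set.
  return screenerAnswers.usageFrequency === "daily" ? "advancedTasks" : "basicTasks";
}

console.log(nextPage({ usedProductBefore: "never" }));                         // "thankYouDisqualified"
console.log(nextPage({ usedProductBefore: "yes", usageFrequency: "daily" }));  // "advancedTasks"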

4. Launching your study
a. Piloting and validating
b. Timing the launch
c. Phased launches
d. Monitoring results

Chapter 4 deals with issues around launching an online study, covering all the activities that happen after a study has been developed until the final data are available. This chapter discusses how to set up a pilot test and validate the study, how to time a launch to maximize participation and the quality of results, and how to run phased launches. The chapter concludes with a discussion of how to monitor results, including both participation rates and data quality.

5. Data preparation
a. Fraudulent participants
b. Consistency checks
c. Data reliability
d. Outliers
e. Recoding variables

Chapter 5 will help the reader prepare their data for the analysis stage. There are some very important activities that must take place prior to data analysis to ensure valid results. Topics in this chapter will include identifying fraudulent participants, running consistency checks on participant responses, and identifying outliers in the data that may need to be removed from the analysis. The chapter will conclude with a brief discussion of how to recode variables so that they are most useful in the analysis stage.
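As one illustrative example of this kind of data preparation (not the book's specific procedure), task times could be flagged as outliers with a simple mean plus-or-minus three standard deviations rule before analysis:

// Illustrative sketch: flag task times more than 3 standard deviations from the mean.
function flagOutliers(times) {
  const mean = times.reduce((sum, t) => sum + t, 0) / times.length;
  const sd = Math.sqrt(
    times.reduce((sum, t) => sum + (t - mean) ** 2, 0) / (times.length - 1)
  );
  return times.map((t) => ({ time: t, outlier: Math.abs(t - mean) > 3 * sd }));
}

// Task times in seconds; any flagged value would be reviewed and possibly removed.
console.log(flagOutliers([42, 55, 61, 48, 900, 52]));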

6. Data analysis and presentation
a. Verbatim responses
b. Task-based metrics
c. Segmentation analysis
d. Post-session analysis
e. Behavioral data
f. Combining data
g. Identifying usability issues
h. Presentation tips

Chapter 6 covers all the information the reader will need to know about how to analyze and present data derived from an online study. Each section of this chapter covers one type of data that is typically captured in an online study. Verbatim analysis focuses on how to derive meaningful and reliable findings from open-ended responses. Task-based metrics include success, completion times, and ease-of-use ratings. Segmentation analysis includes ways to identify how distinct groups performed and reacted differently. Post-session analysis involves looking at metrics such as SUS scores, overall satisfaction and expectations, and ease-of-use ratings. Behavioral data analysis includes metrics such as click paths, page views, and time spent on each page. Combining data from more than one metric is a very important step in the analysis. Methods for identifying usability issues from all the data will be described and examples given. This chapter will be very practically oriented, giving step-by-step directions on how to perform each type of analysis, and many examples will demonstrate different ways to present the results.
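For example, SUS scoring follows the standard formula: odd-numbered items score the rating minus 1, even-numbered items score 5 minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score. A small JavaScript sketch of that calculation, shown only to illustrate one of the post-session metrics the chapter covers:

// Standard SUS scoring for ten items rated 1-5.
function susScore(ratings) {
  if (ratings.length !== 10) throw new Error("SUS requires exactly 10 ratings");
  const sum = ratings.reduce((total, rating, i) => {
    // i = 0 corresponds to item 1 (odd-numbered item).
    const itemScore = i % 2 === 0 ? rating - 1 : 5 - rating;
    return total + itemScore;
  }, 0);
  return sum * 2.5;
}

console.log(susScore([4, 2, 5, 1, 4, 2, 5, 1, 4, 2])); // 85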

7. Building your own online study
a. Approaches to creating your own online study
b. Presenting tasks and prototypes
c. Capturing task completion status
d. Capturing task time data
e. Capturing self-reported data
f. Examples

Chapter 7 shows readers how to create relatively simple online studies themselves. Approaches to presenting tasks and prototypes will be described, as will techniques for collecting task success, times, and various kinds of self-reported data, including rating scales, open-ended questions, and the System Usability Scale (SUS). While some examples of HTML and JavaScript will be shown, we will describe them in such a way that even someone new to those technologies could understand and use them. Complete examples will be shown that readers could easily adapt. Code samples will also be provided on a companion website.
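As a minimal sketch of the kind of technique the chapter describes (the names here are hypothetical, not the book's actual code, which lives on the companion website), task time and completion status could be captured in the browser like this:

// Minimal sketch of capturing task time and completion status in the browser.
let taskStart = null;

function startTask() {
  taskStart = Date.now(); // record when the participant begins the task
}

function endTask(success) {
  const seconds = (Date.now() - taskStart) / 1000;
  // In a real study this would be posted to a server or written to a hidden form field.
  console.log({ success: success, taskTimeSeconds: seconds });
}

// Example wiring to "Start task", "Success", and "Give up" buttons:
// document.getElementById("start").onclick = startTask;
// document.getElementById("done").onclick = () => endTask(true);
// document.getElementById("giveUp").onclick = () => endTask(false);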

8. Online solutions
a. Keynote
b. RelevantView
c. User Zoom
d. MindCanvas
e. Survey Monkey
f. Opinion Lab
g. ACSI
h. Others

Chapter 8 reviews the common tools that can be used for running online testing. While the 'do-it-yourself' reader may want to use the techniques described in Chapter 7, others may want to use a commercial tool like those described in this chapter. Most of the chapter will be devoted to the tools that are used most often to collect behavioral data, such as Keynote, RelevantView, and User Zoom. There will also be a discussion of online tools that do not collect performance data, such as Survey Monkey, ACSI, and Opinion Lab. Comparisons of the tools, including what kinds of data can be collected with each, will be included. The chapter will conclude with a brief discussion of other possible solutions, such as agencies that specialize in online testing. Readers will also be referred to our companion website to keep up with updates and emerging software solutions.

9. Ten tips for a successful online study
a. Planning for metrics
b. Deciding on the right tool
c. Choosing the right participants
d. Writing clear tasks
e. Piloting your study
f. Checking data
g. Comparing to other data sources
h. Being creative with the data
i. Allowing enough time for analysis
j. Presenting only the top-line results

Chapter 9 provides a summary of some of the key points made throughout the book. This summary will be in the form of the top ten tips that someone should know when conducting their own online study. These tips will be very practical in nature.
