At last, an answer to the question that has bedeviled trainers for decades. Predictive evaluation enables you to effectively and accurately forecast training's value to your company, measure against these predictions, establish indicators to track your progress, make midcourse corrections, and report the results in a language that business executives respond to and understand.
Dave Basarab explains how to begin by identifying the specific goals and beliefs you want to instill in participants. The next step is to determine exactly what these will look like when put into action. Finally you develop quantifiable measures of how employees' adopting the target beliefs and goals will impact the business. A key strength of this process is that it is profoundly collaborative-supervisors and employees work together to establish standards for success each step of the way. A how-to guide filled with worksheets, examples, and other tools, Predictive Evaluation ensures that, rather than being regarded as an expense and an act of faith, training will be seen as an investment with a concrete payoff.
Publisher: Berrett-Koehler Publishers, Inc.
Product dimensions: 6.10(w) x 9.36(h) x 0.45(d)
About the Author
Dave Basarab is the founder of V.A.L.E. Consulting. In 2005 he was awarded the Learning Innovation Award from Chief Learning Officer magazine. He is also the coauthor of The Training Evaluation Process.
Read an Excerpt
Predictive Evaluation: Ensuring Training Delivers Business and Organizational Results
By Dave Basarab
Berrett-Koehler Publishers, Inc. Copyright © 2011 David Basarab
All rights reserved.
Chapter One: The Predictive Evaluation Approach
ASTD Study Shows Training Evaluation Efforts Could Be Enhanced
"The Value of Evaluation: Making Training Evaluations More Effective," a 2009 ASTD Research Study, shows that companies struggle to evaluate whether their programs meet the business needs of their organizations and whether they are meaningful to employees and business leaders. It also points out that the Kirkpatrick model is the most widely used evaluation approach, followed by Brinkerhoff's success case method; Jack Phillips's ROI (return on investment) model is also employed. I was not surprised that the report stated, "The least likely metric is actual business outcomes, which nearly a quarter of respondents said they do not measure at all."
You may wonder why they don't measure business outcomes. The report states that the most common barriers are these:
1. It is too difficult to isolate training's impact on results versus the impact of other factors.
2. Evaluation is not standardized enough to compare well across functions.
It paints an interesting picture of training evaluation in the United States, does it not?
What it tells me is that training evaluation's time has come and that it needs to be implemented like any other business measurement function (Marketing, Finance, Customer Loyalty, and Service Quality). That is, sound business practice dictates that training collect data to judge progress toward meeting the organization's strategies and annual/multi-year operating plans; predict (forecast) training's contribution to those plans; collect data early and often against defined success gates; and implement mid-course corrections to address discrepancies and take advantage of new insights and opportunities. A company might make a mid-course correction because something is working very well and deserves more effort or resources.
Although current efforts to evaluate the impact of training do provide data, these data usually offer little insight on what corrections are needed in order to meet goals.
Predictive Evaluation: A New Approach
Predictive Evaluation (PE) is a new approach that provides compelling training data to executives, including (1) predicting success of training in the three areas of Intention, Adoption, and Impact and measuring to see if success has been achieved; (2) leading indicators of future adoption (transfer of learning) and Impact (business results); and (3) making recommendations for continuous improvement. PE has two major components: predicting, which is before-the-fact, to decide whether to train, and evaluating, which is an after-the-fact measurement against the predictions. The beauty of PE is that it uses leading measures (Intention and Adoption) as a signal of results (Impact). If the leading indicators are below predicted success gates, actions can be implemented to "right-the-ship" so that the desired results are realized.
Predictive Evaluation Benefits
What are the benefits of PE? You now can predict (forecast) training's value to the company, measure against those predictions, use leading indicators to ensure that you are on track, and report in a business format that executives easily understand. You can interweave outcomes and leading indicators into training during the design and delivery and move from an event-driven function to one that predicts success, measures its performance against those predictions, and is seen as returning significant shareholder value for the funds invested.
However, the greatest strength of the PE approach is not how it is communicated to executives, or the tools, or the results, but rather how it requires the participation of supervisors and employees in setting their own intentions and measuring their own adoption. The approach treats employees as adults who own their learning, rather than as students checking off a class from a list and being measured by someone else.
The key components of the approach are the training program, training outcomes, prediction of value, Intention (to use), Adoption (actual use), and Impact (the results to the company). The following sections provide an overview of the approach and each of its key components. Detailed descriptions and guidance are given in Chapters 2, 3, 4, and 5.
Where to Start
The PE approach starts with an existing training program or one that is on the drawing board. In other words, PE works for both existing courses and new ones in the production queue. PE is independent of course delivery—it works equally well for classroom-based training, on-the-job training, online learning, simulations, workshops, etc. The approach works with different content—PE has been conducted on Leadership Training, Sales Training, Business Management Training, and Basic Management Training. PE is also independent of audience and has been used for groups from senior executives to hourly employees. Finally, PE can be employed for courses that are developed and delivered in-house (those where the company has internal personnel create and deliver the training) or outsourced courses (those purchased from external vendors to meet a company's training need).
To begin a PE on an existing course, you need to obtain and review instructional design documents (if they were created), course materials (participant and instructor), any existing evaluation data (such as Level 1 evaluation survey results), budget (actual expenses and projected expenses), number and types of employees already trained, and the number of employees who need training in the future. This is only the starting point—you can gather other information, such as opinions from participants, their supervisors, suppliers, instructors, and executives who sponsor the training. The purpose is to thoroughly understand and describe the object being evaluated (the training course). Once you understand the course, you can begin the predictive portion of PE.
But the best place to start a PE is on a course that is still on the drawing board. PE does not replace the instructional design process; rather, it enters as a component to ensure that the training design creates the value the company needs. In many instructional design processes, Evaluate is the final stage. PE starts before the course design is finalized, using its predictive components, and serves as an input/requirement for the final training design.
Typically, the predictive portion of PE begins for new courses when the Analysis and Design phases are completed. In the Analysis phase, the instructional problem is clarified, the instructional goals and objectives are established, and the learning environment and participants' existing knowledge and skills are identified. The Design phase deals with learning objectives, assessment instruments, exercises, content, subject matter analysis, lesson planning, and media selection.
Whether the course to be evaluated currently exists or is still under design, the PE approach makes the assumption that training programs are designed to provide participants with the following benefits:
Knowledge: either new knowledge or a refresher of current knowledge
Skills: new or improved techniques for getting work done
Beliefs: the idea that the participants and/or their company can benefit from using the new knowledge and skills
Behaviors: on-the-job practices and habits that shift or are adopted to improve actions and thinking that impact the business
Predictive Evaluation Framework
So let's look at the framework and premise that PE is based on. Before, during, and after training, participants have some level of motivation to use what they have learned. I refer to this as "intentions." Intentions can be strong or weak. On the basis of their intentions, participants "adopt" (apply) the new skills as part of their work behavior (routine). Adopted behaviors practiced over time (repetition) produce results (an "impact") for the business. The magnitude and value of the results are affected by all three factors: (1) Intention, (2) Adoption, and (3) Impact. Using this as a model of employee learning and performance, you can predict Intention, Adoption, and, finally, Impact. But before doing that, you need to understand the three PE elements (Intention, Adoption, and Impact).
An Intention Evaluation (IE) addresses the following question: Are participant goals and beliefs upon course completion aligned with desired goals? Intentions are the goals that participants wish to achieve using the knowledge and skills they learned in training and supported by their beliefs. This is the first evaluation focus point, because there is little or no adoption or business impact if participants have little or no intent to use the training. Intention Evaluation involves judging participant-authored goals against a predefined standard. If participant goals meet the standard, those goals are labeled as acceptable. Goals that do not meet the standard are labeled as unacceptable.
An Intention Success Gate, which is the percentage of acceptable goals, is predicted (e.g., 90 percent). Intention data are collected from participants via a goal planning sheet during training and submitted, after course completion, to the evaluator, who judges each goal as acceptable or unacceptable. This in turn creates an Intention Score (percentage of goals judged acceptable). When the Intention Score meets or exceeds the Success Gate, the course is deemed successful (in creating the proper Intentions). If the Intention Score is below the gate, an analysis of why and of what can be done to improve results is undertaken. Intention data are leading indicators of Adoption (transferring knowledge and skills from training to the job). When Intention Scores meet Success Gate standards, there is a higher likelihood that the goals will be adopted.
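The Intention Score arithmetic described above can be sketched in a few lines of code. This is an illustrative example only; the function names, data shapes, and sample figures are assumptions, not part of the book's method.

```python
# Hypothetical sketch of the Intention Score calculation: the percentage of
# participant-authored goals judged "acceptable," compared to a predicted
# Success Gate. Names and sample data are illustrative assumptions.

def intention_score(judgments):
    """Return the percentage of goals judged acceptable."""
    if not judgments:
        return 0.0
    acceptable = sum(1 for j in judgments if j == "acceptable")
    return 100.0 * acceptable / len(judgments)

def meets_success_gate(score, gate=90.0):
    """True when the score meets or exceeds the predicted Success Gate."""
    return score >= gate

# Example: 50 goals submitted, 45 judged acceptable.
judgments = ["acceptable"] * 45 + ["unacceptable"] * 5
score = intention_score(judgments)
print(score, meets_success_gate(score))  # 90.0 True
```

With a 90 percent gate, this delivery just meets the standard; a score below the gate would trigger the analysis of causes and corrective actions the text describes.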
The following are some questions and things to consider about Intention goals.
Are these goals supposed to be based on the goals developed by the stakeholders or program owners? Answer: yes.
Do the participants get to use those as a basis for developing their own goals? Answer: it is a good idea to share sample goals with participants so that they see how to construct a good goal. It also may stir their thinking about what they want to do.
Why do they author their own? Why don't they just use the goals designed by the stakeholders and write out how to implement them in their own area? Answer: by authoring goals in their own words, participants create their personal action plans. This is a way of determining how committed they are to implementing the skills necessary to drive Adoption, which leads to predicted Impact. If you give them the list of stakeholder goals, you are testing their ability to choose rather than their understanding of what it takes to perform the work.
Can instructors help them with their goals? Answer: absolutely. A best practice is to have the participant draft the goal(s), have it reviewed by the instructor, and then finalize it. Some courses have participants share goals with each other, giving and receiving feedback to make the goals better.
What if the goals they come up with are completely different from the designer's intentions? Answer: some analysis needs to be conducted to determine why this has happened. Typical causes are (1) the course teaches the wrong thing, (2) the course does not teach it well enough, (3) the participant is not from the target population and would have difficulty writing a good goal, or (4) the participants are weak goal writers. Once the causes are identified, corrective actions can be implemented to eliminate or greatly reduce them.
The following are a few examples of well-written Intention goals:
Show a more positive attitude, because I tend to be a grump the first couple of hours in the morning. I will smile and thank my team for their input. I will also ask open-ended questions during our daily morning meeting. The outcome I expect is that my team members will be motivated/happy to do their work and feel that they have a sense of accomplishment.
I will be more patient in everyday tasks and when working with my coworkers and other departments by being open to new ideas, asking open-ended questions, listening, and using a positive attitude. The outcome I expect is to have a fun work environment and to show people that it is great to speak up about concerns.
Make sure that at least once a week I give the sales and the customer service teams a chance to hold a briefing. I will mentor them on how to hold a top-notch briefing and give them feedback. The outcome I expect is for them to become more involved and give input and to build their confidence, resulting in increased sales.
Beliefs are defined as "the idea that the participants and/or their company will benefit from using the new knowledge and skills." Belief data are also captured during goal creation but need not be associated with a specific goal. Beliefs are derived from the course design and/or content and are answers to the question: What do our employees need to believe in order to successfully transfer training skills to the job? The following are a few belief examples:
When leading people, my attitude makes a difference.
I have a voice and can make a difference.
I own the customer experience.
Values drive results.
A fun workplace drives productivity.
Participant belief data are captured on the goal planning sheet by having participants rate how meaningful the beliefs are to them. Typically a 7-point semantic differential scale is used, where 1 = Meaningless and 7 = Meaningful. As with goals, a Success Gate for beliefs is predicted, for example, 90 percent of the participants will rate beliefs as "top box" (a 6 or 7 on the 7-point scale). If the Success Gate is achieved, the course is successful from a Belief Evaluation standpoint. If results are below the gate, the course is viewed as unsuccessful and investigation and/or corrective actions are undertaken.
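The "top box" belief measurement described above is a simple percentage calculation. The sketch below is an illustrative assumption of how it might be computed; the function name and the sample ratings are invented for this example, not taken from the book.

```python
# Hypothetical sketch of the belief "top box" calculation: the percentage
# of participants rating a belief 6 or 7 on the 7-point scale
# (1 = Meaningless, 7 = Meaningful), compared to a predicted Success Gate.

def top_box_percentage(ratings, top_box=(6, 7)):
    """Return the percentage of ratings falling in the top box (6 or 7)."""
    if not ratings:
        return 0.0
    return 100.0 * sum(1 for r in ratings if r in top_box) / len(ratings)

# Example: ten participants rate one belief; nine give a 6 or 7.
ratings = [7, 6, 7, 5, 6, 7, 6, 7, 6, 7]
pct = top_box_percentage(ratings)
print(pct, pct >= 90.0)  # 90.0 True
```

Against a 90 percent Success Gate, this belief just meets the standard; a lower top-box percentage would mark the course as unsuccessful on this belief and prompt investigation.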
When the Intention Scores for both goals and beliefs meet their respective Success Gates, the entire course (for that delivery) is deemed to have met Intention predictions: it is classified as successful. When either of the two Success Gates (Intention or Beliefs) fails to be met, the course (for that delivery) is viewed as unsuccessful.
Corrective actions on Intention results include course redesign, instructor improvement, making sure participants are from the target population, etc. For the participants who just completed the course and whose goals are below standard, you can work with them one-on-one or in small groups to author the right goals. You can even have their supervisors meet with them to "beef up" the goals so that they are pointed in the proper direction (for performance and Adoption).
An Adoption Evaluation addresses the following question: How much of the training has been implemented on the job and successfully integrated into the participant's work behavior? An Adoption Evaluation analyzes participant performance (behaviors and actions that the employee has transferred to the job) and participant goal completion rate against a defined Adoption Success Gate (percentage of employees performing as predicted). A set of on-the-job adoptive behaviors is developed from the course design or material and from the Intention goal and belief statements. A few examples of adoption behaviors are the following:
Model a positive attitude by relating to coworkers as to what is currently going on with their problems and reward their positive attitude.
Provide positive feedback when contacting my employees and providing recognition on sales milestones.
Obtain and enhance Voice of Customer (VOC) intelligence for existing and potential customers.
Estimate revenue and operating income for the annual and long-range business plans.
Excerpted from Predictive Evaluation by Dave Basarab Copyright © 2011 by David Basarab. Excerpted by permission of Berrett-Koehler Publishers, Inc.. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Introduction: An Innovative Method of Training Evaluation
Chapter 1 The Predictive Evaluation Approach
Chapter 2 Predicting Training's Value
Chapter 3 Intention Evaluation
Chapter 4 Adoption Evaluation
Chapter 5 Impact Evaluation
Chapter 6 How to Get Started: Implementing Predictive Evaluation
Bibliography and Recommended Readings
About the Author