Kirkpatrick’s learning evaluation model has been used for more than 50 years. In this post, find out more about each of the model’s four levels and how you can start applying them today.
What is the Kirkpatrick Learning Evaluation Model?
Published in 1959, Kirkpatrick’s Four Levels of Evaluation is one of the best-known and most widely used learning evaluation models. It uses four levels to evaluate the results of formal and informal training: reaction, learning, behavior, and results.
Kirkpatrick's 4 levels for training evaluation
The Kirkpatrick learning model encourages us to evaluate learning on four levels:
1) Reaction
What did learners feel about the learning experience? Did participants enjoy the training? Did they like the trainer?
This level is normally captured by surveys following the training.
2) Learning
Did learners actually learn anything? Did their knowledge and skills improve?
This level is normally captured by assessments at the end of the training, and sometimes also at the start so that knowledge before and after training can be compared. It’s worth noting that much elearning content is evaluated only up to Level 2.
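As a rough illustration of how a before-and-after comparison might work, here’s a minimal Python sketch; the learner names and scores are hypothetical, and in practice the assessment results would come from your LMS or LRS.

```python
# Hypothetical pre- and post-training assessment scores (percent correct) per learner.
pre_scores = {"learner_a": 55, "learner_b": 70, "learner_c": 40}
post_scores = {"learner_a": 85, "learner_b": 75, "learner_c": 80}

# Learning gain per learner: post-training score minus pre-training score.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print(f"Per-learner gains: {gains}")
print(f"Average gain: {average_gain:.1f} points")
```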
3) Behavior
Did learners actually do anything different as a result of the training? For example, if training was designed to encourage salespeople to discuss customers’ problems before proposing solutions, are the salespeople who completed the sales training following through?
This level is sometimes evaluated by surveying learners and/or their managers after the training. Often it is not measured at all.
4) Results
What was the effect of the training on the business as a whole? For example, has there been an increase in sales?
This level can only really be measured by looking at business data related to the training. The business typically captures this data, but it’s rarely compared alongside training data, and L&D departments may not even have access to it.
Even where the data is captured and accessible, the challenge at this level is isolating the impact of the learning experience from the many other factors that can affect the business metric.
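To make that comparison concrete, here’s a minimal sketch (not something the Kirkpatrick model itself prescribes) that compares average sales for employees who completed the training against those who didn’t; the figures and field names are hypothetical.

```python
# Hypothetical quarterly sales figures joined with training-completion records.
sales_records = [
    {"employee": "a", "completed_training": True,  "sales": 120_000},
    {"employee": "b", "completed_training": True,  "sales": 105_000},
    {"employee": "c", "completed_training": False, "sales": 90_000},
    {"employee": "d", "completed_training": False, "sales": 95_000},
]

def average_sales(records, completed):
    """Average sales for employees filtered by whether they completed the training."""
    values = [r["sales"] for r in records if r["completed_training"] == completed]
    return sum(values) / len(values)

trained = average_sales(sales_records, True)
untrained = average_sales(sales_records, False)
print(f"Trained: {trained:,.0f}  Untrained: {untrained:,.0f}  Difference: {trained - untrained:,.0f}")
```

A simple comparison like this can suggest a link, but on its own it can’t untangle the training’s contribution from factors such as territory, tenure, or market conditions.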
Pro Tip: Be sure to evaluate your learning program on each level so you can tell the best and most comprehensive story about its impact.
How should I use the Kirkpatrick Method?
The Kirkpatrick evaluation model is a useful and well-known starting point for learning evaluation. The lower levels (i.e., reaction and learning) are commonly used in learning and development (L&D).
The higher levels (i.e., behavior and results), however, are usually ignored in practice because they're often harder to evaluate.
But knowing the impact on learner behavior and business results is crucial. In fact, research suggests that learner reaction is a poor indicator of whether learners actually learned anything or whether their behavior will change.
Of course, that doesn't mean you should dismiss the lower levels of the Kirkpatrick evaluation model. They provide early warning signs of problems with your learning program.
You need all four levels to tell the story and, if your program wasn't successful, identify areas for improvement.
A common criticism of Kirkpatrick isn’t the model itself, but how it’s applied in practice. Organizations generally have processes for evaluating at Levels 1 and 2, but then either don’t get around to or aren’t able to evaluate Levels 3 and 4.
However, xAPI makes it easier to evaluate training at all four levels, especially Levels 3 and 4. xAPI records learner behavior by:
- integrating xAPI directly into business systems to record learner activity, or
- providing mechanisms for learners to record and reflect on their performances.
Some organizations, for example, give learners mobile apps to photograph or video their work to be assessed by supervisors or mentors. These assessments can then be compared to data from the learning experiences themselves to measure the effectiveness of the experiences.
Business systems also contain data about the impact on the business, which xAPI can pull into a Learning Record Store (LRS) alongside data about the learning experience and other evaluation data.
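As a rough sketch of what that capture might look like, here’s a minimal example that sends a single xAPI statement to an LRS from Python; the endpoint, credentials, and activity IDs are hypothetical placeholders rather than any specific vendor’s API.

```python
import requests  # third-party HTTP library

# Hypothetical LRS endpoint and credentials; replace with your own LRS details.
LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
LRS_AUTH = ("username", "password")

# A minimal xAPI statement recording an on-the-job observation (Level 3 behavior data).
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/customer-needs-conversation",
        "definition": {"name": {"en-US": "Customer needs conversation observed by mentor"}},
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
```

The same pattern can push business data (e.g., a closed sale recorded in a CRM) into the LRS as statements, so Level 3 and Level 4 data can sit alongside Level 1 and 2 data for analysis.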
A Kirkpatrick Criticism: Just Level 4?
Another criticism of Kirkpatrick is that Levels 1 to 3 are simply irrelevant. Investments in learning and development are (or should be) intended to drive positive business results.
Therefore, the impact on business key performance indicators (KPIs) is all that needs to be measured.
If the business goal is achieved, why does it matter what employees learned or how their behavior changed? Presumably, employees learned and did what we wanted them to, right?
Perhaps. Or maybe something else led to the observed business results. Without the data from Levels 1 to 3, there's no way to tell the whole story and fully understand how the end result was achieved.
Data from Levels 1 to 3 is critical when the desired business result isn't achieved because it helps pinpoint and analyze the training elements that need to be changed.
For example, perhaps the training successfully changed behavior and resulted in the sales team focusing on customers' challenges, but it didn't result in increased sales.
This finding challenges the assumption that focusing on customers' problems before proposing solutions was the desired behavior.
Based solely on Level 4 data, we might have assumed that the training had failed to change behavior. In reality, the training worked, but the targeted behavior didn't deliver the expected result.
Is Kirkpatrick Level 1 meaningless?
Some critics have cited evidence that there's little to no correlation between what learners think of the learning experience (Level 1) and their learning or behaviors (Levels 2 and 3).
For instance, learners may dislike the learning experience but still benefit from it. Or, they may love the time away from their everyday work routines but learn nothing from the experience. This is a significant criticism that underlines the point that Level 1 evaluation on its own isn't effective.
While Level 1 evaluation is less critical than the higher levels, it has one important advantage: timeliness. You can respond to data about a poor learning experience right away, whereas the business impact from a learning intervention will take some time to manifest.
Even behavior and learning are difficult to measure immediately, especially if we want to ensure that the learning has stuck (remember the forgetting curve).
Getting immediate feedback from the learner right after the experience (or even during it) is the best way to quickly identify challenges with the learning solution.
For instance, these might include a broken website, a trainer who failed to arrive, or any other obstacle in the way of a learning solution's intended impact. It could also be something more minor, such as a technical problem with a specific learning interaction or an ineffective trainer.
The key here is collecting, reviewing, and acting on the feedback as quickly as possible. This approach means monitoring learner feedback as it comes in and providing channels for learners to communicate with you at any time—not just at the end of a particular learning experience.
Kirkpatrick Level Zero
The Kirkpatrick Model assumes that we've already implemented a learning solution and now want to know if it's effective. Yet, most learning—perhaps as much as 90 percent—doesn't happen during formal learning solutions.
That's why many organizations are eager to discover the learning within their organizations and then evaluate the resulting impacts.
We could consider discovering "what learning experiences are occurring" as a level below "what learners feel about the experience" (i.e., Level Zero).
Understanding these informal learning events and their impacts on behavior and business results is essential. After all, if we don't know these events are happening, then we can't influence them.
Although we can't force workplace and social learning, we can shape, encourage, and discourage learning (i.e., not all learning is positive; bad habits are learned). The first step is understanding these learning experiences.
What else should I know?
Much of the learning that happens in your organization isn't initiated by the L&D department, so be sure to consider informal and operational learning in your evaluation.
Learners tend to forget information over time, so monitor your metrics continuously to evaluate the long-term impact of your program and any reminder elements you’ve built in.
Kirkpatrick model forgets remembering
Other important aspects missing from Kirkpatrick are the retention of learning and the persistence of behavioral change. The model says nothing about measuring the four levels on an ongoing basis over time.
Perhaps the training does have an initial impact, but then the learning is forgotten, and the impact fades. Learning solutions that include refreshers to enhance retention are generally more effective than a "one and done" approach. And evaluations must be ongoing and measure the lasting impact.
Up Next: Kaufman’s levels of learning evaluation
We'll be covering Kaufman’s learning evaluation method in our next blog post. Subscribe to Watershed Insights so you can get the latest posts delivered right to your inbox.
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.