Kaufman's Levels of Learning Evaluation is one of several training evaluation models that build on or react to Kirkpatrick's model. This blog post explores the Kaufman method, including the main differences that set it apart from Kirkpatrick's evaluation method.
What is Kaufman's Five Levels of Evaluation Model?
Developed by Roger Kaufman, this learning evaluation model is both a reaction to and a development of the Kirkpatrick Model's four levels of evaluation. Where Kirkpatrick divides evaluation by the type of impact, mainly on the learner, Kaufman's method divides it by the group affected.
Kaufman’s main developments from Kirkpatrick are:
- the splitting of Level 1 into input and process (or Level 1a and Level 1b),
- the grouping of Levels 2 and 3 under the “micro” level, and
- the addition of a fifth level, mega.
Kaufman also sees Kirkpatrick's model as restricted to training delivery, whereas his own model considers both delivery and learning impact. One interpretation of Kaufman's levels is summarized in the following table, alongside the corresponding Kirkpatrick levels.
(Note: This is not how Kaufman presents the final form of his five levels. We’ll explain why later.)
Kaufman's Level | Kirkpatrick Equivalent | Explanation of Kaufman's Level |
---|---|---|
Input | 1a | Resource availability and quality. These are training materials, digital resources, etc., used to support the learning experience. |
Process | 1b | Process acceptability and efficiency. This is the actual delivery of the learning experience. |
Micro | 2 and 3 | Individual and small group payoffs. This is the result for the “micro-level client” (normally the learner). Did the learner “acquire” the learning? Did he or she apply it on the job? |
Macro | 4 | Organizational payoffs. This is the result for the “macro-level client,” the organization, and includes evaluation of performance improvement and cost benefit/cost consequence analysis. |
Mega | n/a | Societal contributions. This is the result for the “mega-level client,” either society as a whole or a company’s clientele. |
Kaufman adds a fifth level, referred to as "mega," that looks at the benefits to society as a whole and to a business' clients. This approach contrasts with Kirkpatrick's evaluation levels, which look only at benefits to the organization itself.
Kirkpatrick's Level 1 divided: Input and Process
The division of Kirkpatrick's Level 1 into Input and Process is perhaps the most practical and valuable of Kaufman's suggestions.
In a world that allows quick and easy access to websites such as Google, Wikipedia, and YouTube, the availability and quality of web-based resources are increasingly essential evaluation factors.
Different types of questions need to be asked when evaluating resource availability versus delivery, so it’s helpful to think about them separately.
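To make that separation concrete, here's a minimal sketch (ours, not Kaufman's) of keeping input and process feedback apart when collecting data. The questions, ratings, and 1–5 scale are illustrative assumptions:

```python
# Minimal sketch: keep input (1a) and process (1b) feedback separate,
# so a problem with resources isn't masked by strong delivery (or vice
# versa). Questions, ratings, and the 1-5 scale are hypothetical.

questions = {
    "input (1a)": [
        "Were the training materials available when you needed them?",
        "Were the digital resources accurate and up to date?",
    ],
    "process (1b)": [
        "Was the session delivered at an appropriate pace?",
        "Did the delivery format suit the content?",
    ],
}

# Hypothetical 1-5 ratings collected for each level.
ratings = {"input (1a)": [4, 5, 3], "process (1b)": [2, 3, 2]}

for level in questions:
    avg = sum(ratings[level]) / len(ratings[level])
    print(f"{level}: average rating {avg:.1f}")
```

Scoring each level on its own makes it obvious, in this invented example, that the resources are fine but the delivery needs attention.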
Focusing on resource availability is similar in spirit to our suggested introduction of a Level Zero to Kirkpatrick: evaluating any informal learning that's happening socially or in the workplace. It's important to consider all available resources, not just those formally created within the organization.
Kaufman also replaces Kirkpatrick's measure of learner satisfaction with the learning experience, instead looking directly at the learning resources and their delivery. Helpfully, Kaufman recognizes that, while input from learners is important when evaluating these elements, it's not the only source of data.
Micro-level training evaluation
The grouping of Kirkpatrick’s Levels 2 and 3 is less helpful, as learning and job performance can and should be evaluated separately. While we can’t see inside the learner’s brain, good assessments and simulations can capture data about learning.
We can then track job performance to evaluate whether that learning has been correctly applied in the workplace.
This evaluation data is crucial because it will determine the best way to resolve any issues. For example, the solutions for learners failing to apply their learning in the workplace differ from those for learners failing to learn in the first place.
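Here's a minimal sketch of that decision logic (our illustration, not part of Kaufman's or Kirkpatrick's models). The thresholds and field names are hypothetical:

```python
# Minimal sketch: separate Level 2 (learning) and Level 3 (application)
# data point to different remedies. The 0.7 threshold and both score
# names are hypothetical, chosen only to illustrate the decision logic.

PASS_MARK = 0.7

def diagnose(assessment_score: float, on_job_performance: float) -> str:
    """Suggest a remedy based on where the breakdown occurs."""
    if assessment_score < PASS_MARK:
        # Learning never happened: revisit content and instruction.
        return "redesign the learning experience"
    if on_job_performance < PASS_MARK:
        # Learned but not applied: look at workplace barriers,
        # manager support, and opportunities to practice.
        return "address barriers to application in the workplace"
    return "no intervention needed"

print(diagnose(0.5, 0.4))  # learning failure
print(diagnose(0.9, 0.4))  # application failure
```

Collapsing the two measures into one "micro" score would hide which of these two very different problems you actually have.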
Six levels of learning evaluation?
In Kaufman’s final presentation of his five levels of evaluation, he attempts to mirror Kirkpatrick’s levels, presumably to cater to those familiar with Kirkpatrick.
This approach results in Kaufman keeping Input and Process together as Levels 1a and 1b of his model. At the same time, he keeps Kirkpatrick’s Levels 2 and 3 separate but titles them both “micro-level.” This attempt at continuity with Kirkpatrick is understandable but confusing.
Therefore, it may be more practical to consider Kaufman's model as having six levels (input, process, acquisition, application, organizational payoffs, and societal contributions) and to drop the mega/macro/micro terminology.
Mega-level training evaluation
Alongside the confusing terminology, the additional requirement to evaluate societal consequences and customer benefits makes Kaufman’s model less practical than Kirkpatrick’s.
We might be able to gather some anecdotal evidence about societal and customer impacts, but getting robust data at such a high level is often not feasible.
While it's helpful to consider the impact of learning on customers and society in some contexts, this evaluation can often be folded into the business goal that the learning is expected to achieve.
For example, if the training is expected to improve sales, more customers will benefit because they're using your fantastic product. It's not necessarily helpful to evaluate that customer benefit separately from achieving the business goal, though.
So even when the goal is something such as "improving customer satisfaction," it doesn't need to be seen as a separate level from business results.
What about informal training?
Kirkpatrick’s original model was designed for formal training—not the wealth of informal learning experiences that happen in organizations today.
Kaufman’s model is almost as restricted, aiming to be useful for “any organizational intervention” and ignoring the 90% of learning that isn't initiated by organizations.
Further, it’s hard to see how Kaufman’s model is any better at tracking non-training interventions than Kirkpatrick’s model.
In practice, Kirkpatrick is often applied in contexts outside of formal training. While the model was designed with formal training in mind, most L&D practitioners are pragmatic enough to reinterpret the model for their own particular contexts.
We recommend this approach with any evaluation model; there will always be bits that work and bits that don’t in any given context.
Kaufman vs. Kirkpatrick (our opinion)
Kaufman's model provides useful lessons that you can incorporate into your organization's learning evaluation strategy, but we don't recommend using Kaufman's approach verbatim.
In particular, the most valuable points are the division of resources from delivery and the move away from learner satisfaction. Conversely, the least helpful facets are the addition of societal consequences and the overly complex terminology.
What’s hot?
We recommend evaluating your learning materials separately from delivery. This approach helps you identify problems with materials earlier in the process and more easily discern where you need to make improvements.
You should also define separate quality standards for your materials and your delivery method.
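As one way to act on this, here's a minimal sketch of what separate standards might look like in practice. The specific criteria and numbers are hypothetical examples, not a prescribed set:

```python
# Minimal sketch: distinct quality standards for materials (input) and
# delivery (process), checked independently so a failure in one area
# is visible on its own. All criteria and thresholds are hypothetical.

standards = {
    "materials": {"learner_rating": 4.0, "content_accuracy_review": 1.0},
    "delivery":  {"learner_rating": 4.0, "on_time_start_rate": 0.95},
}

observed = {
    "materials": {"learner_rating": 4.3, "content_accuracy_review": 1.0},
    "delivery":  {"learner_rating": 3.8, "on_time_start_rate": 0.97},
}

for area, criteria in standards.items():
    failures = [name for name, minimum in criteria.items()
                if observed[area][name] < minimum]
    status = "meets standard" if not failures else f"below standard on {failures}"
    print(f"{area}: {status}")
```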
What’s not?
We do not suggest capturing data about your learning program's impact on society as a whole; it is too far removed to be valuable.
As for benefits to the customer, however, this data is critical and should be incorporated into the business metrics you're evaluating at Level 4.
Up Next: Brinkerhoff's Success Case Method
In the next installment of our Learning Evaluation blog series, we will cover Brinkerhoff's learning evaluation method. Don't miss out! Be sure to subscribe to Watershed Insights to get the latest posts delivered straight to your inbox.
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.