LEO Learning’s Chain of Evidence is a learning model that lets you make meaningful connections between business impact, behavior, learning, and learners. It roughly maps to Kirkpatrick’s learning evaluation levels one to four; however, LEO’s model has been developed through practical experience to be more pragmatic in a blended learning context. As a result, the model is based on identifying a clear chain of evidence from learning engagement to business shift.
How is Chain of Evidence different from other learning measurement strategies?
The chain of evidence model is similar to Kirkpatrick's, but adjusted to reflect the practical realities of corporate learning. In fact, we've drawn on many of the learning evaluation models you may be familiar with: Kirkpatrick, Kaufman, Phillips, Anderson, and the like.
Our major adjustment was to support the telling of an evidence-based story by linking these elements into a single chain: Kirkpatrick's levels, Kaufman's distinctions, Phillips' ROI evidence, and Anderson's sense of value and range of methods.
How does the Chain of Evidence model work?
The crux of LEO Learning's model is the chain itself: the stories that can be told by looking at each set of evidence and whether and how it links to the other sets of evidence.
On the far left of the model are the drivers, which we often show in grey because they are usually your catalyst for putting together some training. In most cases, something at least resembling these drivers has already been established by the time you create your learning initiative.
1. Learning Delivery
Has the learning landed? In other words: Has it reached the intended audience, and did the learner connect to it? Simply knowing a learner has opened a nugget of information or attended a workshop isn’t enough to confirm learning has landed. Rather, it’s important to consider:
- Did learners engage?
- Did they react?
- How did they feel?
- Was the channel successful in reaching learners—regardless of any lasting learning change it may or may not have delivered?
2. Learner Change and Impact
From there, consider whether behaviors have changed. Learning only really happens if behavior changes. This aspect is addressed in two stages in the chain of evidence model:
- Has the learning been effective in driving a change in the learner (i.e., has there been a measurable shift in their learning)? And, more importantly:
- Has the learner actually applied the learning successfully through new or different behaviors?
In this link of the chain, we are looking at capabilities and attitudes—which I believe to be the most critical step in the chain of evidence method.
3. Business Impact
And then there is “business impact.” To what extent has the learning made the impact—at a business level—that you are looking for?
This can be measured at a team or group level; most usefully, it will reflect an organizational shift or improvement, or a shift in baseline measures associated with an existing KPI.
A critical point is that you can (and should, and probably will be forced to) move backwards and forwards along the chain of evidence. Let's say you find that response times have dropped or that email traffic has gone down (whatever your measure). You still need to know why, and you find that out by following the chain of evidence backwards.
When the model isn't implemented properly, false correlations can creep in.
If you don't look at the evidence and follow the chain to seek further evidence that tells the full story, you risk falling back on assumptions and easy-to-make but invalid correlations.
No model is a substitute for curiosity; the LEO chain of evidence model puts the power of curiosity and evidence at its core. It's easy to spot a change whose timing makes it look like the result of a learning initiative.
Imagine this scenario:
After a quick e-learning module on email etiquette, intended to free people up for more time working and less time on email, you notice that the amount of time people spend sifting through email has dropped. Hooray! This was your goal.
However, if you don’t follow the chain you may miss the real evidence: that everyone simply switched to more meetings and people still don’t have enough time in the day to do their work.
Are there misconceptions about the chain of evidence training evaluation model?
The main misconception about LEO's chain of evidence model is that it's too difficult to get started: that you have to go big and invest a lot of time and money up front. That's simply not true.
You can start using the chain of evidence model with one small project (like the email example), or even by establishing which sources of evidence you do or don't already have for completing the chain.
We always say it's much easier to start small and get comfortable, build the business case, and show powerful results fast (by using the chain to tell human stories).
Recommended Resource
Want to read more about the chain of evidence, including real-world stories and examples? Visit LEO's website to get your copy of Creating the Chain of Evidence: Real Stories of Practical Learning Measurement Strategies.
Get started with these two approaches.
I can’t stress enough that it’s easier than most people think to get started with this model. There are two approaches you can take:
- The evaluative approach: start small with one learning initiative and apply the model
- The big-data approach: start by gathering all the data and evidence you currently collect and search for patterns and connections throughout the chain
Either approach simply requires that you sit with your data for a while and get comfortable with it; the sketch below shows one hypothetical way to start exploring the big-data approach. It's not a huge investment, and it reaps amazing rewards.
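To make the big-data approach a little more concrete, here is a minimal, hypothetical sketch. It is not part of LEO's or Watershed's tooling, and the file names, column names, and metrics are assumptions for illustration only. The idea is simply to join evidence you may already collect at each link of the chain (learning delivery, learner change, business impact) on a shared unit of analysis and look for patterns across it.

```python
# Hypothetical sketch of the "big-data approach": join the evidence you already
# collect at each link of the chain and look for simple patterns across it.
# File names, column names, and metrics are illustrative assumptions only.

import pandas as pd

# Link 1 evidence: learning delivery (e.g., completion data exported from an LMS or LRS)
delivery = pd.read_csv("learning_completions.csv")    # columns: team, completion_rate
# Link 2 evidence: learner change (e.g., an observed behavior measure per team)
behavior = pd.read_csv("behavior_observations.csv")   # columns: team, email_hours_change
# Link 3 evidence: business impact (e.g., movement in an existing KPI baseline)
kpi = pd.read_csv("kpi_measures.csv")                 # columns: team, response_time_change

# Join the three sets of evidence on a shared unit of analysis (here, the team)
chain = delivery.merge(behavior, on="team").merge(kpi, on="team")

# Do teams with higher completion rates also show a behavior shift, and does that
# shift line up with the KPI movement? Correlations only suggest where to follow
# the chain for further evidence; they don't prove cause and effect.
print(chain.corr(numeric_only=True))
```

Even a simple view like this only tells you where to look next; you still have to follow the chain backwards and forwards to confirm (or rule out) the story it suggests.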
Up Next: BALDDIE Learning Design Model
Join us for our next blog post as Andrew Downes explains Watershed's instructional design method, BALDDIE, which incorporates elements from all the models and methods we've discussed in this blog series. In the meantime, sign up for Watershed Insights so you don't miss out!
About the author
As the strategic consulting lead at LEO Learning, Rose Benedicks has more than 15 years’ experience crafting business solutions, learning systems, and L&D strategies, specializing in problem-based learning and performance-driven solutions.