Advanced evaluation is less common than data evaluation, which means there’s no well-worn path leading the way. Organizations venturing into this level of complexity often do so in ways that differ from one another. We see this in our Learning Analytics Research Study data, where no single analysis type has significantly more report views than the rest.
That’s why we’ve divided the advanced evaluation complexity into five analysis types, which we’ll explore in this post along with examples of what organizations are doing at this level.
What Is Advanced Evaluation?
As a quick reminder, advanced evaluation applies statistical techniques, such as correlation and regression analysis, to understand what happened and why it happened. This type of evaluation also builds theories about causation, which allows you to focus on what’s working best while scrapping ineffective learning.
In other words, advanced evaluation asks: Why is this happening?
Advanced Evaluation & Analysis Types
Now, it's time to learn more about the five analysis types we identified that fall under the advanced evaluation complexity:
- Chain of Evidence
- Drop-Off
- Segment
- Workflow
- Qualitative Responses
1) Chain of Evidence
A chain of evidence shows the training's impact on business performance by tracking evidence across a chain of events, from the learning experience and knowledge gained through to improved performance and business impact.
This process is loosely based on Kirkpatrick’s four levels of learning evaluation.
One type of advanced evaluation is to pick two links in this chain and evaluate the extent to which they are related. This analysis seeks to validate the logic of the chain envisaged by the learning design and the effectiveness of the learning strategy in practice.
For instance, this example correlation report from Watershed shows the relationship between an assessment score (a measure of learning) and a customer satisfaction rating (a business KPI).
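If you have both measures as raw data, you can sanity-check a relationship like this yourself. Here's a minimal sketch in Python, assuming a hypothetical per-learner CSV with assessment_score and csat_rating columns (the file and column names are illustrative, not a Watershed export format):

```python
# Minimal sketch: correlate a learning measure with a business KPI.
# Assumes a hypothetical per-learner CSV with "assessment_score" and
# "csat_rating" columns; names are illustrative, not a real export format.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("learners.csv")

# Pearson's r measures the strength of the linear relationship;
# the p-value estimates how likely a correlation this strong is by chance.
r, p_value = pearsonr(df["assessment_score"], df["csat_rating"])
print(f"r = {r:.2f}, p = {p_value:.3f}")
```

Keep in mind that a strong correlation supports the logic of the chain, but it doesn't on its own prove the training caused the KPI movement.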
2) Drop-Off Analysis
Drop-off analysis looks at where people are exiting a particular process. For example, this might mean looking at the following:
- how far through people watch a video
- the slides where people drop out of an e-learning course
- how far people get through a MOOC before they disengage
The following example shows a drop-off analysis for an xAPI-tracked game we hosted during a conference. Perhaps confusingly, there’s negative drop-off from launching the game to starting an attempt, which was because:
- not everyone used the launcher, and
- a single player could start multiple games from the same launch.
We then see a massive drop-off between people starting and finishing the game.
This insight helped reinforce that, while the game might be great in a work context, people just didn’t have the time or interest to finish several rounds of gameplay at a large-scale conference. (And they didn’t need to play every round; a few rounds were enough to show off the game.)
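To make the mechanics concrete, here's a rough sketch of the same funnel calculation in Python. The stage names and counts are made up for illustration; in practice, they'd be distinct-actor counts aggregated from the xAPI statements in your LRS:

```python
# Rough funnel sketch: percentage change between consecutive stages.
# Stage names and counts are hypothetical stand-ins for counts you'd
# aggregate from xAPI statements in your LRS.
funnel = [
    ("launched game", 120),
    ("started attempt", 150),  # negative drop-off: multiple starts per launch
    ("finished game", 30),
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    change = (next_count - count) / count * 100
    print(f"{stage} -> {next_stage}: {change:+.0f}%")
```

A positive number between two stages, as in the launch-to-start step here, is the same negative drop-off effect we saw at the conference.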
3) Segment Analysis
Segment analysis involves identifying a specific group of people and then selecting that group for further analysis. For instance, you might want to:
- know what learning activities are most popular amongst your top salespeople, or
- compare average scores for people who generally use mobile versus those who favor desktop.
The following example shows a scatter plot, which is one way to identify a segment. The highlighted yellow area identifies managers with high point-of-sale gross profit (POS GP) percentages and low chargeback (i.e., rebates) percentages.
This group can also be filtered in other reports for further analysis. For instance, you can compare how people with high POS GP percentages and low chargeback percentages performed on an assessment against those with low POS GP percentages and high chargeback percentages, which tells you how well the assessment predicts KPI values.
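Here's a brief sketch of that comparison in Python with pandas. The column names and cutoff values are hypothetical; substitute whatever defines the segment in your own KPI data:

```python
# Sketch of segment analysis: define segments by KPI thresholds,
# then compare their assessment scores. Column names and cutoffs
# are hypothetical.
import pandas as pd

df = pd.read_csv("managers.csv")  # one hypothetical row per manager

# Segments roughly matching opposite corners of the scatter plot.
high = df[(df["pos_gp_pct"] > 40) & (df["chargeback_pct"] < 5)]
low = df[(df["pos_gp_pct"] < 40) & (df["chargeback_pct"] > 5)]

# If the assessment predicts the KPI, the segments should score differently.
print("High-KPI segment mean score:", high["assessment_score"].mean())
print("Low-KPI segment mean score: ", low["assessment_score"].mean())
```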
4) Workflow Analysis
Workflow analysis looks at how learners find or access learning resources. For instance, did they come from a search, a recommendation, or a homepage link? This kind of analysis can help you determine the best ways to promote new or featured content within your platform's general information architecture.
The following example looks at the number of times people launched different items from a particular panel on a platform. The 4-Hour Workweek is clearly the most-clicked recommended item.
So, what's special about it? Perhaps it sits in the top-left position, has a catchy image, or everybody just wants to know about four-hour workweeks. With some further digging, you can use this information to improve the clickability of future recommendations.
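If your platform logs how each launch was initiated, the underlying tally is simple. Here's an illustrative sketch, assuming hypothetical launch records tagged with a source field (not any specific platform's API):

```python
# Sketch of workflow analysis: count launches by how learners arrived.
# The records and "source" field are hypothetical illustrations.
from collections import Counter

launches = [
    {"item": "The 4-Hour Workweek", "source": "recommendation"},
    {"item": "The 4-Hour Workweek", "source": "search"},
    {"item": "Item B", "source": "homepage"},
    # ...one record per launch event
]

by_source = Counter(launch["source"] for launch in launches)
for source, count in by_source.most_common():
    print(f"{source}: {count}")
```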
5) Qualitative Responses
Qualitative survey responses can help you understand the reasons behind your quantitative data. Data makes much more sense when you know the context, and qualitative information provides that context. Don’t overlook it!
For example, if everybody fails question 37, and the feedback says there's a bug with question 37 (i.e., all the options are the same), you know why everyone failed that question. You can also use qualitative responses to further explore significant successes and failures, as described in Brinkerhoff's Success Case Method.
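As a simple illustration, here's a sketch that pairs question-level failure rates with the free-text feedback that explains them. The data structures are hypothetical; in practice, both would come from your assessment and survey exports:

```python
# Sketch: flag outlier questions quantitatively, then surface the
# qualitative comments that explain them. All data is hypothetical.
failure_rates = {"q36": 0.12, "q37": 0.98}  # fraction of learners failing

comments = [
    ("q37", "All the answer options look identical."),
    ("q37", "Is question 37 broken? Every choice is the same."),
]

for question, rate in failure_rates.items():
    if rate > 0.9:  # arbitrary threshold for "everybody fails"
        print(f"{question}: {rate:.0%} failed")
        for q, text in comments:
            if q == question:
                print(f"  feedback: {text}")
```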
Actionable Insights
Currently, there’s no clear path as you move into the advanced evaluation phase of learning analytics; you have to make your own. Either pick the most relevant analysis type we’ve listed in this post and implement it in your organization, or explore other ways to understand why things are happening.
Up Next: The Predictive and Prescriptive Complexity
Next week, we reach the dizzy heights of the “Predictive and Prescriptive” complexity. So hold on to your hats; it gets windy up there!
About the author
As a co-author of xAPI, Andrew has been instrumental in revolutionizing the way we approach data-driven learning design. With his extensive background in instructional design and development, he’s an expert in crafting engaging learning experiences and a master at building robust learning platforms in both corporate and academic environments. Andrew’s journey began with a simple belief: learning should be meaningful, measurable, and, most importantly, enjoyable. This belief has led him to work with some of the industry’s most innovative organizations and thought leaders, helping them unlock the true potential of their learning strategies. Andrew has also shared his insights at conferences and workshops across the globe, empowering others to harness the power of data in their own learning initiatives.