Business Challenge
Caring for more than 500,000 patients each year, MedStar Health is the largest healthcare provider in the Washington, D.C./Maryland region. As an organization, they’re committed to providing the best care during emergencies in which patients are in cardiopulmonary arrest (referred to as “Code Blues”).
During a Code Blue, the stakes are literally life and death, which is why it’s vital that MedStar resuscitation team members are well trained. Speed matters in the seconds and minutes that follow a Code Blue, particularly the time it takes to perform chest compressions, deliver defibrillation, and administer medications to the patient.
As a result, MedStar’s Code Blue training and learning resources have focused on improving clinician performance to reduce these times. However, MedStar lacked detailed information on how effective each training program and learning resource actually was.
Data Challenge
MedStar is dedicated to delivering a robust Code Blue training program and relies on several training systems to provide a comprehensive learning experience:
- MedStar’s Learning Management System
- In-person simulations via Xapify (formerly xAPIapps)
- A mobile defibrillator training app via Zoll
Not all of these systems, however, were originally designed with xAPI in mind, which means they’re unable to communicate and share data with one another. Only Xapify—which observes performance during Code Blue simulations—was designed to support xAPI natively.
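For context, an xAPI-native system like Xapify records each learning event as a “statement”: a JSON document describing an actor (who), a verb (did what), and an object (to what), plus an optional result. Below is a minimal sketch of how a proctor’s simulation observation might be expressed; the learner, activity ID, and score are hypothetical placeholders, not MedStar’s or Xapify’s actual data.

```python
# A minimal, illustrative xAPI statement for a Code Blue simulation
# observation. The learner, activity ID, and score below are
# hypothetical placeholders, not MedStar's or Xapify's actual data.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Clinician",                  # hypothetical learner
        "mbox": "mailto:clinician@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/passed",  # standard ADL verb
        "display": {"en-US": "passed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/activities/code-blue-simulation",  # placeholder
        "definition": {
            "name": {"en-US": "Code Blue Simulation"},
            "type": "http://adlnet.gov/expapi/activities/simulation",
        },
    },
    "result": {
        "success": True,
        "score": {"scaled": 0.92},  # proctor's checklist score, normalized 0-1
    },
}
```

Because every xAPI-enabled system emits records in this shared shape, statements from different tools can land in one store and be compared side by side, which is exactly what systems without native xAPI support were unable to do.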
Meanwhile, Code Blue KPIs are recorded by a different team and stored in a separate location from the Code Blue training data.
Because access to the program’s data was limited, MedStar was unable to understand how each training system contributed to their overall goals or to determine which learning activities were most impactful.
Solution
To improve their Code Blue KPIs, MedStar needed a better understanding of which learning activities were most impactful. From there, they’d be able to focus on the most effective activities to drive further improvement while revising or removing the less effective ones.
To achieve this, MedStar began collecting the following data:
- Course completion data from the MedStar LMS
- Usage data from MedStar’s mobile defibrillator training app (Zoll)
- In-person observation data from Code Blue simulations (Xapify)
- Times and results from Code Blue incidents
Using Watershed to aggregate and visualize this data from multiple data sources, MedStar is now able to answer a range of questions about the usage and effectiveness of their training systems (see Figure 1).
They also have a better understanding of where they need to target their attention to improve performance. In particular, they can test the “chain of cause and effect” from training to simulations to final results. For example:
- How well are learning materials being used (e.g., LMS courses and Zoll)?
- Is there a relationship between how often team members access learning materials and how well they perform during related assessments?
- Do high scores in assessments lead to good performances during Code Blue simulations?
- Do good performances during Code Blue simulations lead to good performances during real Code Blues?
Achieving MedStar’s Goals
To help MedStar make this happen:
- We worked with Xapify to deploy their observation checklist application that reports proctor observations from Code Blue simulations.
- We outfitted MedStar’s Zoll app to send xAPI data to Watershed (a sketch of this kind of data flow follows this list).
- We enabled course completion data to be pulled from their LMS and converted into xAPI statements in Watershed.
- We're in the process of helping MedStar pull data from real Code Blue incidents into Watershed.
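To give a sense of the plumbing involved, the sketch below shows how an app like the Zoll trainer might send an xAPI statement to a Watershed LRS following the standard xAPI HTTP conventions (a JSON body, HTTP Basic auth, and the X-Experience-API-Version header). The endpoint URL, credentials, and activity details are placeholders, not MedStar’s actual configuration.

```python
import requests

# Hypothetical LRS endpoint and credentials; substitute the values
# Watershed provides for your organization.
LRS_ENDPOINT = "https://watershedlrs.com/api/organizations/EXAMPLE/lrs/"
LRS_KEY = "example-key"
LRS_SECRET = "example-secret"

def send_statement(statement: dict) -> str:
    """POST a single xAPI statement to the LRS and return its ID."""
    response = requests.post(
        LRS_ENDPOINT + "statements",
        json=statement,
        auth=(LRS_KEY, LRS_SECRET),                  # xAPI uses HTTP Basic auth
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()
    # Per the xAPI spec, the LRS replies with a JSON array of statement IDs.
    return response.json()[0]

# Example: report a completed defibrillator training session. The verb is a
# standard ADL verb; the learner and activity ID are illustrative placeholders.
statement = {
    "actor": {"name": "Example Clinician", "mbox": "mailto:clinician@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.org/activities/defib-training-session",
        "definition": {"name": {"en-US": "Defibrillator Training Session"}},
    },
}

print(send_statement(statement))
```

Once each system reports into Watershed this way, data from the LMS, the Zoll app, Xapify, and real Code Blue incidents can be aggregated and compared in a single place.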