Your LMS Completion Rate Is a Vanity Metric
94% completion. The training worked.
Did it? Or did 94% of your workforce learn to click Next fast enough to avoid a follow-up email from HR?
Completion rate is the most commonly reported learning metric in corporate L&D. It is also the least meaningful. It measures a behaviour — opening and finishing a module — not the behaviour you actually care about, which is doing something differently at work.
Here's how completion rates get inflated without anyone lying:
- Slides are skippable. Learners click through at maximum speed.
- Assessments use recognition-based multiple choice. Learners guess. They pass.
- Time-on-task is either not tracked, or tracked but never acted on.
- The "completion" triggers off of the final slide being reached, not assessed performance.
I've audited compliance programmes with 90%+ completion rates where the average time on the highest-stakes module was under 3 minutes — for a module designed to take 12. The data was there the whole time. Nobody looked at it.
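If your LMS can export raw completion records, this audit is a few lines of work. Here's a minimal sketch in Python, assuming a CSV export with `learner_id`, `module_id` and `minutes_spent` columns — the column names and module IDs are hypothetical; adapt them to whatever your platform actually produces:

```python
# Minimal time-on-task plausibility check, assuming a hypothetical LMS
# export with columns "learner_id", "module_id", "minutes_spent".
import pandas as pd

# Designed seat time per module, in minutes (illustrative values).
DESIGNED_MINUTES = {"anti_bribery_101": 12, "data_privacy_basics": 9}

def flag_implausible(records: pd.DataFrame, floor: float = 0.25) -> pd.DataFrame:
    """Return completions faster than `floor` x the designed duration."""
    records = records.copy()
    # Modules missing from DESIGNED_MINUTES get a NaN ratio and are skipped.
    records["designed"] = records["module_id"].map(DESIGNED_MINUTES)
    records["ratio"] = records["minutes_spent"] / records["designed"]
    return records[records["ratio"] < floor]

completions = pd.read_csv("lms_export.csv")
suspect = flag_implausible(completions)
print(f"{len(suspect)} of {len(completions)} completions look implausibly fast")
print(suspect.groupby("module_id")["ratio"].median())
```

Anything under a quarter of the designed seat time is a click-through, not a completion. Tune the floor to your content.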
What to measure instead
The right metrics depend on what you're trying to change. But as a starting point:
- Time-on-task per module — is it plausible? Is anyone spending 2 minutes on a 12-minute module?
- Assessment question analysis — which questions have suspiciously high pass rates? Which have distractor patterns that suggest guessing? (See the first sketch after this list.)
- Pre/post behaviour data — if the training was about compliance, did compliance incidents change? If it was about sales, did conversion rates move? (Second sketch below.)
- Spaced retrieval scores — if you're using spaced practice, what's the retention curve? Where does knowledge drop off? (Third sketch below.)
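For the item analysis, you don't need psychometric software to get a first read. A rough sketch, assuming an answer-level export with hypothetical `question_id`, `chosen_option` and `correct` columns:

```python
# Rough item analysis, assuming one row per answer with hypothetical
# columns "question_id", "chosen_option", and "correct" (True/False,
# which pandas parses to booleans from CSV).
import pandas as pd

answers = pd.read_csv("assessment_answers.csv")

# Questions nearly everyone passes carry no signal: candidates for rewriting.
pass_rates = answers.groupby("question_id")["correct"].mean()
print("Suspiciously easy questions (>95% pass):")
print(pass_rates[pass_rates > 0.95].sort_values(ascending=False))

# A near-uniform spread across the wrong options suggests guessing
# rather than a specific misconception.
wrong = answers[~answers["correct"]]
distractors = (
    wrong.groupby(["question_id", "chosen_option"])
    .size()
    .groupby("question_id")
    .apply(lambda s: s.max() / s.sum())  # share of the most popular distractor
)
print("Questions whose wrong answers look like uniform guessing (<0.4):")
print(distractors[distractors < 0.4])
```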
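The pre/post comparison is the crudest of the three, and the one that needs the most hedging: a before/after difference shows direction, not causation. A minimal sketch, assuming a compliance log with hypothetical `date` and `incident` columns and a known rollout date:

```python
# Naive pre/post comparison, assuming hypothetical columns "date" and
# "incident" (1/0) in a compliance log. Direction only: seasonality and
# other confounders need a proper design (control group, interrupted
# time series) to rule out.
import pandas as pd

ROLLOUT = pd.Timestamp("2024-06-01")  # illustrative rollout date

log = pd.read_csv("compliance_log.csv", parse_dates=["date"])
pre = log[log["date"] < ROLLOUT]["incident"].mean()
post = log[log["date"] >= ROLLOUT]["incident"].mean()
print(f"Incident rate: {pre:.3%} before rollout, {post:.3%} after")
```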
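And if you're running spaced retrieval, the retention curve is one groupby away. A sketch assuming each attempt is logged with hypothetical `days_since_training` and `score` columns:

```python
# Retention curve from spaced-retrieval logs, assuming hypothetical
# columns "days_since_training" and "score" (0-1) per retrieval attempt.
import pandas as pd

attempts = pd.read_csv("retrieval_attempts.csv")

# Bucket attempts by time elapsed and average the scores: a crude but
# readable forgetting curve.
bins = [0, 1, 7, 30, 90, 365]
attempts["window"] = pd.cut(attempts["days_since_training"], bins=bins)
curve = attempts.groupby("window", observed=True)["score"].agg(["mean", "count"])
print(curve)
```

Where the mean drops is where your next review should land.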
Completion rate tells you your LMS is working. It tells you nothing about your training.
The harder question
If completion rate is your primary success metric, ask yourself: who chose that metric, and why? Usually it's because it's easy to measure and easy to report. Easy metrics get used. Useful metrics require more effort.
Start tracking one better metric this quarter. Not ten. One. Decide in advance what a "good" number looks like, and what you'll do if you see a bad one. That discipline — measure, interpret, act — is what separates a learning function from a content factory.