01 — Your LMS Completion Rate Is a Vanity Metric
02 — The Training That Should Never Have Been Built
03 — Why Most Scenarios Don't Actually Work
04 — My Framework for Solving Performance Problems
01
Learning Analytics 6 min read

Your LMS Completion Rate Is a Vanity Metric

94% completion. The training worked.
Did it? Or did 94% of your workforce learn to click Next fast enough to avoid a follow-up email from HR?

Completion rate is the most commonly reported learning metric in corporate L&D. It is also the least meaningful. It measures a behaviour — opening and finishing a module — not the behaviour you actually care about, which is doing something differently at work.

Completion rates get inflated without anyone lying: in most systems, "complete" just means the learner reached the final screen, however fast they clicked to get there.

I've audited compliance programmes with 90%+ completion rates where the average time on the highest-stakes module was under 3 minutes, for a module designed to take 12. The data was there the whole time. Nobody looked at it.
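
To make that kind of audit concrete, here's a minimal Python sketch. It assumes a hypothetical LMS export with learner_id, module_id, and seconds_spent columns; the column names, module ID, and threshold are illustrative, not any particular LMS's schema. It flags learners who finished a module in under a quarter of its designed time.

```python
import csv

# Hypothetical designed durations per module, in seconds (12 min = 720 s).
DESIGNED_SECONDS = {"compliance_core": 720}

def flag_rushed(path: str, threshold: float = 0.25) -> list[str]:
    """Return learner IDs who finished in under `threshold` of the designed time."""
    rushed = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            designed = DESIGNED_SECONDS.get(row["module_id"])
            if designed and float(row["seconds_spent"]) < designed * threshold:
                rushed.append(row["learner_id"])
    return rushed

# A 3-minute pass through the 12-minute module (180 s / 720 s = 0.25)
# sits right at the threshold; anything faster gets flagged.
```

This is the whole point about the data being there: one pass over an export your LMS already produces.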

What to measure instead

The right metrics depend on what you're trying to change. But as a starting point: compare actual time-on-module against the designed duration, and pick one observable workplace behaviour the training is supposed to change, then track that. The first tells you whether anyone engaged; the second tells you whether anything happened as a result.

The harder question

If completion rate is your primary success metric, ask yourself: who chose that metric, and why? Usually it's because it's easy to measure and easy to report. Easy metrics get used. Useful metrics require more effort.

Start tracking one better metric this quarter. Not ten. One. Decide in advance what a "good" number looks like, and what you'll do if you see a bad one. That discipline — measure, interpret, act — is what separates a learning function from a content factory.

"Completion rate tells you your LMS is working. It tells you nothing about your training."
02
Performance Consulting 7 min read

The Training That Should Never Have Been Built

The most valuable thing I can tell a client is: you don't need training.
Half the performance problems I see are process problems, tool problems, or incentive problems dressed up as knowledge gaps.

When a manager says "my team needs training on X," they're usually right that there's a problem. They're often wrong about the cause. And if the cause is wrong, the solution is wrong — and you've spent time, money, and credibility on a course that changes nothing.

The six real causes of underperformance

Thomas Gilbert's Behaviour Engineering Model has been around since 1978. It's still underused. Before designing any training, ask:

Information: do people know what's expected of them, and do they get feedback on how they're doing?
Resources: do they have the tools, time, and processes the job requires?
Incentives: does the environment actually reward the right behaviour?
Knowledge: do they know how to do it?
Capacity: are they able to do it?
Motivation: do they want to do it?

In my experience, genuine knowledge and skill gaps account for maybe 20-30% of performance problems. The rest are environmental. Training can't fix a bad process. It can't fix a tool that doesn't work. It can't fix a manager who rewards the wrong behaviour.

What to do when training isn't the answer

Say so. Clearly. With evidence.

This is uncomfortable because it can feel like you're talking yourself out of a contract. In my experience, the opposite is true. Clients remember the consultant who told them the truth. They trust them for the next project. They refer them to colleagues.

When you identify that training isn't the answer, give them an alternative recommendation. A process redesign. A job aid. A manager briefing guide. A feedback loop. Something that will actually fix the problem.

The goal isn't to build courses. The goal is to improve performance. Sometimes those are the same thing. Often they're not.

"Half the performance problems I see are process problems, tool problems, or incentive problems dressed up as knowledge gaps."
03
Scenario Design 8 min read

Why Most Scenarios Don't Actually Work

A scenario where the wrong answer is obviously wrong teaches nothing. Real decisions are hard because all the options are defensible. If your learner can guess the right answer without reading it, your scenario is decoration.

Scenario-based learning is everywhere right now. It's in every eLearning template pack. It's the answer to every "how do we make this more engaging?" question. And most of it is doing almost nothing pedagogically useful.

The three signs your scenario is broken

1. The wrong answers are obviously wrong. If one option is "document everything carefully" and another is "ignore the problem and go to lunch," you haven't built a decision — you've built a filter for people who can read. Real decisions involve choosing between options that all have merit. The tension is the learning.

2. Failure leads to a "Try Again" button. In real life, mistakes have consequences that unfold over time. A wrong decision in a compliance scenario doesn't produce a red screen and a "That's incorrect" message. It produces a delayed audit finding, a manager conversation six weeks later, a damaged client relationship. Your scenario should do the same. Delayed consequences are more realistic and more memorable; a sketch of one way to build them follows this list.

3. The scenario is generic. "Sarah, a new employee, is unsure about the company policy" is not a scenario. It's a placeholder. Effective scenarios use specific, named characters in specific, recognisable situations that your audience will see themselves in. The more specific and real, the more transfer.
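
Here is that sketch: a minimal Python data structure for a branching scenario where a choice queues its fallout to surface scenes later, rather than triggering immediate right/wrong feedback. The class names, scene counts, and consequence text are illustrative assumptions, not a reference to any authoring tool.

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    text: str          # what the learner eventually sees
    scenes_until: int  # surfaces this many scenes later, not immediately

@dataclass
class Scenario:
    pending: list[Consequence] = field(default_factory=list)

    def choose(self, consequence: Consequence) -> None:
        """Record a decision; queue its consequence instead of showing it now."""
        self.pending.append(consequence)

    def advance(self) -> list[str]:
        """Move one scene forward and return any consequences now due."""
        for c in self.pending:
            c.scenes_until -= 1
        due = [c.text for c in self.pending if c.scenes_until <= 0]
        self.pending = [c for c in self.pending if c.scenes_until > 0]
        return due

# Skipping the client call-back shows no red screen now; the fallout
# surfaces two scenes later, the way it would at work.
s = Scenario()
s.choose(Consequence("Your manager forwards a complaint from the client.", 2))
print(s.advance())  # []
print(s.advance())  # ['Your manager forwards a complaint from the client.']
```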

What makes a scenario actually work

A well-designed scenario inverts those three failures: every option is defensible on its face, consequences unfold the way they would in real life, and the situation is specific enough that your audience recognises themselves in it. Most templates skip all three.

The test I use for every scenario I build: could a competent person choose any of these options and defend their reasoning? If yes, it's a real scenario. If one answer is clearly right to anyone paying attention, it's a dressed-up knowledge check.

"The tension is the learning. If there's no tension, there's no learning."
04
Framework 5 min read

My Framework for Solving Performance Problems

Every project I take on starts with the same five questions. Not a needs analysis template. Not a stakeholder form. Five questions that separate a training problem from a performance problem.

01

What are people doing vs. what should they be doing?

Not "what do they need to learn." What is the observable behaviour gap? Describe it in terms of actions, not knowledge. "Managers are not giving feedback within 48 hours of incidents" is a performance gap. "Managers need to understand the importance of timely feedback" is a training assumption.

02

Why aren't they doing it?

Run through Gilbert's six causes before assuming training. Information, resources, incentives, knowledge, capacity, motivation. Only proceed to training design if the answer is knowledge or skill.
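
As a sketch of that routing rule, with first-line responses I've labelled myself rather than Gilbert's wording: map each cause to a response, and only let a knowledge or skill diagnosis come back as training.

```python
# Hypothetical first-line responses for each of Gilbert's six causes.
# "knowledge" stands in for knowledge and skill, per the rule above.
INTERVENTIONS = {
    "information": "clarify expectations and build a feedback loop",
    "resources": "fix the tool or the process",
    "incentives": "realign what actually gets rewarded",
    "knowledge": "design training",
    "capacity": "revisit selection or job fit",
    "motivation": "address motives with management, not a module",
}

def recommend(cause: str) -> str:
    """Route a diagnosed cause to a response; training only for knowledge/skill."""
    action = INTERVENTIONS[cause]
    if cause != "knowledge":
        action += " (a course will not fix this)"
    return action

print(recommend("incentives"))
# realign what actually gets rewarded (a course will not fix this)
```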

03

What does "good" look like, and how will we measure it?

Define success before designing anything. What will be different in 30, 60, 90 days? How will we know? If we can't agree on a measure, we can't agree on a solution.
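
One way to force that agreement before design starts is to write the success definition down as data. A minimal sketch, with invented numbers for the measure, targets, and the pre-agreed response to a miss:

```python
# Hypothetical success definition, written down before any design work starts.
criteria = {
    "measure": "share of incidents getting manager feedback within 48h",
    "baseline": 0.40,
    "targets": {30: 0.55, 60: 0.70, 90: 0.80},  # days after rollout
    "if_missed": "pause and re-run the cause analysis before iterating",
}

def checkpoint(day: int, observed: float) -> str:
    """Compare an observed reading against the pre-agreed target for that day."""
    target = criteria["targets"][day]
    if observed >= target:
        return f"Day {day}: on track ({observed:.0%} vs {target:.0%} target)"
    return f"Day {day}: target missed -> {criteria['if_missed']}"

print(checkpoint(30, 0.48))
# Day 30: target missed -> pause and re-run the cause analysis before iterating
```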

04

What's the minimum effective dose?

What is the simplest, fastest intervention that would produce the behaviour change? Sometimes that's a job aid. Sometimes it's a 5-minute module. Sometimes it's a manager conversation guide. Build the minimum effective solution, measure it, then iterate.

05

What will reinforce the learning after the intervention ends?

Training without reinforcement decays. What happens after the course? Is there manager follow-up? Spaced retrieval? Performance support at point-of-need? The intervention is the start, not the end.
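
As one sketch of post-course reinforcement, here is spaced retrieval on expanding intervals in Python; the specific gaps are illustrative, not prescriptive.

```python
from datetime import date, timedelta

def retrieval_schedule(course_end: date, gaps_days=(2, 7, 21, 60)) -> list[date]:
    """Expanding-interval retrieval prompts after the course ends.

    Each gap is measured from the previous prompt, so practice gets
    progressively more spaced as the behaviour stabilises.
    """
    when = course_end
    schedule = []
    for gap in gaps_days:
        when += timedelta(days=gap)
        schedule.append(when)
    return schedule

for d in retrieval_schedule(date(2024, 6, 1)):
    print(d)  # 2024-06-03, 2024-06-10, 2024-07-01, 2024-08-30
```

Whether each prompt is a quiz question, a manager check-in, or a job aid nudge matters less than the fact that something is scheduled at all.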

Agree? Disagree?
Let's talk about it.

The best client relationships start with a conversation about how learning actually works. If something here resonated — or made you argue with your screen — I'd like to hear from you.

Start a Conversation →