Frameworks & Arguments

01
Learning Analytics · 6 min read

Your LMS Completion Rate Is a Vanity Metric

94% completion. The training worked. Did it? Or did 94% of your workforce learn to click Next fast enough to avoid a follow-up email from HR?

Completion rate is the most commonly reported learning metric in corporate L&D. It is also the least meaningful. It measures a behaviour — opening and finishing a module — not the behaviour you actually care about, which is doing something differently at work.

Here's how completion rates get inflated without anyone lying:

  • Slides are skippable. Learners click through at maximum speed.
  • Assessments use recognition-based multiple choice. Learners guess. They pass.
  • Time-on-task is not tracked — or is tracked but not acted on.
  • Completion is triggered by reaching the final slide, not by assessed performance.

I've audited compliance programmes with 90%+ completion rates where the average time on the highest-stakes module was under 3 minutes — for a module designed to take 12. The data was there the whole time. Nobody looked at it.

What to measure instead

  • Time-on-task per module — is it plausible? 2 minutes for a 12-minute module?
  • Assessment question analysis — which questions have suspiciously high pass rates?
  • Pre/post behaviour data — did incidents change? Did conversion rates move?
  • Spaced retrieval scores — where does knowledge drop off?

"Completion rate tells you your LMS is working. It tells you nothing about your training."

Start tracking one better metric this quarter. Decide in advance what a "good" number looks like, and what you'll do if you see a bad one. That discipline — measure, interpret, act — is what separates a learning function from a content factory.
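The first two checks above — implausible time-on-task and suspiciously easy questions — can be run against almost any LMS export. Here is a minimal sketch; the record shape and field names (`designed_minutes`, `actual_minutes`, `question_id`, `passed`) are hypothetical, and the thresholds are starting points to argue about, not standards:

```python
# Minimal sketch of a completion-plausibility audit over an LMS export.
# Field names and thresholds are hypothetical; adapt to your platform's data.

def flag_implausible_completions(records, min_ratio=0.25):
    """Flag completions whose time-on-task is under min_ratio of the
    module's designed duration (e.g. under 3 minutes on a 12-minute module)."""
    return [
        r for r in records
        if r["actual_minutes"] < min_ratio * r["designed_minutes"]
    ]

def suspicious_questions(attempts, max_pass_rate=0.95):
    """Return question IDs whose pass rate is suspiciously high,
    suggesting guessable recognition-style items."""
    totals, passes = {}, {}
    for a in attempts:
        q = a["question_id"]
        totals[q] = totals.get(q, 0) + 1
        passes[q] = passes.get(q, 0) + (1 if a["passed"] else 0)
    return sorted(q for q in totals if passes[q] / totals[q] > max_pass_rate)

completions = [
    {"learner": "A", "module": "AML Basics", "designed_minutes": 12, "actual_minutes": 2.5},
    {"learner": "B", "module": "AML Basics", "designed_minutes": 12, "actual_minutes": 11.0},
]
print(flag_implausible_completions(completions))  # flags learner A only
```

The point is not the code — it is that this data already sits in your LMS, and a first pass takes an afternoon, not a project.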

02
Performance Consulting · 7 min read

The Training That Should Never Have Been Built

The most valuable thing I can tell a client is: you don't need training. Half the performance problems I see are process problems, tool problems, or incentive problems dressed up as knowledge gaps.

When a manager says "my team needs training on X," they're usually right that there's a problem. They're often wrong about the cause.

The five real causes of underperformance

  • Information: Do people know what's expected? Do they get clear, timely feedback when they do it wrong?
  • Resources: Do they have the tools, time, and support to perform correctly?
  • Incentives: Is correct performance rewarded? Is incorrect performance penalised?
  • Knowledge: Do they actually lack the knowledge or skill to perform? (This is the only cause where training helps.)
  • Motivation: Do they want to perform correctly? If not, why not?

Genuine knowledge and skill gaps account for maybe 20–30% of performance problems. The rest are environmental. Training can't fix a bad process. It can't fix a tool that doesn't work.

What to do when training isn't the answer

Say so. Clearly. With evidence. Clients remember the consultant who told them the truth. They trust them for the next project.

"Half the performance problems I see are process problems, tool problems, or incentive problems dressed up as knowledge gaps."

03
Scenario Design · 8 min read

Why Most Scenarios Don't Actually Work

A scenario where the wrong answer is obviously wrong teaches nothing. Real decisions are hard because all the options are defensible. If your learner can guess the right answer without reading it, your scenario is decoration.

The three signs your scenario is broken

1. The wrong answers are obviously wrong. If one option is "document everything carefully" and another is "ignore the problem and go to lunch," you haven't built a decision.

2. Failure leads to a "Try Again" button. In real life, mistakes have consequences that unfold over time. Your scenario should do the same.

3. The scenario is generic. Effective scenarios use specific, named characters in recognisable situations. The more specific and real, the more transfer.

What makes a scenario actually work

  • Authentic tension — all choices are defensible. The learner has to actually think.
  • Consequence fidelity — outcomes match what would actually happen, not an idealised version.
  • Reflective debrief — not "here's the right answer" but "here's what different choices set in motion."
"The tension is the learning. If there's no tension, there's no learning."

04
Framework · 5 min read

My Framework for Solving Performance Problems

Every project I take on starts with the same five questions. Not a needs analysis template. Five questions that separate a training problem from a performance problem.

01

What are people doing vs. what should they be doing?

Describe the observable behaviour gap. Not "they need to understand X" — but "they are doing A when they should be doing B."

02

Why aren't they doing it?

Run through Gilbert's causes of underperformance before assuming training. Only proceed if the answer is knowledge or skill.

03

What does "good" look like, and how will we measure it?

Define success before designing anything. If you can't agree on a measure, you can't agree on a solution.

04

What's the minimum effective dose?

What is the simplest intervention that would produce the behaviour change? Build the minimum, measure it, iterate.

05

What will reinforce the learning after the intervention ends?

Training without reinforcement decays. What happens after the course? Manager follow-up? Spaced retrieval? Job aids?

More From The Lab

Shorter reads. Bigger questions. The kind of things that should come up in every L&D conversation but rarely do.

🧠
Cognitive Science

The Forgetting Curve Is Real — and Your LMS Is Making It Worse

Ebbinghaus showed we forget 70% of new information within 24 hours. Your one-and-done course isn't fighting that. It's feeding it.

Read →
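The figure in that card corresponds to a simple exponential decay model of retention, R(t) = e^(−t/S). As an illustration only, here is the curve with the stability S back-solved from the "70% forgotten within 24 hours" claim — S is derived from the blurb, not a measured constant:

```python
import math

# Exponential forgetting model: R(t) = exp(-t / S).
# S is back-solved here from "70% forgotten within 24 hours" (R(24) = 0.30),
# purely to illustrate the shape of the curve.
S = -24 / math.log(0.30)  # stability, roughly 19.9 hours

def retention(hours):
    """Fraction of newly learned material still retained after `hours`."""
    return math.exp(-hours / S)

for h in (1, 24, 72):
    print(f"after {h:>2} h: {retention(h):.0%} retained")
```

Under this model, three days out almost nothing survives — which is exactly why the article argues for spaced retrieval rather than one-and-done courses: each successful recall restarts the curve from a higher, slower-decaying baseline.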
🎯
Strategy

Stop Writing Learning Objectives. Start Writing Performance Promises.

"Learners will be able to identify…" means nothing to a business. "Managers will give feedback within 48 hours of an incident" means everything.

Read →
📊
Measurement

Kirkpatrick Level 4 Is Not a Nice-to-Have — It's the Only Level That Matters

If you can't show how training moved a business metric, you're not measuring impact. You're measuring activity. There's a difference.

Read →
Rapid Design

The 80/20 Rule of eLearning: Most of Your Slides Are Doing Nothing

In every course I've ever audited, at least 30% of screens could be cut without affecting learning outcomes. Your SME's favourite slide is usually the first to go.

Read →
💪
Behaviour Change

Why "Awareness Training" Is the Laziest Phrase in L&D

Awareness without behaviour change is just expensive knowledge. If your training goal starts with "raise awareness," you don't have a training goal yet.

Read →
💬
Manager Enablement

The Most Underused Learning Tool in Every Organisation: The Line Manager

Managers control the work environment that follows training. Without their involvement, your course's half-life is about 72 hours.

Read →
🌟
AI in L&D

AI Will Not Save Your Bad Training. It Will Just Produce Bad Training Faster.

AI tools are extraordinary at generating content. They are terrible at diagnosing whether content is what a learner actually needs. That gap is where instructional designers live.

Read →
📖
Adult Learning

Adults Don't Learn Like Students — So Why Do We Design Training Like They Do?

Andragogy vs pedagogy isn't academic jargon. It's the difference between training people treat as relevant and training they endure on a Tuesday afternoon.

Read →
🚫
Compliance Training

Compliance Training Is Broken by Design — and Here's Who Keeps Breaking It

The brief is "cover the content." The measure is completion. The outcome is unchanged behaviour. Until you change what you're optimising for, you'll keep getting the same result.

Read →
🚀
Career

The Instructional Designer Who Gets Hired Is Not the Best Designer. It's the Best Communicator.

Your Storyline skills matter. But what gets you the contract — and keeps it — is your ability to explain design decisions to stakeholders who don't speak learning science.

Read →

Agree? Disagree?
Let's talk about it.

The best client relationships start with a real conversation. If something here resonated — or made you argue with your screen — reach out.

Start a Conversation →