The Forgetting Curve Is Real — and Your LMS Is Making It Worse
Ebbinghaus showed we forget 70% of new information within 24 hours. Your one-and-done course isn't fighting that. It's feeding it.
This is where I think out loud. No thought leadership fluff. No listicles. Just honest, research-grounded perspectives on why most corporate learning fails — and what actually works.
If you disagree with something, even better. Let's talk.
94% completion. The training worked. Did it? Or did 94% of your workforce learn to click Next fast enough to avoid a follow-up email from HR?
Completion rate is the most commonly reported learning metric in corporate L&D. It is also the least meaningful. It measures a behaviour — opening and finishing a module — not the behaviour you actually care about, which is doing something differently at work.
Here's how completion rates get inflated without anyone lying:
I've audited compliance programmes with 90%+ completion rates where the average time on the highest-stakes module was under 3 minutes — for a module designed to take 12. The data was there the whole time. Nobody looked at it.
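That kind of audit needs no special tooling. Here's a minimal sketch in Python, assuming a hypothetical LMS export with `module` and `minutes_spent` fields and a table of designed seat times (all names and numbers are illustrative, not from any real programme):

```python
from collections import defaultdict
from statistics import median

# Designed seat time per module, in minutes (illustrative values).
DESIGNED_MINUTES = {"data_privacy": 12, "fire_safety": 8}

def flag_rushed_modules(rows, threshold=0.5):
    """Flag modules whose median time-on-module is under
    `threshold` (here 50%) of the designed seat time —
    completions that are probably click-throughs."""
    times = defaultdict(list)
    for row in rows:
        times[row["module"]].append(float(row["minutes_spent"]))
    flagged = {}
    for module, spent in times.items():
        designed = DESIGNED_MINUTES.get(module)
        if designed and median(spent) < threshold * designed:
            flagged[module] = median(spent)
    return flagged

# Example: a 12-minute module most people "finish" in ~3 minutes.
rows = [
    {"module": "data_privacy", "minutes_spent": "2.5"},
    {"module": "data_privacy", "minutes_spent": "3.0"},
    {"module": "data_privacy", "minutes_spent": "3.5"},
    {"module": "fire_safety", "minutes_spent": "7.0"},
]
print(flag_rushed_modules(rows))  # {'data_privacy': 3.0}
```

Ten lines of analysis on data the LMS already has. The point isn't the code; it's deciding, before you run it, what counts as "rushed" and what you'll do about the modules it flags.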
Start tracking one better metric this quarter. Decide in advance what a "good" number looks like, and what you'll do if you see a bad one. That discipline — measure, interpret, act — is what separates a learning function from a content factory.
The most valuable thing I can tell a client is: you don't need training. Half the performance problems I see are process problems, tool problems, or incentive problems dressed up as knowledge gaps.
When a manager says "my team needs training on X," they're usually right that there's a problem. They're often wrong about the cause.
Genuine knowledge and skill gaps account for maybe 20–30% of performance problems. The rest are environmental. Training can't fix a bad process. It can't fix a tool that doesn't work.
Say so. Clearly. With evidence. Clients remember the consultant who told them the truth. They trust them for the next project.
A scenario where the wrong answer is obviously wrong teaches nothing. Real decisions are hard because all the options are defensible. If your learner can guess the right answer without reading the scenario, your scenario is decoration. Three failure modes show up again and again:
1. The wrong answers are obviously wrong. If one option is "document everything carefully" and another is "ignore the problem and go to lunch," you haven't built a decision.
2. Failure leads to a "Try Again" button. In real life, mistakes have consequences that unfold over time. Your scenario should do the same.
3. The scenario is generic. Effective scenarios use specific, named characters in recognisable situations. The more specific and real, the more transfer.
Every project I take on starts with the same five questions. Not a needs analysis template. Five questions that separate a training problem from a performance problem.
1. What is the observable behaviour gap? Not "they need to understand X" — but "they are doing A when they should be doing B."
2. Is the cause actually knowledge or skill? Run through Gilbert's six causes of performance problems before assuming training. Only proceed if the answer is knowledge or skill.
3. What does success look like? Define it before designing anything. If you can't agree on a measure, you can't agree on a solution.
4. What is the simplest intervention that would produce the behaviour change? Build the minimum, measure it, iterate.
5. What happens after the course? Training without reinforcement decays. Manager follow-up? Spaced retrieval? Job aids?
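The "spaced retrieval" piece can be as simple as a calendar of expanding review intervals generated at design time. A minimal sketch of one common expanding-interval heuristic (the starting gap and doubling factor here are illustrative defaults, not a research-backed prescription):

```python
from datetime import date, timedelta

def retrieval_schedule(course_end, first_gap_days=2, factor=2, reviews=4):
    """Expanding-interval review dates after a course ends:
    with these defaults, +2, +4, +8, +16 days. Each retrieval
    attempt is spaced further out than the last."""
    dates = []
    gap = first_gap_days
    current = course_end
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= factor
    return dates

print(retrieval_schedule(date(2024, 3, 1)))
# Four review dates: 3 Mar, 7 Mar, 15 Mar, 31 Mar
```

Whether those reviews are manager check-ins, two-question quizzes, or job-aid prompts matters less than the fact that they exist and someone owns them.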
Shorter reads. Bigger questions. The kinds of things that should come up in every L&D conversation but rarely do.
"Learners will be able to identify…" means nothing to a business. "Managers will give feedback within 48 hours of an incident" means everything.
If you can't show how training moved a business metric, you're not measuring impact. You're measuring activity. There's a difference.
In every course I've ever audited, at least 30% of screens could be cut without affecting learning outcomes. Your SME's favourite slide is usually first to go.
Awareness without behaviour change is just expensive knowledge. If your training goal starts with "raise awareness," you don't have a training goal yet.
Managers control the work environment that follows training. Without their involvement, your course's half-life is about 72 hours.
AI tools are extraordinary at generating content. They are terrible at diagnosing whether content is what a learner actually needs. That gap is where instructional designers live.
Andragogy vs pedagogy isn't academic jargon. It's the difference between training people treat as relevant and training they endure on a Tuesday afternoon.
The brief is "cover the content." The measure is completion. The outcome is unchanged behaviour. Until you change what you're optimising for, you'll keep getting the same result.
Your Storyline skills matter. But what gets you the contract — and keeps it — is your ability to explain design decisions to stakeholders who don't speak learning science.
The best client relationships start with a real conversation. If something here resonated — or made you argue with your screen — reach out.
Start a Conversation →