01
Scenario-Based Simulation

When the Algorithm Is Wrong

Redesigning AI Governance Training for Senior Leaders

Audience: C-Suite & Senior Operations Managers
Timeline: 8 weeks
Tools: Articulate Storyline 360 · Miro · Google Workspace
Type: Judgment Simulation · Branching Scenario
Problem

The Business Problem

After adopting AI-powered pricing and demand forecasting tools, a financial services organisation experienced three high-profile errors in 12 months: an algorithmic pricing anomaly that caused customer complaints, a forecasting failure that led to overstocking, and a compliance flag from their legal team about undocumented AI decisions.

The L&D team's initial request: "We need a 30-minute compliance course on the EU AI Act."

I pushed back.
Diagnosis

Performance Gap Analysis

The root cause wasn't knowledge. Executives knew the regulations existed. The real gaps were:

  • Leaders couldn't identify when an algorithmic output was suspicious vs. normal variation
  • No one owned the decision to override or escalate
  • The organisation had no mental model for algorithmic risk — they treated AI outputs like spreadsheet outputs
This wasn't a compliance training problem. It was a decision-making and judgment problem.
Strategy

Why Training — and Why This Kind

Training was appropriate here because the knowledge gap, the skill gap, and the attitude gap were all real. A compliance course would have addressed none of them. A scenario simulation would address all three.

  • Recognition-Primed Decision Making (Klein, 1998) — experts make decisions by recognising patterns, not running checklists. The simulation built pattern recognition.
  • Cognitive Load Theory — chunked decision trees; no extraneous information at the point of decision.
  • Dual Coding — data visualisations paired with narrative context to engage both verbal and visual channels.
  • Emotional Engagement — real consequences built into branching paths, delayed the way real mistakes happen.
Design

The Design

7 modules. 14 screens. 9 interaction types.

▶ Key Scenario: The Pricing Anomaly

Learners are placed in the role of Operations Director. An AI pricing tool has flagged a 34% price increase on a high-demand product. Three colleagues give conflicting advice. The learner must:

  1. Identify the right questions to ask
  2. Decide whether to override, escalate, or accept
  3. Document their decision rationale
  4. Face the consequences of their choice
Path A — Override without documentation
Compliance breach 6 weeks later. Audit finding. Executive scrutiny.
Path B — Escalate with documented rationale
Praised during audit. Cited as model governance behaviour.

Wrong answers are not punished with a "Try Again" button. They play out as realistic consequences — delayed, the way real mistakes happen.
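The delayed-consequence mechanic is simple to model: each choice records a flag, and a later node branches on that flag rather than on the last click, so the consequence surfaces weeks later in story time. A minimal sketch (node names, flags, and outcome text are hypothetical stand-ins for the Storyline build):

```python
# Each node shows a prompt; choices set flags; a later node branches on
# the accumulated flags, not the immediately preceding choice.
SCENARIO = {
    "pricing_flag": {
        "prompt": "The AI tool proposes a 34% price increase. What do you do?",
        "choices": {
            "override_undocumented": {"flags": {"documented": False}, "next": "six_weeks_later"},
            "escalate_documented":   {"flags": {"documented": True},  "next": "six_weeks_later"},
        },
    },
    "six_weeks_later": {
        # The delayed consequence keys off the earlier documentation flag.
        "branch_on": "documented",
        "outcomes": {
            True:  "Audit passes; decision cited as model governance behaviour.",
            False: "Compliance breach surfaces; audit finding, executive scrutiny.",
        },
    },
}

def play(choice_key: str) -> str:
    """Walk one choice through to its delayed consequence."""
    node = SCENARIO["pricing_flag"]
    choice = node["choices"][choice_key]
    flags = dict(choice["flags"])
    consequence = SCENARIO[choice["next"]]
    return consequence["outcomes"][flags[consequence["branch_on"]]]

print(play("override_undocumented"))  # the breach plays out six weeks later
```

Separating "what you clicked" from "what state you created" is what lets the same consequence node serve every earlier path.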

Results

Measurable Outcomes

40% reduction in undocumented AI override decisions at 90 days
85% of participants identified 3 algorithmic risk indicators in the post-assessment scenario
0 documentation failures in Q3 (vs. 3 in Q2, prior to training)

Try the Decision Scenario

Experience one branching decision with two consequence paths. See what the simulation feels like from the learner's seat.

Try Sample Module →
Modules: 7
Screens: 14
Interaction types: 9
Decision branches: 23
02
Performance Support Tool

The 5-Minute Field Guide

Replacing a 47-Page SOP Manual With a Tool People Actually Use

Audience: Frontline service staff, 200+ employees
Sector: Hospitality
Tools: Canva · Adobe Illustrator · LMS integration
Type: Performance Support · Job Aid
Problem

The Business Problem

The organisation had a 47-page Standard Operating Procedures manual for customer service recovery. New staff read it during onboarding. Nobody looked at it again. Customer complaint scores were rising. The L&D manager wanted a "refresher course."

Diagnosis

Performance Gap Analysis

I conducted 6 interviews with frontline staff and 3 with supervisors. What I found:

  • Staff knew the theory. They'd been trained.
  • The failure was at the moment of performance — under pressure, in front of an upset customer, they blanked on the recovery steps
  • This is a classic performance support problem, not a training problem
Training more wouldn't fix a working memory problem under stress. A job aid at point-of-need would.
Strategy

Why Not a Course

"A refresher course would have given staff more information they already had. Under stress, people don't retrieve course content — they reach for familiar patterns or freeze. The solution needed to be at the point of performance, not stored in long-term memory."
Design

The Solution: SERVICE Recovery Framework

A laminated 2-sided quick reference card and a mobile-optimised digital version:

S: Stop and listen fully
E: Empathise without defending
R: Resolve with one clear offer
V: Verify satisfaction before leaving
I: Inform supervisor within 2 hours
C: Complete incident log same shift
E: Escalate if unresolved at 24 hours

Side 2 was a decision tree for the 6 most common complaint types: exact language to use, what NOT to say, escalation triggers, and documentation requirements.

Results

Measurable Outcomes

+23% customer recovery satisfaction score within 60 days
34% → 91% incident log completion rate
78% of staff reported feeling more confident in recovery situations
-4 hrs training time saved per new-hire cohort
📋

View the Job Aid

See the SERVICE recovery framework and decision tree in the actual format delivered to staff.

Download Sample →
Format: Print + Digital
Pages: 2 sides
Complaint types covered: 6
Read time: Under 5 min
03
Microlearning System

3 Minutes a Day

Building a Sales Onboarding System That Fits Inside a Busy Schedule

Audience: New sales hires, B2B SaaS
Duration: 90-day learning journey
Tools: Rise 360 · Articulate Storyline · LMS push notifications
Type: Microlearning · Spaced Practice System
Problem

The Business Problem

The 90-day ramp time to first closed deal was too long. New hires were overwhelmed by a 2-week onboarding bootcamp. Manager feedback: "They come out of onboarding not knowing our product, our buyer, or our process. We're basically re-training them on the job."

Diagnosis

The Real Problem: Cognitive Overload

The bootcamp was 40 hours of content delivered over 10 days. Cognitive overload was guaranteed. An Ebbinghaus forgetting-curve analysis suggested that, without spaced retrieval, roughly 70% of the content would be forgotten within a week.

The problem wasn't content quality. It was delivery architecture. Too much, too fast, with no reinforcement system.
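The projection above follows the classic exponential forgetting model, R(t) = e^(-t/s). A minimal sketch, with a stability value chosen purely to reproduce the 70%-forgotten figure (it is an illustrative assumption, not a parameter from the actual analysis):

```python
import math

def retention(days: float, stability: float) -> float:
    """Ebbinghaus-style exponential forgetting: R(t) = e^(-t / s)."""
    return math.exp(-days / stability)

# A stability of ~5.8 days yields ~30% retention at one week, i.e. the
# "70% forgotten within a week" projection (illustrative value only).
print(f"Retention after 7 days:  {retention(7, 5.8):.0%}")
print(f"Retention after 30 days: {retention(30, 5.8):.0%}")
```

Each successful retrieval effectively raises the stability term, which is why distributing practice beats front-loading it.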
Design

The 90-Day Microlearning Journey

Weeks 1–2
Core knowledge — 5 micromodules per week, 3 minutes each. One knowledge check per module. Manager debrief guide for weekly 1:1.
Weeks 3–8
Skill building — 3 scenario-based practice modules per week. Peer learning pairs with structured prompts. Real deal coaching integrated into modules.
Weeks 9–12
Performance reinforcement — 2-question daily retrieval push. Deal review templates tied to learning objectives. Self-assessment against competency framework.
  • Spaced Practice (Ebbinghaus; Cepeda et al.) — content distributed over time, not front-loaded.
  • Interleaving — mixing product, buyer, and process content rather than siloing by topic.
  • Retrieval Practice — daily questions, not re-reading content. Testing as learning, not testing as evaluation.
  • Desirable Difficulty — questions designed to be slightly hard, to maximise encoding.
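The spacing logic behind the daily retrieval pushes can be sketched as an expanding-interval scheduler. The starting gap and multiplier below are illustrative assumptions, not the programme's actual cadence:

```python
from datetime import date, timedelta

def review_schedule(start: date, first_gap: int = 1, factor: float = 2.0,
                    horizon_days: int = 90) -> list[date]:
    """Expanding-interval review dates: gaps of 1, 2, 4, 8, ... days,
    capped at the end of the learning journey."""
    dates: list[date] = []
    gap: float = first_gap
    current = start
    while True:
        current = current + timedelta(days=round(gap))
        if (current - start).days > horizon_days:
            break
        dates.append(current)
        gap *= factor
    return dates

reviews = review_schedule(date(2024, 1, 1))
print([d.isoformat() for d in reviews])
# gaps of 1, 2, 4, 8, 16, 32 days -> reviews on days 1, 3, 7, 15, 31, 63
```

Expanding intervals let each review arrive just as the forgetting curve starts to bite, which is where retrieval effort (the "desirable difficulty") is highest.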
Results

Measurable Outcomes

67 days ramp time (down from 90) — 26% faster to first closed deal
71% knowledge retention at 30 days (vs. 34% with the old bootcamp)
+44% manager satisfaction with new-hire readiness

Try a Micromodule

Experience one 3-minute micromodule from the onboarding journey, including the retrieval practice mechanism.

Try Sample Module →
Total modules: 47
Avg module length: 3 min
Journey duration: 90 days
Daily retrieval questions: 2
04
Interactive Branching Scenario

The Conversation You've Been Avoiding

Building Manager Capability for Difficult Feedback Conversations

Audience: Mid-level managers, professional services
Tools: Articulate Storyline 360 · Character animation
Type: Branching conversation simulation
Trigger: 360 feedback (61% said their manager avoided difficult conversations)
Problem

The Business Problem

360 feedback data showed that 61% of direct reports felt their manager "avoided difficult performance conversations." The impact was measurable: underperformance left unaddressed, team morale declining, HR escalations rising. HR's request: "A course on giving feedback." My response: a simulation, not a course.

Strategy

Why the Distinction Matters

"You cannot learn to have difficult conversations by watching a video about difficult conversations. Motor skills require practice. Interpersonal skills require practice. A course about feedback is like reading about swimming — intellectually useful, practically useless without getting in the water."

The scenario needed to simulate the discomfort, ambiguity, and emotional complexity of a real feedback conversation.

Design

The Branching Conversation Simulation

Characters: Alex (the manager — the learner's role) and Jordan (an underperforming team member with legitimate personal challenges).

  • No "obviously wrong" answers — all choices are defensible, which mirrors reality
  • Emotional state tracker — Jordan's trust level visible in real time, changing with each response
  • Three possible endings based on accumulated decisions, not a single choice
  • Annotated replay — post-scenario debrief with design rationale for each branch
Ending A
Performance improves. Relationship intact. Jordan promoted 6 months later.
Ending B
Performance improves. Relationship damaged. Jordan leaves within a year.
Ending C
Performance deteriorates. Escalation required. HR involvement.
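The "three endings from accumulated decisions" mechanic reduces to a running trust score mapped to ending bands. A minimal sketch, with hypothetical thresholds and per-choice deltas (the real build tracks trust inside Storyline variables):

```python
def resolve_ending(trust_deltas: list[int], start: int = 50) -> str:
    """Accumulate Jordan's trust across the decision points, clamp to
    0..100, then map the final score to one of three endings.
    Thresholds and deltas are illustrative, not from the build."""
    trust = max(0, min(100, start + sum(trust_deltas)))
    if trust >= 70:
        return "A: performance improves, relationship intact"
    if trust >= 40:
        return "B: performance improves, relationship damaged"
    return "C: performance deteriorates, escalation required"

# Mostly empathetic choices vs. mostly defensive ones:
print(resolve_ending([+5, +5, +3, +4, +5]))   # lands in ending A
print(resolve_ending([-5, -4, +2, -6, -3]))   # lands in ending C
```

Because the ending depends on the sum rather than any single choice, no individual decision is "the" mistake, which mirrors how real working relationships erode or recover.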
Results

Measurable Outcomes

61% → 28% direct reports reporting manager avoidance at the 6-month re-survey
-38% HR escalations for unaddressed underperformance
91% of managers rated the simulation as "realistic" or "very realistic"
💬

Try the First Conversation

Experience one 4-choice branch with visible emotional consequence. Then ask yourself: what would you do differently?

Try the Scenario →
Decision points: 18
Possible endings: 3
Choices per branch: 4
Est. completion time: 25–40 min
05
Data-Driven Learning Intervention

The Training That Wasn't Working

Using Learning Analytics to Redesign a Failing Compliance Programme

Context: Annual mandatory compliance training, 4 years running
Completion rate: 94% — but incidents not improving
Tools: LMS analytics · Articulate Storyline 360 · xAPI
Type: Learning audit · Data-driven redesign
Problem

The Business Problem

An organisation had been running mandatory compliance training annually for 4 years. Completion rates were high at 94%. Incident rates were not improving. They asked me to "make the course more engaging."

I asked for the data instead.
Diagnosis

What the Data Showed

I requested assessment scores by question, time-on-task per module, incident data by department and role, and post-training survey results.

  • Average assessment score: 87% — suggesting mastery. It wasn't.
  • Average time on module 3 (highest incident correlation): 2.4 minutes for a module designed to take 12 minutes
  • Incident rate in departments with highest assessment scores was no different from lowest scorers
  • Question 7 (highest-stakes scenario): 82% correct, but 71% of incorrect responses landed on the same distractor — evidence of a shared misconception, not genuine comprehension
Conclusion: Learners were clicking through. The assessment was testing recognition, not application. High scores were measuring nothing real.
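Both red flags (click-through visible in time-on-task, and misses concentrating on one distractor) are cheap to compute from an LMS export. A minimal sketch, assuming per-learner module times in minutes and a list of wrong-answer options per question (field names and thresholds are hypothetical):

```python
from collections import Counter

def flags_click_through(times_min: list[float], design_min: float,
                        threshold: float = 0.5) -> bool:
    """Flag a module when median time-on-task falls under half its
    design time, i.e. learners are clicking through."""
    times = sorted(times_min)
    median = times[len(times) // 2]
    return median < threshold * design_min

def distractor_concentration(wrong_answers: list[str]) -> float:
    """Share of wrong answers landing on the single most-chosen
    distractor; a high share suggests a common misconception."""
    if not wrong_answers:
        return 0.0
    _, top_count = Counter(wrong_answers).most_common(1)[0]
    return top_count / len(wrong_answers)

# Module 3: 2.4 min median vs. a 12-minute design -> flagged.
print(flags_click_through([2.1, 2.4, 2.9], design_min=12))   # True
# Question 7: most misses on one distractor -> misconception, not noise.
print(distractor_concentration(["B", "B", "B", "B", "C"]))   # 0.8
```

The same two checks, run per module and per question, are what separated "94% completion" from actual learning in this audit.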
Design

Three Interventions

  1. Assessment redesign: Replaced multiple choice with scenario-based constructed response. Pass mark: demonstrate the correct procedure, not select it.
  2. Module 3 restructure: Removed 6 slides of text. Added one 8-minute branching simulation. Added time gates to prevent click-through.
  3. Manager activation: Built a 15-minute manager debrief guide — how to discuss the training with their team in the weekly meeting. Training without social reinforcement decays; this extended learning into the workflow.
Results

Measurable Outcomes

2.4 → 9.8 min average completion time on module 3 — evidence of actual engagement
71% first-attempt pass rate on the new assessment (down from 87% — but now measuring real comprehension)
-31% incidents in Q1 post-redesign vs. the same period in the prior year
68% manager debrief adoption within 60 days
📊

Before & After

See a side-by-side comparison of the original click-through assessment and the redesigned scenario-based version.

View Comparison →
Original pass rate: 87%
New pass rate: 71%
Incident reduction: 31%
Engagement time: ↑

Have a performance problem that needs solving?

Tell me the gap. I'll tell you whether training will close it — and if it will, exactly how I'd design it.

Start a Project →