I want to be clear from the outset: I use AI tools every week. I use them to generate first drafts, to explore scenario variations, to research unfamiliar domains quickly, and to process feedback from stakeholders. They are genuinely useful. They save real time. And they have no ability whatsoever to determine whether your course is addressing the right problem.

This distinction matters enormously as AI content generation becomes cheaper, faster, and more convincing. The risk is not that AI will replace instructional designers. The risk is that organisations will use AI to dramatically accelerate the production of training that was never going to work — and will be surprised when it does not work.

What AI Is Actually Good At

AI language models are, at their core, extraordinarily capable text prediction machines trained on vast corpora of human writing. They are very good at generating content that sounds like what a competent human would write about a topic. They can produce scenarios, write assessment questions, draft facilitator guides, suggest learning objectives, and create course outlines in a fraction of the time these tasks previously required.

For straightforward content domains — explaining a process, describing a policy, presenting factual information — AI-generated content can range from adequate to excellent with relatively light human review and editing. If your training genuinely is about information transfer in a low-complexity domain, AI tools can compress your development time significantly.

They can also support the ideation phase of design. When I am working on a branching scenario, I sometimes use AI to generate a dozen variations on a decision point, which I then edit, combine, or discard. The AI generates quantity; I select quality. This is a productive workflow.

What AI Cannot Do

AI cannot conduct a performance gap analysis. It cannot interview subject matter experts and surface the tacit knowledge that never makes it into documentation. It cannot observe a customer service team in action and identify that the real problem is not knowledge of the recovery process but anxiety about escalating to a manager. It cannot recognise that the stakeholder's training request is actually a symptom of a poorly designed incentive structure that training will not fix.

These are diagnostic skills. They require contextual understanding, human judgment, and the ability to navigate organisational dynamics — to push back diplomatically when the brief is wrong, to ask the uncomfortable question about whether training is actually the right solution, to see the pattern across multiple stakeholder conversations that reveals what nobody wants to say directly.

No current AI system has these capabilities. This is not a temporary limitation waiting to be overcome in the next model release. It reflects a fundamental difference between generating plausible text about a topic and understanding the human and organisational system the training is meant to change.

The Acceleration Problem

Here is where the risk becomes concrete. AI tools dramatically reduce the cost and time of production. When production is cheap, the pressure to produce increases. When production is cheap, the relative investment in upfront diagnosis decreases — because why spend three days doing a needs analysis when you can have a course prototype ready by Thursday?

This logic is exactly backwards. The value of upfront diagnosis is not in proportion to the time it takes. It is in proportion to the resources you are about to commit to a solution that may or may not address the real problem. Making the production phase faster does not reduce the value of getting the diagnosis right. It increases the penalty for getting it wrong — because now you can produce wrong solutions at industrial scale.

The correct use of AI in learning design is to compress the production phase after the design phase has been done rigorously by a human being. The analysis, the problem definition, the decision about whether training is the right intervention, the identification of the performance gap and its causes — these remain human work. Not because AI could not eventually assist with them, but because right now, in 2025, the technology is not there and the methodology is not established.

What This Means for Instructional Designers

The skills that AI is worst at replacing are the skills that define the best instructional designers: consulting ability, systems thinking, organisational navigation, performance analysis, the capacity to say "no, this is the wrong brief" and be believed. These are the skills to develop, demonstrate, and defend.

The skills that AI is replacing are the production skills: writing, editing, formatting, generating initial drafts. This is not a threat to the profession. It is a reorganisation of where human time should be spent. Less time on first-draft content generation. More time on diagnosis, design, and the human relationships through which training actually gets implemented and supported.

"AI can generate content at remarkable speed. The question it cannot answer is whether your learners needed that content in the first place."