From prompts to agents: the shift L&D can’t ignore

By Tom Bryant

Artificial intelligence is evolving faster than most organisations can keep up with. Over the past year, the focus has been on prompting: helping people ask better questions and get better answers from AI.

But a new shift is already underway. We are moving from asking AI questions to designing how work gets done. This shift, from prompts to AI agents, presents both a challenge and an opportunity for L&D professionals. 

From tools to workflows 

Early use of AI was simple. Ask a question, receive an answer. 

But real work is not a single step. It is made up of stages involving judgement, iteration and synthesis. AI agents reflect this reality. An AI agent is a structured system that can plan and carry out a task across multiple steps within defined boundaries. Instead of prompting repeatedly, we define a workflow once and allow the agent to support or execute it.

For example, a research workflow might include gathering evidence, summarising inputs, identifying risks and drafting outputs. What was once a series of prompts becomes a repeatable process. 
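The stages above can be sketched as a simple pipeline. This is a minimal illustration, not a real agent framework: the function names and their placeholder logic are assumptions standing in for the retrieval, summarisation and review tools an actual agent would call.

```python
# Illustrative sketch of the research workflow described above.
# Each function is a placeholder for a tool an agent would invoke.

def gather_evidence(topic):
    # A real agent would call search or retrieval tools here.
    return [f"source A on {topic}", f"source B on {topic}"]

def summarise(sources):
    # Condense the gathered inputs into one summary.
    return "; ".join(sources)

def identify_risks(summary):
    # Flag anything that needs human judgement before use.
    return ["unverified claims"] if "source" in summary else []

def draft_output(summary, risks):
    # Produce a consistent briefing from the earlier stages.
    return f"Briefing: {summary} | Risks: {', '.join(risks) or 'none noted'}"

def research_workflow(topic):
    # The workflow is defined once; the agent (or a person) runs it end to end.
    sources = gather_evidence(topic)
    summary = summarise(sources)
    risks = identify_risks(summary)
    return draft_output(summary, risks)

print(research_workflow("AI in L&D"))
```

The point is the structure: each prompt that was once typed by hand becomes a named, reviewable step in a repeatable process.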

The shift is simple but profound. The value is no longer only in the prompt; it is in the process.

This is already happening 

This is not future technology. It is already emerging in the tools people use every day. Platforms such as Microsoft Copilot are moving beyond chat into workflows, integrating across documents, meetings and data, and enabling early forms of task automation. In practice, instead of asking AI to summarise a document each time, a team might design an agent that gathers sources, extracts key themes, and drafts a consistent briefing in minutes. 

Findings from Microsoft’s 2025 Work Trend Index were already highlighting a shift from task-based work towards AI-enabled workflows. In other words, AI is not just helping us do work faster; it is starting to reshape how work is done.

For L&D, this changes the focus. We are no longer just teaching tool use; we are helping people rethink how work itself is designed.

From doing work to designing work 

As organisations begin to adopt AI agents, several shifts become clear: 

  • Work moves from completing tasks to designing workflows. 
  • People move from writing prompts to structuring processes. 
  • Focus shifts from outputs to outcomes. 
  • Capability moves from individual productivity to team effectiveness. 

This is not a technical shift. It is a thinking shift. 

The key skill is no longer getting a good answer from AI. It is designing a good process that AI can support. 

The emerging skills that matter 

As AI takes on more execution, the skills that remain most valuable are human. 

Staff need to break work into clear steps, design instructions with clarity, evaluate outputs critically and apply judgement rather than rely blindly on results. For example, designing clear agent instructions involves several considerations, such as:

  • Purpose & scope – ‘your boundaries are…’ 
  • Safety, privacy & compliance – ‘ensure outputs are impartial…’ 
  • Handling ambiguity – ‘when information is missing take the action of…’ 
  • Style & communication guidelines – ‘tailor output to intended audience of…’ 
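The four considerations above can be captured as a structured configuration that is assembled into the instructions an agent receives. The keys and wording below are illustrative assumptions, not any specific platform's schema.

```python
# Illustrative mapping of the four instruction considerations to a
# structured agent configuration. Keys and wording are assumptions,
# not a real platform's schema.

agent_instructions = {
    "purpose_and_scope": "Your boundaries are: internal policy research only.",
    "safety_privacy_compliance": "Ensure outputs are impartial and contain no personal data.",
    "handling_ambiguity": "When information is missing, flag the gap and ask for clarification.",
    "style_and_communication": "Tailor output to an intended audience of senior managers.",
}

def build_system_prompt(instructions):
    # Join each consideration into the single instruction block the agent is given.
    return "\n".join(f"- {text}" for text in instructions.values())

print(build_system_prompt(agent_instructions))
```

Writing instructions this way makes each consideration explicit and reviewable, which matters once governance and accountability enter the picture.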

And this is where the conversation becomes more serious.  Users also need a strong sense of risk awareness and governance, particularly in environments where accountability and transparency matter. 

These are not technical skills. They are critical thinking skills. 

The role and opportunity for L&D 

Traditional AI training has focused on tools and prompting. That is no longer enough. 

L&D now has an opportunity to move beyond this and embed AI into real work. This means teaching workflow thinking, building capability in designing and refining AI-supported processes, and developing judgement and risk awareness. 

In practice, this involves helping people map their work, identify where AI can add value, and experiment with simple agent-based workflows. 

At the same time, L&D has a role in shaping responsible use. As highlighted in the UK Government’s AI Playbook, AI must be used in ways that are ‘responsible and beneficial… safeguarding the security, wellbeing, and trust of the public.’ 

As AI agents begin to carry out work, considerations such as data use, transparency, auditability and accountability, including Freedom of Information implications, become more important.  

This is a moment where L&D can move from supporting change to helping lead it. 

