AI adoption is accelerating. Capability often isn’t.
In many organisations, people experiment with an AI feature, try a prompt, generate a result… then stall when the task becomes complex, the output feels unreliable, or the risk is unclear. What’s missing isn’t enthusiasm. It’s fluency: the practical confidence to use AI safely, intentionally and effectively within real workflows.
Lander & Rogers' AI in Practice series previously explored where AI creates risk — in customer-facing chatbots, in M&A transactions (who owns the output?), and in governance contexts where boards rely on AI-generated summaries. This article complements that discussion by focusing on the capability behind safe, value-driven adoption. Because purposeful innovation isn't about implementing the latest tool. It's about building skills that help people make better decisions, deliver stronger outcomes and reduce friction without compromising trust, quality or accountability.
The shift: from AI use to AI fluency
Many teams remain at the surface level, mostly using AI for drafting quick communications, summarising notes, or searching internal documents. That's a useful starting point. But it isn't fluency.
AI fluency means knowing how to choose the right tool for the task, frame effective prompts with appropriate constraints, verify outputs rigorously, integrate AI into repeatable workflows and, critically, know when not to use it.
That final point matters. As executives and boards increasingly use AI for summaries, minutes and analysis, the expectation remains unchanged: humans are accountable, informed and exercising judgment. Fluency makes that accountability practical in day-to-day work.
A simple, tiered framework for capability uplift
To make progress visible and scalable, we use a three-tier model designed to move people from experimentation to confident, contextual use.
Safe and useful foundations
At this stage, people learn to refine prompts, summarise and reformat content, brainstorm effectively and understand what information should never be shared with AI tools.
You know this tier is working when people can consistently produce a usable first draft, articulate how they verified the output, and demonstrate awareness of basic data-handling obligations.
Workflow integration
The next shift is from one-off use to repeatable patterns. Teams develop shared prompt libraries for common tasks, apply structured review checklists (accuracy, tone, completeness), and use playbooks for recurring scenarios such as meeting notes, research summaries and client updates. Escalation pathways are clear when confidence in an output is low.
Progress becomes visible when teams reuse proven patterns rather than reinventing prompts each time, quality improves without additional rework, and review processes become routine rather than reactive.
Optimisation and responsible scale
At this level, AI is embedded thoughtfully across functions — from business development and operations to delivery and risk. Organisations apply lightweight evaluation methods such as sampling, benchmarking and red-teaming, and integrate automation where appropriate, always with a human in the loop.
Here, fluency is measurable. Teams can articulate risks and controls for specific use cases, track adoption and quality, and coach others in safe and effective practices.
Turning policy into practical tools
Policies are essential, but they only reduce risk when they translate into daily behaviour. The key is converting principles into practical enablement assets that people actually use.
Prompt libraries provide structured templates embedded within existing systems such as Outlook or SharePoint. These might include instructions to summarise material into a board-ready brief while highlighting assumptions and unknowns, draft options while labelling speculation, or convert analysis into a neutral, action-oriented email.
Output review checklists reinforce critical habits, particularly for higher-risk tasks. Effective reviews consider accuracy (what sources were relied upon and what may be missing), completeness, tone and potential bias, confidentiality, and clear human sign-off.
Workflow playbooks bring everything together. For common processes, a one-page standard can clarify when AI is appropriate, what inputs are permitted, the prompt pattern to use, required review steps, examples of acceptable outputs, and escalation pathways. This is especially important in front-line environments where the cost of a "hallucinated" response is high.
Together, these tools align with a broader governance principle: AI can assist, but humans verify and remain accountable.
Measuring fluency, not just adoption
Tracking logins or message counts tells only part of the story. Real fluency shows up across adoption, quality and readiness.
Adoption becomes meaningful when usage is active, role-specific and sustained over time, particularly when approved prompt libraries and playbooks are consistently used.
Quality improves when first-pass outputs are stronger, turnaround times accelerate without compromising standards, and fewer flawed outputs reach stakeholders.
Feedback mechanisms, including in-tool feedback, discussion sessions and rated prompt libraries, help convert real-world lessons into updated guidance.
Readiness checks, such as pulse surveys and scenario-based spot checks, validate that teams understand limitations and apply review processes consistently. For boards and senior leaders, this is particularly important when AI supports the preparation of meeting materials or summaries.
Adoption is a starting point. Capability is the goal.
Embedding fluency through continuous learning
AI fluency is not achieved through a single training session. It develops through repetition, practical application and visible progression.
Effective programs often include induction modules covering essentials and verification practices, role-based pathways tailored to job functions, short monthly challenges tied to real work, and community forums where prompts and lessons are shared. Micro-credentials can reinforce progress across the three tiers and signal capability maturity.
This approach is especially important in sensitive contexts such as meeting documentation and workplace processes, where confidentiality, accuracy and judgment must remain front of mind.
Programs that build stickiness through practice
Capability becomes durable when learning is practical and social.
One format Lander & Rogers has implemented involves a team-based, scenario-driven challenge. Participants work through ambiguous tasks under time pressure, apply review checklists, compare outputs and refine approaches together, reinforcing structured prompting and critical evaluation skills.
Another is a structured AI Fluency Series delivered in short, repeatable sessions. These focus on foundational practices that apply across tools: when to use AI, when not to, how to prompt effectively, and how to manage risk and ethical considerations.
Both approaches treat AI like any other professional capability: something learned through practice, supported by tools, reinforced by community and aligned to purpose.
Why this matters
AI is already embedded in most workflows. Sometimes it's visible, sometimes not. The objective isn’t to slow innovation. It’s to make it meaningful: aligned to outcomes, embedded responsibly and supported by real capability.
Legal and governance perspectives rightly emphasise that accountability does not disappear when AI enters the room. Building AI fluency is how we make that accountability practical.
If you would like support in designing and embedding an AI fluency program within your organisation, reach out to our Artificial Intelligence experts.