Building an AI Literacy Programme

AI Strategy · Intermediate · 45 min read · Published 2026-02-18 · Last reviewed 2026-03-05 · AI Primer

The organisations that will do best with AI are not those with the best technology. They are those with the most AI-literate workforces. This is already playing out in the data. 86% of employers expect AI to transform their business by 2030, yet only one in ten workers currently possesses in-demand AI skills. The companies closing this gap (JPMorgan Chase, Amazon, PwC) are investing billions in structured AI literacy programmes and seeing measurable returns: 20–40% productivity gains, higher retention, faster innovation.

Most AI training vendors will not tell you this, but the majority of corporate AI training programmes fail. They fail because they treat AI literacy as a technology problem when it is a human one. They fail because they default to self-paced e-learning with completion rates of 3–15%. They fail because they teach tools without building understanding.

This guide is designed to help you avoid those failures. It draws on academic research, industry surveys, and real-world case studies to provide a practical, evidence-based framework for building AI literacy programmes that actually change how your people work. Whether you are an HR leader designing a company-wide initiative, an L&D professional scoping curriculum, or a business leader making the case for investment, everything here is grounded in what the evidence says works, not what sounds good in a pitch deck.

The business case, and the urgency

The AI skills gap is no longer a future concern. It is an active drag on organisational performance today.

The World Economic Forum's 2025 Future of Jobs Report, drawing on data from over 1,000 employers across 55 economies, projects that 170 million new jobs will be created and 92 million displaced by 2030, with 39% of workers' core skills changing or becoming outdated within that timeframe. McKinsey's 2025 workplace research found that 46% of leaders identify skill gaps as the single most significant barrier to AI adoption, while Deloitte reports 68% of executives face a moderate-to-extreme AI skills gap.

The financial stakes are real. PwC's 2025 Global AI Jobs Barometer, analysing roughly one billion job postings across 15 countries, found that industries most exposed to AI saw productivity growth nearly quadruple, from 7% to 27% over six years, while least-exposed industries stagnated. Workers with AI skills now command a 56% wage premium, up from 25% the prior year. A Harvard Business School field experiment with 758 BCG consultants found that those using GPT-4 completed 12.2% more tasks, 25.1% faster, and produced results of 40% higher quality. Below-average performers saw the largest gains, a 43% improvement.

Yet most organisations are failing to act at scale. The Microsoft/LinkedIn 2024 Work Trend Index found that only 39% of workers using AI at work had received any company-provided training, and just 25% of companies planned to offer generative AI training. This vacuum has consequences: 78% of AI users are now bringing their own tools to work without organisational approval, creating ungoverned security, compliance, and quality risks that most leadership teams are only beginning to understand. As McKinsey argued in 2025, companies that treat upskilling as a training rollout miss the larger point: it is a change management effort.

The macroeconomic context reinforces the urgency. PwC's modelling projects AI could contribute up to $15.7 trillion to global GDP by 2030. McKinsey estimates generative AI alone could add $2.6–4.4 trillion annually. Organisations that prioritise career development are 51% more likely to consider themselves frontrunners in generative AI adoption. And the cost of inaction is rising: 41% of employers now plan to reduce staff whose skills are becoming irrelevant.

The question has shifted from whether to invest in AI literacy to how quickly you can get it right.

What "AI literacy" actually means

Before designing a programme, it helps to be precise about what you are building towards. "AI literacy" is not simply knowing how to use ChatGPT. The foundational academic definition comes from Long and Magerko (2020), whose influential paper at the ACM CHI conference defined it as a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool in the workplace and beyond. Their framework identified 17 competencies organised across five themes: understanding what AI is, what AI can do, how AI works, how AI should be used, and how people perceive AI.

This matters because it clarifies something that most corporate AI training gets wrong. Knowing which buttons to click in a specific tool is not AI literacy. It is tool training, and tool training becomes obsolete when the tool changes. AI literacy is the durable layer beneath it: the conceptual understanding, critical thinking, and ethical awareness that allows someone to evaluate any AI tool, adapt to new ones, and make sound professional judgements about when and how to use them.

Subsequent researchers have refined and simplified these frameworks for practical use. Ng, Leung, Chu, and Qiao (2021) distilled them into four practical aspects based on a review of 30 peer-reviewed articles: Know and Understand (foundational AI knowledge), Use and Apply (effective tool use), Evaluate and Create (critical evaluation and solution design), and Ethical Issues (awareness of bias, privacy, and societal impact). This four-part model has become widely adopted in practitioner contexts for its clarity.

More recently, UNESCO released two AI competency frameworks in September 2024 (one for students and one for teachers), both structured around a three-level progression from Acquire through Deepen to Create. The OECD and European Commission jointly published an AI Literacy framework in 2025 with four core domains (Engage with AI, Create with AI, Manage AI, and Design AI) encompassing 22 specific competencies. The OECD's accompanying report drew a distinction that matters for programme design: most workers need general AI literacy, not specialised technical skills, yet the majority of current training programmes focus on advanced capabilities. This creates a gap at the foundational level, which is exactly where most of your workforce sits.

A systematic review of AI literacy measurement scales published in npj Science of Learning (Lintner, 2024) evaluated 16 validated instruments and identified the most promising for organisational use. These include the MAILS (Meta AI Literacy Scale) by Carolus et al. (2023), the SNAIL (Scale for the Assessment of Non-experts' AI Literacy) by Laupichler et al. (2023), specifically designed for non-technical professionals, and the more recent AICOS (AI Competency Objective Scale, 2025), a 51-item performance-based test that overcomes self-report bias. One of the most important findings from this review: subjective and objective AI knowledge are often weakly correlated. People's self-assessment of their AI skills does not reliably reflect what they actually know. This has direct implications for needs assessment. Self-report surveys alone will mislead you.

The regulatory picture has changed. The EU AI Act, which became partially effective on 2 February 2025, has given AI literacy legal force. Article 4 requires all providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy among their staff. This applies regardless of the AI system's risk classification and extends to contractors and service providers. Non-compliance is treated as an aggravating factor for penalties that can reach €35 million or 7% of global annual turnover.

The European Commission's guidance published in May 2025 encourages a layered approach: general AI awareness for all employees, role-specific training for those interacting with specific AI systems, and advanced compliance training for those in governance roles. While no formal certification is mandated, organisations are expected to maintain internal records. The IAPP's analysis recommends establishing an AI working group that includes representatives from senior leadership, process owners, risk management, compliance, and security to oversee programme design.

For any organisation operating within or serving the EU market (which includes most multinational businesses), AI literacy is no longer a strategic option. It is a compliance requirement.

A practical competency model for organisations

Pulling these frameworks together into something actionable, six competency domains emerge for non-technical professionals.

  • Conceptual understanding means recognising AI in everyday tools, understanding the difference between narrow and general AI, and knowing how AI systems learn from data, without requiring coding ability.
  • Critical evaluation means verifying the accuracy of AI outputs, recognising potential biases, and applying domain expertise to assess AI recommendations rather than accepting them uncritically.
  • Effective AI interaction covers prompt engineering (structuring requests to get useful outputs), understanding how prompt design affects output quality, and knowing when AI tools are appropriate versus when they are not.
  • Ethics and responsibility encompasses algorithmic bias, data privacy implications, accountability for AI-assisted decisions, and intellectual property considerations.
  • Data literacy involves evaluating data quality, understanding how data collection affects AI outcomes, and interpreting AI-generated analytics.
  • Domain application means identifying AI use cases in one's specific profession and integrating AI responsibly into existing workflows.

These competencies layer into a practical tiering model that maps to organisational roles:

Tier 1 — AI Awareness. Can explain what AI is in plain language. Recognises AI in products and services they use. Understands that AI has limitations. Aware of ethical considerations. This is the minimum for all employees.

Tier 2 — Functional Literacy. Uses AI tools effectively for work tasks. Crafts well-structured prompts. Critically evaluates outputs before acting on them. Understands data privacy implications of AI tool use. This is the target for most individual contributors.

Tier 3 — Strategic Proficiency. Identifies optimal AI use cases for their function. Designs AI-augmented workflows. Evaluates vendor tools and solutions. Mentors colleagues on AI usage. This is the target for managers and senior professionals.

Tier 4 — AI Leadership. Shapes AI strategy and governance. Drives adoption initiatives across the organisation. Ensures regulatory compliance. Champions an ethical AI culture. This is for executives and programme leads.
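
For organisations that want to use this model in a skills audit, it can be encoded directly. The sketch below is illustrative only: the tier numbers and summaries mirror the model above, while the role mapping, names, and gap logic are hypothetical placeholders to adapt to your own role architecture.

```python
from dataclasses import dataclass

# Illustrative encoding of the four-tier model described above.
TIERS = {
    1: "AI Awareness: plain-language understanding; recognises AI and its limits",
    2: "Functional Literacy: effective tool use, prompting, critical evaluation",
    3: "Strategic Proficiency: use-case design, vendor evaluation, mentoring",
    4: "AI Leadership: strategy, governance, compliance, culture",
}

# Hypothetical mapping from role family to target tier.
TARGET_TIER_BY_ROLE = {
    "all_employees": 1,
    "individual_contributor": 2,
    "manager": 3,
    "executive": 4,
}

@dataclass
class Employee:
    name: str
    role: str           # key into TARGET_TIER_BY_ROLE
    assessed_tier: int  # result of an objective assessment

def literacy_gap(employee: Employee) -> int:
    """Return how many tiers the employee sits below their role's target (0 = on target)."""
    target = TARGET_TIER_BY_ROLE[employee.role]
    return max(0, target - employee.assessed_tier)

if __name__ == "__main__":
    team = [
        Employee("A. Chen", "manager", assessed_tier=1),
        Employee("B. Okoye", "individual_contributor", assessed_tier=2),
    ]
    for e in team:
        print(e.name, "gap:", literacy_gap(e))
```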

Assessing your organisation's starting point

Designing a programme without understanding your baseline is guesswork. Several established frameworks exist for organisational AI readiness assessment.

Microsoft's AI Readiness Assessment measures preparedness across seven pillars (Business Strategy, AI Governance and Security, Data Foundations, AI Strategy and Experience, Organisation and Culture, Infrastructure, and Model Management) categorising organisations into five maturity stages from Exploring to Realising. Deloitte offers both data readiness and trustworthy AI frameworks covering dimensions including privacy, fairness, and accountability. McKinsey's QuantumBlack division focuses assessments on three categories: technology, employees, and safety.

For measuring individual AI literacy (which you will need to do both before and after training), the validated instruments mentioned earlier (MAILS, SNAIL, AICOS) offer different approaches. The critical finding from the Nature systematic review bears repeating: you should use both self-report instruments (for attitudes and confidence) and objective tests (for actual knowledge). Self-assessment alone will mislead you, and the gap between perceived and actual competence is often largest among those who need training most.
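
Because the gap between perceived and actual competence is itself a useful signal, it is worth computing explicitly once you have both data sets. A minimal sketch, assuming paired self-report and objective scores normalised to a 0–100 scale; the figures and the 20-point flagging threshold are invented for illustration, not drawn from any of the instruments above.

```python
from statistics import correlation, mean

# Invented example data: paired scores per participant, both normalised to 0-100.
self_report = [78, 85, 60, 90, 55, 72]   # e.g. MAILS-style confidence ratings
objective   = [52, 61, 58, 65, 50, 70]   # e.g. AICOS-style test scores

# A weak correlation here would echo the systematic-review finding that
# perceived and actual AI knowledge often diverge.
r = correlation(self_report, objective)

# Per-person overconfidence: positive values mean "thinks they know more than they do".
gaps = [s - o for s, o in zip(self_report, objective)]

print(f"Pearson r (self-report vs objective): {r:.2f}")
print(f"Mean overconfidence gap: {mean(gaps):.1f} points")
print("Flag for foundational training:",
      [i for i, g in enumerate(gaps) if g > 20])
```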

The Johnson & Johnson case study, documented by MIT CISR, illustrates a more advanced approach: using AI itself to assess workforce capabilities. They defined 41 future-ready digital skills grouped into 11 capabilities, identified employee data sources across HR, recruiting, learning, and project management systems, and trained a machine learning model to infer skills proficiency passively. A critical success factor: leadership made explicitly clear that skills insights were de-identified and would never be used in performance reviews. Without that assurance, employees would have gamed the system or refused to participate.

Beyond individual skills, your readiness assessment should cover data infrastructure quality and accessibility, leadership alignment and commitment (McKinsey found that only 39% of Fortune 100 companies disclosed board oversight of AI as of 2024), cultural openness to experimentation, existing digital literacy baselines, and governance frameworks already in place.

Designing for how adults actually learn

This is where most AI literacy programmes go wrong. Not in choosing the wrong content, but in delivering it in ways that contradict decades of research on how adults learn. The science of adult learning provides clear principles, and most corporate training violates them.

Start with motivation, not content

Malcolm Knowles' theory of andragogy (the study of how adults learn, as distinct from pedagogy for children) identifies six assumptions about adult learners that should shape every design decision. Adults need to understand why they are learning something before they engage with what they are learning. They bring substantial prior experience that should be acknowledged and built upon, not ignored. They prefer self-direction over being told what to do. They are most ready to learn when the content solves an immediate, real problem. They prefer problem-centred learning over subject-centred learning. And they are primarily driven by internal motivation (relevance and mastery) rather than external rewards like certificates.

Applied to AI training, this means every module must answer the question "Why does this matter for my actual work?" within the first two minutes. Content must connect directly to tasks the learner performs weekly, not to abstract scenarios. Learners need autonomy over pace and sequence. And the most effective motivator is not fear of falling behind but the tangible experience of AI making their work noticeably better.

Build around experience, not lectures

David Kolb's Experiential Learning Theory is particularly relevant for AI literacy. His four-stage cycle (Concrete Experience, Reflective Observation, Abstract Conceptualisation, Active Experimentation) maps directly onto effective AI training. First, use an AI tool for a real task. Then reflect on what worked and what did not. Connect those observations to broader principles about AI capabilities and limitations. Finally, apply the learning to a different work scenario. This cycle repeats, building deeper understanding with each iteration.

This aligns with the well-established 70-20-10 model: roughly 70% of learning comes from on-the-job experience, 20% from social and peer interactions, and only 10% from formal instruction. The implication is clear: if your AI literacy programme consists primarily of courses and workshops, you are investing almost exclusively in the 10% channel while neglecting the 90% where real learning happens. Sandbox environments, safe spaces where learners can experiment with AI tools without production risk, are not a nice-to-have. They are essential.

The forgetting curve demands spaced practice

The evidence on memory retention is grim for traditional corporate training formats. Ebbinghaus's foundational research established that people forget approximately 70% of new information within 24 hours and up to 90% within a week without reinforcement. More recent studies confirm these findings: learners who practised retrieval scored 87%, while those who merely re-read material scored approximately 53%. The Association for Talent Development reports that only 12% of learners apply new skills from training without structured follow-up.

Yet most corporate AI training still follows a massed model: intensive one-off workshops that work directly against the biology of memory consolidation. The alternative is spaced repetition, distributing learning across multiple shorter sessions with increasing intervals between reviews. This is not a marginal improvement. It is the difference between training that sticks and training that evaporates.
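
What "increasing intervals" looks like in practice can be sketched simply. The example below generates a reinforcement calendar from an initial session date using a doubling-interval heuristic; the specific gaps are an assumption for illustration, not a prescription from the research cited above.

```python
from datetime import date, timedelta

def spaced_review_schedule(first_session: date, reviews: int = 5,
                           first_gap_days: int = 2, multiplier: float = 2.0) -> list[date]:
    """Return review dates with expanding gaps (e.g. +2, +4, +8, +16, +32 days)."""
    schedule, gap, current = [], first_gap_days, first_session
    for _ in range(reviews):
        current = current + timedelta(days=round(gap))
        schedule.append(current)
        gap *= multiplier
    return schedule

# Example: a prompt-evaluation module delivered on 1 June gets five short
# follow-ups (quiz, applied task, peer discussion, and so on) over roughly two months.
for d in spaced_review_schedule(date(2026, 6, 1)):
    print(d.isoformat())
```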

Microlearning outperforms traditional formats

Microlearning (modules of 3–10 minutes focused on a single concept or skill) achieves approximately 80% completion rates compared to roughly 20% for conventional long-form e-learning. It boosts knowledge retention by 25–60% compared to traditional formats and can reduce total training time by 45–80% while maintaining productivity. Cognitive Load Theory explains the mechanism: breaking complex information into smaller segments reduces extraneous cognitive load, allowing the brain to process and encode information more deeply.

For AI literacy specifically, microlearning works because the field changes rapidly and learners need frequent, small updates rather than infrequent comprehensive overhauls. A ten-minute module on evaluating AI output for hallucinations is more likely to be completed, retained, and applied than a two-hour workshop covering the same topic alongside six others.

Cohort-based learning transforms completion rates

The starkest evidence concerns the difference between cohort-based and self-paced learning. Self-paced MOOCs, the format most organisations default to when they buy platform licences, achieve completion rates of just 3–15%. Fewer than half of enrollees look at more than one lecture.

Cohort-based programmes routinely achieve 85–96% completion. AltMBA reports 96%. Harvard's case-method online programmes achieve 85%. Esme Learning reports 98–100%. The mechanism is straightforward: cohorts create accountability through peer presence, transform passive consumption into active knowledge construction through discussion and collaboration, and maintain momentum through structured deadlines. Learners have a 69% greater chance of retaining information in cohort programmes compared to self-paced formats.

The practical implication: if you are choosing between giving 500 employees access to a self-paced AI course and running 20 cohorts of 25 people through a structured programme, the cohort model will produce dramatically better outcomes, even though it requires more coordination.

Blended beats everything

The U.S. Department of Education meta-analysis by Means, Toyama, Murphy, and Baki, examining 50 effect sizes from 45 studies, found that blended approaches (combining online and face-to-face elements) produced significantly larger learning advantages than either purely online or purely in-person instruction. The important finding: improved outcomes are the result of better pedagogy, not the mere presence of technology. Blending formats forces instructional designers to think more carefully about which activities are best suited to which medium, and the variety itself maintains engagement.

For AI literacy, a blended model might combine asynchronous microlearning modules (conceptual foundations), live cohort sessions (discussion, problem-solving, peer learning), supervised sandbox time (hands-on tool experimentation), and on-the-job application challenges (real-world practice with manager check-ins). Each element serves a different learning purpose, and together they cover the full 70-20-10 spectrum.

Why most AI training programmes fail

Understanding failure modes is as important as understanding best practices. The global corporate training market has reached $400 billion (Josh Bersin Company, 2026), yet 74% of senior leaders admit their organisations still lack the skills to compete in an AI-driven economy. Josh Bersin has called this "The Learning Paradox."

AI-specific training faces even steeper challenges. BCG's research across 1,400 C-suite executives found that while AI was a top-three technology priority, 66% expressed ambivalence or dissatisfaction with progress and only 6% had begun upskilling meaningfully. An estimated 70–85% of AI projects fail to reach production, often because the workforce is not prepared to integrate outputs into operational reality.

The failure modes are well-documented and remarkably consistent across the research literature.

Lack of relevance is the most corrosive. 91% of employees want personalised, relevant training, yet one in three says their organisation's training content is outdated. Generic "Introduction to AI" courses that teach the history of neural networks to a marketing manager who needs to know how to use AI for customer segmentation are not just ineffective. They actively erode trust in the programme.

Absence of management support compounds the problem. 66% of HR practitioners cite it as a challenge, and McKinsey warns that if employees are trained on AI but still measured against old KPIs, adoption will stall. When managers do not visibly use AI themselves, or when performance metrics do not reflect new capabilities, employees correctly perceive that the organisation is not serious.

One-size-fits-all approaches ignore that a finance director and a customer service representative need fundamentally different AI competencies. A programme that treats everyone identically will bore some participants and overwhelm others while serving none of them well.

Training that is too theoretical produces the statistic that 70% of employees feel they have not mastered the skills they need. Deloitte finds learners are 75% more likely to retain knowledge when they can apply it immediately. Abstract explanations of how transformer architectures work do not help someone who needs to draft a client proposal with AI assistance this afternoon.

No follow-up or reinforcement means the forgetting curve erases most learning within days. A single training event, however excellent, produces almost no lasting behaviour change without structured follow-up.

And a technology-first mindset, deploying tools before building understanding, creates the widespread shadow AI problem while failing to change how work actually gets done.

Chesamel's 2025 analysis of AI upskilling fatigue identified five additional sticking points: training not treated as an ongoing process, learning modules that are generic while actual work is specific, conflicting messages from leadership about job security, over-focus on tool skills versus transferable human capabilities, and no alignment between training investment and behaviour change measurement.

What distinguishes effective programmes? The evidence converges on a consistent set of principles: strategic alignment with business objectives, blended multi-modal design, cohort-based structures with peer accountability, spaced reinforcement rather than one-off events, microlearning integration, experiential hands-on application, role-specific relevance, active manager involvement, psychological safety for experimentation, continuous rather than event-based cadence, and measurement that goes beyond completion rates to track actual behaviour change and business impact. Organisations that adopt what Bersin calls "dynamic enablement" (continuous learning embedded in daily workflow rather than periodic training events) are 6× more likely to exceed financial targets and 2× more likely to innovate.

A practical curriculum structure

A well-designed AI literacy programme operates on multiple tiers simultaneously, serving different audiences with different depths while maintaining a coherent overall architecture.

The foundational layer: for everyone

Every employee, regardless of role, should develop a baseline understanding of what AI is and is not, including its current capabilities and genuine limitations. They should grasp how AI works conceptually (how models learn from data) without needing coding ability. They need hands-on experience with approved tools, including prompt engineering fundamentals. They should be able to critically evaluate AI outputs, recognising hallucinations, bias, and confidence errors. They must understand responsible use, including data privacy, fairness, and intellectual property considerations. And they need to know their organisation's specific AI policies and approved tool list.

PwC's foundational e-learning curriculum covers this in approximately two hours. The Digital Education Council's "AI Literacy for All" programme runs to four hours. This foundational training satisfies EU AI Act Article 4 requirements and should become part of standard employee onboarding.

For executives and senior leaders

The focus shifts from tool usage to strategic understanding: AI's competitive implications for the industry, ROI frameworks for evaluating AI investments, governance and risk management at board level, workforce transformation planning, and regulatory compliance obligations. Programmes from Harvard Business School Online, MIT xPRO, and Oxford Saïd address this tier, ranging from 6 to 12 weeks.

For middle managers

Managers are the most critical tier for programme success because they sit at the junction between strategy and execution. They need to identify AI use cases within their function, lead teams through AI-driven change, evaluate vendor tools and solutions, create team cultures of safe experimentation, and measure AI impact on productivity. This tier should include training-the-trainer capabilities, because managers are the primary mechanism through which learning gets reinforced on the job. If your managers cannot model AI usage and coach their teams, your programme will stall.

For individual contributors

This tier requires the deepest hands-on skill development: proficient prompt engineering (the most in-demand AI skill per AWS 2024 research), using AI effectively for writing, research, data analysis, and summarisation, integrating AI tools into daily workflows, critical evaluation before acting on AI outputs, and, just as importantly, understanding when not to use AI at all.

Role-specific competencies

Layered on top of the foundation, each function needs tailored content.

Finance and accounting professionals need AI-powered forecasting, anomaly detection, automated reconciliation, and audit capabilities. JPMorgan's internal "ChatCFO" tool demonstrates what role-specific AI integration looks like at scale. Marketing teams should develop competencies in AI-assisted segmentation, content generation, predictive analytics, and campaign optimisation. McKinsey's research shows marketing represents 28% of total potential economic value from generative AI, the highest of any function. HR professionals need skills in AI-powered recruitment screening (with bias awareness), engagement analytics, workforce planning, and learning personalisation. Operations and supply chain teams should focus on demand forecasting, predictive maintenance, logistics optimisation, and quality control. Legal and compliance functions require competence in contract review, regulatory monitoring, AI governance frameworks, and understanding the liability implications of AI-assisted decisions.

Gartner recommends combining three learning methods across all tiers: formal learning (courses augmented with just-in-time content), social learning (communities of practice, coaching, centres of excellence), and on-the-job experiential learning (AI experiments, real-world application, initiative support). No single method is sufficient on its own. Effective programmes deliberately blend all three.

Managing the human side

Every L&D professional knows that the hardest part of training is not building the content. It is getting people to engage with it honestly. AI literacy programmes face a unique version of this challenge: the subject matter itself triggers anxiety about professional relevance and job security.

52% of US workers are worried about AI's future workplace impact, and EY's AI Anxiety in Business Survey found that 75% are concerned AI will make certain jobs obsolete. This anxiety is not irrational, but it actively undermines the learning and adoption that organisations need. Fear triggers neurological threat states that reduce cognitive capacity for experimentation and openness to change. An emerging concept, FOBO (Fear of Becoming Obsolete), captures how employee fears have evolved from "will I have a job?" to "will I still matter in five years?"

Psychological safety is not optional

Amy Edmondson's research on psychological safety (a shared belief that it is safe to express ideas, ask questions, take risks, and admit mistakes without fear of negative consequences) is directly relevant. Her foundational 1999 study demonstrated that team psychological safety drives learning behaviour, which in turn drives performance. Google's Project Aristotle confirmed this: psychological safety was the single strongest predictor of team effectiveness.

Edmondson has addressed AI specifically: psychological safety and building fearless organisations are more important than ever in the age of AI, because navigating uncertain terrain requires willingness to experiment, fail, and learn openly. The core paradox facing organisations is that the people who most need to experiment with AI, those in routine cognitive roles, experience the highest psychological threat from it. Most AI rollouts violate both conditions for proactive learning (autonomy and psychological safety) by mandating tool use while simultaneously triggering job security fears.

A Psychology Today analysis found that teams in top-quartile psychological safety are dramatically more likely to experiment with new AI tools. Practical strategies for building this safety include creating explicit "risk bands" that transform vague existential threats into specific, manageable boundaries ("experiment freely in these areas; seek approval in those"), establishing sandbox environments for exploration without production risk, separating criticism of AI outputs from judgments of personal competence, and, perhaps most importantly, leaders modelling vulnerability by publicly sharing their own AI experiments and failures.

Change management structures that work

Kotter's 8-step model works well for enterprise-wide AI literacy initiatives: create urgency through market data and competitive analysis, form a guiding coalition (AI working group including senior sponsors, process owners, risk, compliance, and security representatives), establish a clear vision for AI's role in the organisation, communicate relentlessly across multiple channels, empower action by removing barriers and providing tools, generate visible short-term wins, consolidate gains by building on successes, and anchor changes in organisational culture and processes.

Prosci's ADKAR model complements this at the individual level: building Awareness of why AI matters for the individual, Desire to participate and learn, Knowledge through structured training, Ability through practice and application, and Reinforcement through recognition, feedback, and updated performance expectations. MIT Sloan research found that companies achieving the highest AI ROI invest 70% more in change management, workflow redesign, and capability building than their peers. The technology is the easy part. The transformation is the hard part.

AI champions are the most powerful lever

The most underused accelerator of AI adoption is the AI champions network: a distributed group of volunteer peer advocates who model AI usage, coach colleagues, and surface friction points in real time.

Citi built a network of over 4,000 "AI Accelerators" and 25–30 senior AI Champions, achieving over 70% adoption of firm-approved AI tools across 182,000 employees in 84 countries. Champions spend just 30–60 minutes per week embedded in real work: showing AI in action, helping peers overcome obstacles, sharing examples, and feeding practical feedback back to the programme team. Critically, Citi did not mandate AI use. Adoption was driven socially, not hierarchically.

GitHub's internal playbook treats AI adoption as a change management challenge, not a technology rollout, with volunteer champions who serve as equal parts coach, translator, and feedback loop. Research consistently shows that organisations with champion networks achieve implementation success rates three times higher than those relying solely on top-down mandates.

The reason is simple: people copy people. When a colleague in the next desk demonstrates that AI made their quarterly report faster and better, the effect is more powerful than any corporate communication or mandatory training module. Champions make AI adoption feel normal, achievable, and safe, which is exactly what anxious employees need.

Measuring what matters

If you cannot demonstrate that your AI literacy programme changes behaviour and produces business results, you will eventually lose your budget. The good news is that robust measurement frameworks exist. The bad news is that fewer than one in four companies currently use them.

The Kirkpatrick Model adapted for AI literacy

The Kirkpatrick Model remains the most widely used training evaluation framework and maps directly to AI literacy programmes across four levels.

Level 1 — Reaction captures whether participants found the training engaging and relevant. Post-session surveys, Net Promoter Scores, and engagement analytics (completion rates, time spent, drop-off points) provide this data. This is necessary but insufficient. High satisfaction does not guarantee learning.

Level 2 — Learning measures actual knowledge acquisition. This is where the validated instruments discussed earlier (AICOS, SNAIL, MAILS) become valuable, applied as pre- and post-assessments. The combination of objective knowledge tests and self-report confidence measures gives you a complete picture, and the gap between the two is itself informative.

Level 3 — Behaviour tracks whether learning translates into practice. This is the level most organisations skip, and it is the most important. Are employees actually using AI tools? How frequently? For which tasks? AI tool usage analytics, manager evaluations, and 30/60/90-day follow-up surveys provide this evidence. Without Level 3 data, you are measuring training delivery, not training impact.

Level 4 — Results measures business outcomes: productivity gains, cost savings, quality improvements, error reduction, innovation metrics, and speed-to-market changes. The Harvard/MIT/Wharton 2023 study found AI-assisted workers completed tasks 37% faster. That kind of productivity data, tracked against trained versus untrained teams, makes the business case concrete.

Jack Phillips' ROI Methodology extends Kirkpatrick with an explicit financial calculation: ROI = (Total Programme Benefits – Total Programme Costs) / Total Programme Costs × 100%. Phillips emphasises isolating training effects from other variables using control groups or trend analysis. This level of rigour is worth applying to high-investment strategic programmes.
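
A worked example with entirely hypothetical figures shows how the calculation comes together; the cost and benefit categories below are placeholders, and in practice the benefit figures should come from control-group or trend-isolated estimates, as Phillips recommends.

```python
# Hypothetical programme for 500 employees; all figures are illustrative.
costs = {
    "platform_licences": 150_000,
    "custom_content": 120_000,
    "facilitation_and_champion_time": 90_000,
    "participant_time_opportunity_cost": 240_000,
}
benefits = {
    "productivity_gains_isolated": 520_000,   # e.g. via control-group comparison
    "reduced_external_spend": 110_000,
    "error_and_rework_reduction": 75_000,
}

total_costs = sum(costs.values())
total_benefits = sum(benefits.values())

# Phillips-style ROI percentage: (benefits - costs) / costs x 100
roi_pct = (total_benefits - total_costs) / total_costs * 100
print(f"Total costs: ${total_costs:,}")
print(f"Total benefits: ${total_benefits:,}")
print(f"ROI: {roi_pct:.0f}%")
```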

AI-specific adoption metrics

Beyond the Kirkpatrick framework, several AI-specific metrics provide actionable operational data.

Active AI User Rate (the percentage of employees regularly using approved AI tools) is the most important adoption metric. Zapier tracked their own internal adoption climbing from 63% in late 2023 to 97% by 2025; a benchmark of 60–80% active users within 12 months is healthy for most organisations.

Usage intensity matters as much as breadth. Healthy engagement runs approximately 15–25 prompts per active user per day. Low intensity may indicate that employees have accounts but are not integrating AI into real workflows.

Manager adoption rate is a critical leading indicator. If managers are not visibly using AI, their teams will not either. Track this separately and address gaps early.

Time-to-proficiency (days from first tool access to consistent daily usage) should benchmark at 7–14 days with adequate training and support. If it is taking longer, your onboarding process needs work.

Experiment-to-production rate reveals whether AI exploration is translating into permanent workflow changes. If employees are experimenting but not adopting, there is likely a process or permission barrier that training alone cannot solve.
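
If your approved AI tools expose usage logs, most of these metrics can be computed directly from them. The sketch below assumes a simple event log with user, role, day, and prompt-count fields; the field names, thresholds, and figures are illustrative, not taken from any particular vendor's analytics.

```python
from collections import defaultdict
from datetime import date

# Illustrative event log: one row per user-day of activity on an approved AI tool.
events = [
    {"user": "u1", "role": "manager", "day": date(2026, 3, 2), "prompts": 18},
    {"user": "u1", "role": "manager", "day": date(2026, 3, 3), "prompts": 22},
    {"user": "u2", "role": "ic",      "day": date(2026, 3, 2), "prompts": 4},
]
headcount = {"manager": 40, "ic": 360}   # total employees per role family
ACTIVE_THRESHOLD_DAYS = 2                # days of use in the window to count as "active"

days_used = defaultdict(set)
prompts = defaultdict(int)
role_of = {}
for e in events:
    days_used[e["user"]].add(e["day"])
    prompts[e["user"]] += e["prompts"]
    role_of[e["user"]] = e["role"]

active = {u for u, d in days_used.items() if len(d) >= ACTIVE_THRESHOLD_DAYS}

active_user_rate = len(active) / sum(headcount.values())
manager_adoption = sum(1 for u in active if role_of[u] == "manager") / headcount["manager"]
intensity = (sum(prompts[u] for u in active) /
             sum(len(days_used[u]) for u in active)) if active else 0.0

print(f"Active AI user rate: {active_user_rate:.1%}")
print(f"Manager adoption rate: {manager_adoption:.1%}")
print(f"Prompts per active user per day: {intensity:.1f}")
```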

Governance, policy, and guardrails

AI literacy programmes do not operate in a vacuum. They need governance structures and policies that define the boundaries within which employees can safely experiment and work.

Only 28% of organisations currently have formal comprehensive AI policies, despite the majority of employees already using AI tools at work, many without approval. This gap creates risk: a 2024 study found that 38% of AI-using employees admitted sharing sensitive work data with AI tools.

A robust AI acceptable use policy should cover:

  • the purpose and scope of AI use
  • clear definitions of key terms
  • specific acceptable and prohibited uses
  • a list of approved tools and vendors
  • data privacy and security requirements (what can and cannot be entered into AI tools)
  • ethical guidelines
  • human oversight requirements for AI-generated outputs
  • quality assurance processes
  • intellectual property and attribution rules
  • governance structure and escalation paths
  • monitoring and compliance mechanisms
  • a regular review cadence (quarterly at minimum, given the pace of change)

The policy should be treated as a living document, not a one-time compliance exercise. Secureframe's analysis recommends involving legal, IT, HR, compliance, and business leadership in drafting, and running the policy through scenario testing before publication. Write it in plain language, not legalese, and cover its existence and contents in the AI literacy programme itself.

Ethical frameworks from Microsoft (fairness, reliability, privacy, inclusiveness, transparency, accountability), Google, and IBM provide useful templates. However, Gartner notes that only 25% of organisations have operationalised ethics governance principles, despite 79% of executives calling AI ethics important. Bridging this gap between stated values and operational reality is a key function of the AI literacy programme.

Budgeting and resources

The 2025 ATD State of the Industry Report found average direct learning spend of $1,054 per employee, with organisations investing 2.9% of revenue in training, the highest level in five years. Training Magazine's 2025 data puts the per-learner figure at $874, with significant variation by company size: $468 at large firms versus $1,091 at small firms.

AI-specific training costs range widely. Basic online AI courses run $500–1,000 per person. Comprehensive specialised programmes cost $2,000–4,000 per person. Enterprise-scale packages from training vendors range from €12,000 to €250,000 depending on scope and customisation. The ROI evidence supports the investment: Deloitte's Q4 2024 survey of 1,854 executives found that 74% of organisations say their most advanced generative AI initiatives are meeting or exceeding ROI expectations.
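
To see how these unit costs roll up, a simple blended-budget model helps; every figure below is a placeholder to replace with your own quotes and headcounts.

```python
# Hypothetical blended budget for a 1,000-person organisation; all figures are placeholders.
headcount = 1_000
foundational_platform_per_user = 120      # off-the-shelf licence slice for the basic tier
cohort_participants = 400                 # employees going through facilitated cohorts
cohort_cost_per_participant = 900         # facilitation, content, sandbox time
specialist_participants = 50
specialist_cost_per_participant = 3_000   # comprehensive external programmes

total = (headcount * foundational_platform_per_user
         + cohort_participants * cohort_cost_per_participant
         + specialist_participants * specialist_cost_per_participant)

print(f"Total budget: ${total:,}")
print(f"Blended cost per employee: ${total / headcount:,.0f}")
```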

Build versus buy

The decision should be guided by specificity needs. Off-the-shelf platforms (LinkedIn Learning at $350–500 per user annually at enterprise scale, Coursera for Business, Udemy Business, Pluralsight) are cost-effective for foundational AI literacy and general skills content. 78% of Fortune 100 companies already use LinkedIn Learning.

Custom-built content is necessary for organisation-specific use cases and workflows, internal policy and governance training, role-specific applications involving proprietary tools and data, and EU AI Act compliance documentation that references the specific AI systems your organisation deploys. The recommended approach is hybrid: use commercial platforms for the foundational tier and invest in custom development for the organisation-specific layers that make training relevant and applicable.

Staffing and infrastructure

Running a programme requires an AI Literacy Programme Lead, instructional design capacity, subject matter experts (from both the AI and business domains), change management expertise, technology support, legal and compliance input, and an executive sponsor with genuine authority and visibility.

The AI Champions Network (volunteer peer advocates spending 30–60 minutes per week) is the most scalable resource lever. Citi's model of 4,000+ Accelerators and 25–30 Champions for 182,000 employees provides a useful ratio. Technology infrastructure needs include an LMS or learning experience platform, AI sandbox environments, approved AI tool licences, a community and discussion hub, and assessment tools. Training Magazine 2025 reports average spending on learning tools of approximately $291,000 per organisation.

What the pioneers have learned

The organisations investing most aggressively in AI literacy share clear commonalities, even as they differ in scale and sector. Their experiences are the closest thing we have to a proven playbook.

JPMorgan Chase: AI literacy as strategic infrastructure

JPMorgan has onboarded 200,000 employees to its proprietary LLM Suite within eight months of launch, with roughly half using generative AI tools daily. The bank added prompt engineering to onboarding for all new hires, took a segment-by-segment approach with different training for different business units, and emphasised learn-by-doing over theoretical instruction. The bank now has more than 450 AI use cases in production or development, with projected value approaching $2 billion, and CEO Jamie Dimon's personal, visible championing of AI was repeatedly cited as a factor in adoption momentum. The bank also began incorporating AI usage into performance evaluations, a signal that this is not optional exploration but a core competency expectation.

The lesson: executive sponsorship is not ceremonial. When the CEO treats AI literacy as strategic infrastructure on the same level as financial controls or client relationship management, the organisation follows.

PwC: peer-led social learning at scale

PwC invested $1 billion over three years, achieving 95% employee engagement with generative AI by year-end 2023 and over 360,000 hours voluntarily dedicated to AI skill-building. Their innovations demonstrate the power of social, peer-led, experiential learning. "Prompting parties," real-time collaborative AI experimentation in risk-free environments, lowered the barrier to entry by making first contact with AI social and fun rather than intimidating and individual. A 350-person GenAI Super User Network distributed expertise across the organisation. And the PowerUp trivia game introduced gamification without undermining the serious purpose.

The lesson: making AI exploration social, visible, and peer-driven overcomes resistance in ways that mandatory training cannot.

Amazon: meeting people where they are

Amazon's AI Ready initiative trained 2 million people in one year, ahead of their initial target. Their most popular course was non-technical: "Introduction to Generative AI — The Art of the Possible." They discovered that social media and streaming formats (YouTube, Twitch) outperformed traditional e-learning for engagement, particularly among younger employees. They also found that people who started with hands-on learning progressed faster and retained more than those who started with theory.

The lesson: understand how your workforce actually consumes information and meet them there, rather than forcing them into formats that look professional but do not engage.

AT&T: the long game

AT&T's Future Ready initiative, described by Fortune as the most ambitious programme for retraining workers in corporate history, invested $1 billion to reskill 250,000 employees when roughly 100,000 were in hardware-focused roles likely to become obsolete. With 88% of employees participating and 50% of roles filled internally, AT&T showed that sustained, long-term commitment (now running over a decade) outperforms one-off initiatives.

The lesson: workforce transformation is not a project with an end date. It is a permanent organisational capability.

Singapore: a national model

Singapore's approach through AI Singapore, SkillsFuture, and the National AI Strategy 2.0 offers a government-level model. The city-state targets tripling its AI workforce to 15,000, provides subsidised AI training for all citizens, and has achieved 75% of surveyed workers regularly using AI tools. Singapore ranks first per capita on Stanford's Global AI Vibrancy Index and on the IMF's AI Preparedness Index.

The lesson: when AI literacy is treated as national infrastructure, supported by funding, governance, and cultural expectation, adoption follows quickly.

Five insights that go beyond conventional wisdom

The research and case studies in this guide surface five conclusions that challenge common assumptions about AI literacy programmes.

First, AI literacy is now a legal obligation, not just a strategic advantage. The EU AI Act's Article 4 requirements, effective since February 2025, mean that every organisation operating within or serving EU markets must demonstrate that staff working with AI systems have sufficient AI literacy. The business case has shifted from "should we invest?" to "how do we comply?"

Second, the completion rate gap between cohort-based and self-paced learning is the most actionable finding for programme design. With cohort-based programmes achieving 85–96% completion versus 3–15% for self-paced MOOCs, organisations that default to self-paced e-learning are statistically designing for failure. The coordination cost of cohort-based delivery is real, but the difference in outcomes is so large that it justifies the investment.

Third, self-assessed AI competence is unreliable. Validated research consistently shows weak correlation between subjective and objective AI knowledge. Pre-programme surveys that ask employees to rate their own AI skills will systematically mislead you. Use objective assessment instruments for honest needs analysis.

Fourth, AI champion networks consistently deliver three times higher implementation success than top-down mandates. Yet most organisations still rely on mandatory training rollouts and mass email announcements. Investing in a structured champion network, even a small one, will produce better adoption outcomes than a more expensive programme without one.

Fifth, the organisations seeing the highest AI ROI invest 70% more in change management than their peers. The bottleneck is not technology. It is not even skills. It is the human and cultural dimensions: psychological safety, management reinforcement, workflow redesign, and sustained commitment.

Where to go from here

Building AI literacy is not an L&D project. It is an organisational transformation that demands executive sponsorship, evidence-based instructional design, cultural change, sustained investment, and rigorous measurement. The organisations that treat it as such (JPMorgan, PwC, Amazon, Singapore) are opening a lead that will be difficult for late movers to close.

The window for establishing AI literacy as an organisational capability is narrowing. Not because AI will stop evolving, but because the competitive advantage of having an AI-literate workforce compounds over time. Organisations that start now build institutional knowledge, develop internal expertise, and create cultures of experimentation that accelerate every subsequent AI initiative. Those that wait will find themselves behind not just on skills but on the culture of learning that makes rapid adaptation possible.

The frameworks are here. The case studies show what works. What remains is execution.

Key sources and further reading

Academic research

Long, D. & Magerko, B. (2020). "What is AI Literacy? Competencies and Design Considerations." Proceedings of the 2020 CHI Conference, ACM.

Ng, D.T.K., Leung, J.K.L., Chu, S.K.W. & Qiao, M.S. (2021). "Conceptualizing AI literacy: An exploratory review." Computers and Education: Artificial Intelligence, 2.

Lintner, P. (2024). "A systematic review of AI literacy scales." npj Science of Learning, Nature.

Edmondson, A. (1999). "Psychological Safety and Learning Behavior in Work Teams." Administrative Science Quarterly, 44(2).

Means, B. et al. (2013). "The Effectiveness of Online and Blended Learning." Teachers College Record, 115(3).

Industry reports

World Economic Forum — Future of Jobs Report 2025

PwC — 2025 Global AI Jobs Barometer

McKinsey — AI in the Workplace 2025

OECD — Bridging the AI Skills Gap (2025)

EU AI Act — Article 4 AI Literacy Requirements

European Commission — AI Literacy Guidance (2025)

Practitioner guides

Gartner — AI Literacy: Why and How Business Leaders Must Build It

BCG — Five Must-Haves for Effective AI Upskilling

McKinsey — Redefine AI Upskilling as a Change Imperative

IAPP — Designing an AI Literacy Program

UNESCO — AI Competency Framework

Key takeaways

  • AI literacy is now a legal requirement, not just a nice-to-have. The EU AI Act's Article 4, effective since February 2025, requires any organisation deploying AI systems to ensure staff have sufficient AI literacy, with penalties up to €35 million or 7% of global turnover. If you serve EU markets, this is a compliance issue today.
  • Self-paced e-learning is statistically designed to fail. Cohort-based programmes achieve 85–96% completion rates versus 3–15% for self-paced MOOCs. Most organisations default to giving employees platform licences and hoping for the best. The coordination cost of running cohorts is real, but the difference in outcomes is so large it's not really a close call.
  • People's self-assessment of their AI skills is unreliable. Research consistently shows weak correlation between how competent people think they are with AI and how competent they actually are. Pre-programme surveys that ask employees to rate themselves will mislead you. Use objective assessment instruments or you're building on bad data.
  • Champion networks outperform mandates by 3×. Citi's network of 4,000+ peer AI advocates drove 70%+ adoption across 182,000 employees without mandating AI use. The mechanism is simple: people copy people. A colleague showing you something useful at their desk is more persuasive than any corporate training module.
  • The bottleneck isn't technology or even skills. It's change management. Organisations seeing the highest AI ROI invest 70% more in change management, workflow redesign, and cultural enablement than their peers. Psychological safety, manager reinforcement, and updated performance expectations matter more than curriculum quality.
