It’s Not a Skills Gap — It’s a Confidence Crisis
- Sue Cunningham
- Apr 8
Why experienced professionals struggle with AI, even when they’re using it

AI is reshaping work. That’s no longer debatable. What is still poorly understood, and rarely discussed, is what’s actually stopping experienced professionals from engaging with it.
The dominant narrative says it’s a skills problem. Train people. Upskill them. Run workshops on prompt engineering. But what if the biggest barrier to AI adoption for experienced professionals isn’t technical at all?
The evidence paradox
AI usage is rising fast among workers globally — but confidence is not rising with it.
ManpowerGroup’s 2026 Global Talent Barometer, surveying nearly 14,000 workers across 19 countries, found that regular AI usage increased by 13% in the past year while confidence in using the technology fell by 18%. The steepest declines were among the most experienced workers: a 35% drop among baby boomers and 25% among Gen X.
This is not a story about people refusing to engage. It is a story about people engaging — and feeling less sure of themselves as they do.
The confidence gap has consequences beyond how individuals feel. Generation and YouGov’s 2024 survey of over 4,000 workers and employers across five countries found that 90% of US hiring managers would consider candidates under 35 for AI-related roles, compared to just 32% for those over 60 — even though employers consistently rate their experienced staff as strong performers. When confidence drops, and that drop is visible, it shapes how people are perceived, invested in and included.
If the barrier were simply skills, training would close the gap. But the ManpowerGroup data suggests something more uncomfortable: the more people use AI without adequate support, the less confident they become.
That’s not a skills gap. It’s a confidence gap — and it requires a fundamentally different response.
What I’m actually seeing
Through the AI Leadership Circle — a structured, small-group program I run for experienced professionals navigating AI — I’ve worked across multiple cohorts over the past year. The patterns are remarkably consistent. They tell a story the skills narrative misses.
People don’t arrive resistant. When I ask participants to describe how they feel about AI in a single word at the start of the program, almost half express something positive or forward-leaning: curious, excited, hopeful, intrigued, amazed, inspired. About a third express uncertainty or fear: anxious, overwhelmed, scared, trepidatious, confused. The rest sit in the ambivalent middle: conflicted, cautious, FOMO. One person described it as heading up a roller coaster — excited, but knowing there’s a lot of screaming ahead.
Most people aren’t paralysed by fear. They are curious but cautious — and often holding both states at once. That emotional complexity is the territory that skills training doesn’t enter and cannot resolve.
The shift, when it comes, is rarely about learning a new tool. It happens when someone realises that the quality of what AI produces depends entirely on what they bring to it — their context, their judgement, their domain knowledge. Vague inputs produce vague outputs. But when an experienced professional brings depth and specificity, the results are qualitatively different.
That realisation is pivotal. Their expertise is not diminished by AI — it is what makes AI useful. And it is this moment, more than any technical instruction, that moves people from cautious to engaged.
But that shift doesn’t happen through content delivery alone. It happens in environments where people feel safe to be uncertain — where they can watch peers at their level experiment, stumble and improve. MIT Technology Review, in research conducted with Infosys in late 2025, found that 83% of business leaders believe a culture that prioritises psychological safety measurably improves the success of AI initiatives. What I observe in practice aligns closely with that finding: confidence is built through supported experience alongside peers, not through instruction alone.
Across my cohorts, every participant has reported increased confidence by the end of the program. What’s more telling is what follows. Most go on to actively share what they’ve learned with colleagues and direct reports. They move from uncertain users of AI to advocates within their own teams.
That ripple effect — experienced professionals who enter uncertain and leave enabling others — is one of the most consistent patterns I observe and one of the most organisationally significant.
What happens next
What is less visible, but equally important, is what happens after the formal program ends.
Through a monthly conversation circle I facilitate for alumni, I hear a consistent tension. Momentum is hard to maintain. People describe feeling capable by the end of the program, and then unsettled again weeks or months later as the landscape shifts beneath them.
One participant described it as no longer being about tools, but about constantly re-evaluating their own role. Another offered a metaphor that captures the experience precisely: society has been collectively promoted to manager. Everyone is learning how to delegate, except this time the delegate is a machine, and everyone is learning at once.
Confidence in this environment is not something you build once. It needs to be sustained — and that is something most training programs and policy discussions have yet to account for.
The organisational implications
If the barrier is skills, you invest in training programs. If the barrier is confidence, you invest differently — in psychological safety, peer learning and supported experimentation. These are not variations on the same theme. They are fundamentally different interventions.
The organisations getting this right are not simply running more workshops. They are creating conditions where it is safe to experiment, where leaders model their own learning publicly and where the narrative shifts from “you need to upskill” to “your expertise is what makes AI useful.”
Because what often gets lost in the disruption narrative is this: AI is powerful at pattern recognition, content generation and processing speed. It is far less capable when it comes to contextual judgement, ethical reasoning under pressure, reading organisational dynamics and recognising when its own outputs are flawed.
Experienced professionals do not need to compete with AI. They need to learn to direct it. And that requires confidence as much as competence.
The question worth asking
The dominant question around AI and the experienced workforce is often framed as: “Can they adapt?” It is the wrong question. It frames a generation of accomplished professionals as a problem to be solved.
A more useful question is this: what conditions enable experienced professionals to engage with AI from a position of strength?
The answer begins with recognising that some of the most significant barriers are internal, not external. That emotional readiness precedes technical capability. That confidence is not built by being told AI is important — it is built by experiencing, in a supported environment, that your expertise is what makes AI work.
Once that shift happens, it doesn’t just change how someone uses a tool. It changes how they see their own relevance, their own future, and their own capacity to lead through what comes next.
-----
Sue Cunningham is the founder of The Uncertainty Lab. A former executive with 25 years leading transformation and crisis response, she now helps experienced professionals build confidence and agency in the age of AI. She holds credentials in AI strategy from MIT Sloan, Oxford and INSEAD. She runs the AI Leadership Circle for experienced professionals and is writing a book on building agency under uncertainty.
Sources
ManpowerGroup, “2026 Global Talent Barometer” (January 2026). Survey of nearly 14,000 workers across 19 countries.
Generation & YouGov, “Age-Proofing AI: Enabling an Intergenerational Workforce to Benefit from AI” (2024). Survey of over 4,000 workers and employers across five countries.
MIT Technology Review / Infosys, “Creating Psychological Safety in the AI Era” (December 2025). Survey of 500 business leaders.
AI Leadership Circle and AI Conversation Circle program data, The Uncertainty Lab (2025–2026).