Why Smart Leaders Are Slowing Down Their AI Adoption
I used AI to filter a long list of freelancers. Quick, efficient, and initially impressive. But it almost fooled me. Some had clearly used AI to inflate their profiles. The language gave them away in the corners that most people don’t read. Without instinct, experience, and a bit of human detective work, I’d have made the wrong call.
That’s the trap. Leaders can look faster, leaner, more decisive – while actually making worse choices.
AI isn’t the strategist. It’s the sparring partner. It can push you, stretch you, and reveal openings you hadn’t seen. But like any sparring partner, it can also repeat the same moves, overreach, or leave gaps. The leader’s job is to step into the ring with discipline – sharpened by the spar, but never outsourcing the fight.
The Speed Trap
I’ve seen the same with my own drafts. AI produces something fluent and confident, but key points go missing, details shift, and sometimes it invents facts. The pace is dazzling. The risk is hidden.
That’s The Speed Trap: like racing a sports car with blindfolded passengers cheering you on. You’re moving fast, but no one can see the cliff edge.
The Evidence Behind the Experience
This tension is playing out at scale.
- McKinsey (2024): Leaders and employees want to move faster with AI. Yet half of employees worry about inaccuracy and cybersecurity – even as most say they trust their own company to “get it right.”
- Global losses: More than $60 billion was lost to AI hallucinations in 2024.
- Decision blind spots: Nearly half of leaders admit they’ve already made bad calls based on incorrect AI outputs. Executives guessed that only 4% of employees rely on AI for 30% or more of their work. The reality is 13% – more than three times higher.
- Confidence gap: 68% of leaders believe AI is often misrepresented as flawless. 54% feel unprepared to lead through its growth.
The picture is consistent across contexts. McKinsey shows employees want to move faster, but already fear inaccuracy. The Slack Workforce Index shows adoption stalling at 33% as workers hesitate to embed AI into daily work. At the same time, regulators are tightening the screws – the EU AI Act (from August 2025) will raise the bar on transparency and risk.
Different industries, same hesitation. Internal caution and external scrutiny are converging.
Boards won’t excuse poor calls because “the model said so.” At AI speed, accountability doesn’t disappear. It sharpens.
Are Leaders Starting to Slow Down?
While the dominant pressure is still to accelerate, there are signs of reassessment.
The Slack Workforce Index (Aug 2024) found worker adoption barely moved in a quarter (32% to 33%) after earlier surges – evidence of hesitation to embed tools into daily workflows. Yet at the organisational level, adoption is accelerating: 78% of companies reported using AI in 2024. The gap is clear: leaders are switching tools on, but employees aren’t always switching behaviours. That tension is a leadership challenge, not a technical one.
And at the top level, a Dataiku survey reported by Business Insider found that 70% of CEOs admit to anxiety about their AI strategies. Far from charging ahead blindly, many are pausing to re-examine governance and pace. That pause isn’t retreat. It’s recognition that advantage won’t come from speed alone, but from using AI with discipline.
The Strategic “So What”
If AI is a sparring partner, each level of leadership has its own discipline:
- CEO / Board: Own accountability. Build governance that ties every decision to a human owner.
- Business Unit / Middle Manager: Design the checkpoints. Make the pause a feature, not a flaw.
- Team Lead / Practitioner: Challenge fluency. Don’t just accept smooth outputs – ask what’s missing.
Different levels, same principle. AI can deliver speed. You must keep hold of judgement.
Advantage in the Commodity Era of AI
Everyone has access to the same tools. The advantage lies in how you use them.
It comes from three things:
- Judgement at scale – fewer unforced errors, stronger trust, a reputation that compounds.
- Culture as the edge – teams that feel safe to challenge outputs, are expected to show their working, and are rewarded for getting the decision right, not just fast.
- Pattern recognition – leaders who spot when AI misses context or repeats bias will make sharper calls than those who don’t.
Yes, in some markets, reckless speed can create a short-term first-mover advantage; however, it rarely lasts. The gains vanish when trust is broken, mistakes multiply, or regulators step in. The edge isn’t a choice between fast and slow. It is moving at a pace your governance and culture can sustain.
And timing matters. Adoption is spreading, and regulation is tightening. The EU AI Act’s next enforcement wave (August 2025) raises the bar on transparency and risk. The window to set your own standards before they are imposed is narrowing.
Five Leadership Disciplines for the AI Age (rules of sparring)
Delegate speed, retain judgement
Use AI for scanning, filtering, and first drafts. Keep the decision. If a call has reputational, financial, ethical, or people impact, your name belongs on it.
Stay visible
Map where AI is already in the loop. If you don’t know, you’re not leading – you’re guessing.
Challenge fluency
Fluency isn’t truth. Treat confident outputs as claims to test. Ask: What’s missing? What assumptions is this built on? What would validate this elsewhere?
Protect culture under pressure
Model the pause. Reward the person who says, “I know the system says X, but here’s why I think it’s wrong.”
Treat AI as a sparring partner
Expect repetition, optimism bias, and omission. Build checks against them into your workflows. Like any sparring partner, it makes you sharper because you didn’t let your guard down.
The Cultural Work: Safety to Slow Down
The harder part is cultural. How do you build an organisation where people feel safe to slow down when everything else is screaming “move faster”?
- Leaders model the pause. If you never say “hold on, let’s check,” no one else will.
- Ritualise the review. Bake a two-source check or red-team pass into any AI-assisted work above a risk threshold.
- Make decisions visible. Show the assumptions, risks, and sources behind outputs.
- Celebrate the avoided error. Advantage doesn’t just come from fast wins. It comes from the big mistakes you avoided.
A culture that rewards avoiding error is a culture that protects trust and reputation.
If You’re Already in The Speed Trap
- Pause to map reality – where AI is already influencing decisions.
- Reset decision rights – what is draft, what is decision, who owns it.
- Repair trust – admit where speed outran oversight, explain what changes now.
- Retrain the rhythm – build sparring rules into workflows.
- Re-accelerate deliberately – don’t ditch AI, use it better.
From Sparring to Strategy
The risks of rushing and the signs of hesitation point to the same conclusion: most organisations lack a cohesive AI strategy. Not piecemeal pilots, not adoption by accident, but a strategy that embeds AI responsibly into decisions, workflows, governance, and culture.
Used this way, AI is what it should be: a sparring partner that stretches leaders, sharpens ideas, accelerates work – but never replaces human judgement.
In my work as an executive leadership coach and business advisor, I see the same pattern: the leaders who stand out are the ones who pause long enough to spot the signal in the noise, then move forward with clarity. That discipline is where real competitive advantage lives.
The Provocation
If your AI disappeared tomorrow, could your culture still make strong decisions?
If not, who is really in the ring – you, or the machine?