In the last edition of The Elevatexec Brief, I wrote about AI as a sparring partner – a tool that sharpens your thinking if you keep your guard up. However, there’s a deeper insight emerging. AI isn’t just helping leaders think better. It’s showing them how they already think.
Your AI strategy isn’t a technology problem. It’s a mirror.
The Uncomfortable Truth
Leaders who struggle with AI aren’t struggling with the software. They’re struggling with the same weaknesses in strategic thinking they’ve always had, and AI just makes those weaknesses visible and costly.
The leader who disappears down AI rabbit holes was already prone to analysis paralysis. The one who blindly trusts outputs was already poor at challenging assumptions. The executive who can’t explain AI decisions to their board couldn’t explain their other decisions either.
AI magnifies whatever patterns of thinking you already have. The real question is: what is it magnifying in your leadership?
The Real Blind Spots
Many leaders lack a clear understanding of what AI actually is and how it’s being used in their organisations. This isn’t about coding knowledge; it’s about awareness and judgement.
Too often, they see AI as a magic productivity button rather than a system with clear limits. They believe it “thinks” rather than processes information differently from humans. They treat it as plug-and-play rather than something that needs human challenge at the right points.
These misconceptions lead to risky behaviour – rushing ahead without weighing consequences, failing to spot how AI is already embedded across the business, treating it purely as a cost-cutting lever, or simply handing responsibility to IT or other functions without clear leadership direction.
The Delegation Trap
When leaders don’t understand AI, they rely on someone else’s interpretation, someone who may not fully understand it themselves, or who doesn’t push back on flawed outputs. As a result, unsound decisions follow.
Worse still, AI is often rolled out across a business without real oversight. When things go wrong – and they will – accountability falls back to leaders who never understood what they were accountable for.
I’ve seen and heard this from many peers: a leader believes what they’re told about AI, makes decisions based on unchecked outputs, and later discovers the foundations were weak.
AI doesn’t fix leadership gaps. It amplifies them.
The Strategic Opportunity
There is, however, an opportunity. While many leaders either blindly trust AI or allow it to spread without checks, some take a different path.
These leaders know that real advantage comes from slowing down to go faster. They run AI and human processes in parallel, checking information, outputs, and assumptions until they’re satisfied. Only then do they adapt ways of working.
That isn’t inefficiency. It’s discipline under pressure. It’s the same principle I described in AI as a Sparring Partner: use AI to push and stretch your thinking but always keep your own guard up. Running the two in parallel isn’t duplication, it’s the discipline that protects judgement.
And research backs this. A study of over 6,000 UK micro-businesses found that first movers often gained measurable innovation benefits from adopting AI early (Springer, 2023). In manufacturing, adoption follows a “J-curve”: productivity dips at first but then overtakes non-adopters, with evidence from both the U.S. Census Bureau (2025) and MIT-backed studies. Commercial teams have reported similar results – two-thirds seeing ROI in the first year, and some within just three months (ITPro, 2025).
Competitors who rush ahead may look faster in the short term, but the errors and correction costs are far higher.
The lesson isn’t that speed on its own creates success. It’s that discipline turns adoption, whether early or late, into real advantage. Without it, risks mount. With it, gains multiply.
The Organisational Design Error
Most organisations are making a critical structural error. They put technical experts – CTOs, data scientists, risk officers – in charge of AI strategy. These people may understand the tools but can miss the wider business picture.
What’s needed is an “AI business translator” – someone with enough business knowledge and influence to question the technical teams and keep AI tied to strategy.
The Right Profile: This person must back disciplined adoption. They need authority to push back on both reckless roll-out and paralysing caution. Most of all, they need direct access to the boardroom so that AI decisions stay aligned with strategy.
The Structural Fix: This might be a dedicated role or part of an existing senior position, but it cannot simply be handed off to IT.
The Culture Question
Structure only works with the right culture. Leaders must create workplaces where people feel safe to question AI outputs, admit when they don’t understand them, and pause without being seen as inefficient.
That means leaders who refuse to be blinded by jargon, who model the discipline of asking “what’s missing here?”, and who reward the avoided error as much as the quick win.
The cultural work is harder than the technical work, but it’s where lasting advantage lies.
In my experience leading major transformations, the organisations that thrive aren’t necessarily those with the newest tools. They’re the ones with the discipline to use any tool well.
And that’s exactly what I see in my work with senior leaders.
Whether it’s through coaching or broader strategic conversations, when a recurring outcome surfaces – a frustrating pattern, a stalled initiative, an unexpected result – the real insight usually lies beneath.
We follow the thread downward: into the beliefs, behaviours, structural dynamics, or decision-making assumptions that often go unexamined.
AI is fast becoming one of the clearest mirrors for that kind of work.
The Strategic Test
While others argue about how fast to adopt, the best leaders are using AI to strengthen leadership itself. They’re asking not just “how do we use this tool?” but “what does our relationship with this tool say about how we think and decide?”
This shifts AI from a technology project to a test of leadership. It builds the muscles for thinking under uncertainty, checking assumptions, and keeping judgement intact while working with powerful tools.
Monday Morning Questions
Two immediate questions for leaders:
First: Do you know who is really making AI-related decisions in your organisation, and how those decisions are being checked?
Second: Do you have someone with the authority and awareness to ensure AI serves your goals rather than setting them?
What You Can Do Now
If you want to lead well in the age of AI, here are three immediate actions I recommend to senior leaders:
- Map where AI is already embedded in your organisation – from marketing tools to recruitment software. Ask: who’s making decisions, and are those decisions being checked?
- Run a parallel test: take one live business issue and compare how your team solves it with and without AI input. Look for where judgement strengthens or weakens.
- Create space for challenge: establish a forum (or make time in existing meetings) where AI use can be questioned safely, especially by those without technical backgrounds.
These aren’t tech tasks. These are good leadership practices.
The Mirror’s Edge
AI doesn’t create strategic thinkers. It shows who already is one.
The leaders who come out stronger won’t be those who moved fastest or slowest. They’ll be those who used AI as a mirror to see their thinking clearly, then did the hard work to improve what they saw.
The evidence shows both truths: reckless adoption leads to blind spots, but disciplined adoption – sometimes even early adoption – can drive real innovation. The dividing line isn’t speed. It’s the quality of leadership.
Your AI strategy reveals your leadership strategy. The question isn’t whether you’re ready for AI.
The question is: are you ready for what AI reveals about you?