Artificial intelligence is now part of how leaders think and decide – not just how they execute. Most are navigating that individually, without a shared framework, which means the choices are largely personal.
The question worth asking isn’t whether to use it. It’s whether those choices are being made deliberately.
The transparency question is the wrong question
There’s a lot of conversation right now about being transparent with teams about AI use. The instinct is understandable. But it tends to obscure something more important.
Transparency implies a shared, visible practice. The reality is messier. Leaders are using these tools differently, in different moments, for different purposes. There’s no clean way to make that visible across an organisation – and arguably, that’s not where the real accountability sits anyway.
The more useful question is whether leaders are being honest with themselves about what they’re handing over to AI, and what that means for the decisions that carry their name.
What happens to the time?
One of AI’s genuine gifts to leaders is time. Tasks that used to take an hour can take minutes. First drafts, research, synthesis – all faster.
But time returned doesn’t automatically become time well used.
The default is to fill it with more output. More tasks completed, more messages sent. Which has its place. But it isn’t the same as reinvesting that time in the things that genuinely require a human leader – presence, judgment, the conversations that can’t be prepared for with a prompt.
What leaders choose to do with the space AI creates is itself a leadership decision. One worth making consciously.
Where does judgment still live?
AI is good at pattern recognition, synthesis, and generating options. It is not capable of reading a room, holding the weight of a genuinely difficult decision, or understanding what’s at stake for the specific people involved.
The leaders who navigate this well tend to know where their judgment adds something that can’t be replicated – and they protect that capacity deliberately. Not because they’re resistant to the tools, but because they’re clear about what they’re there for.
It’s a question worth sitting with: where, specifically, does your human perspective change the outcome?
The thing that quietly disappears
Here’s the part that gets less attention…
Constant AI assistance can erode something that doesn’t announce its absence: reflection.
The slow processing of a hard problem, the pause before a decision where something important surfaces – these are not inefficiencies. They’re often where the best thinking happens.
The pauses get crowded out by the next prompt or the next output.
And when leaders stop exercising their own judgment regularly, they gradually stop trusting it. That erosion of confidence is quiet, and it compounds. The effect isn’t just dependence on a tool – it’s arriving at important decisions feeling less sure of yourself than you once did.
Guardrails as a form of clarity
The leaders most confident about AI aren’t the ones with the most rules around it. They’re the ones who are clear about their own practice – when they use it to think with, and when they deliberately don’t.
There’s a difference between reaching for AI because it’s useful, and reaching for it because sitting with an unformed problem feels uncomfortable. The former is a good tool well used. The latter is where the gradual handover begins.
What that practice looks like will be different for every leader. But it’s worth identifying consciously, rather than letting habit decide.
The question AI can’t answer
AI will keep developing. Its presence in leadership contexts will deepen.
The leaders who navigate that well will be the ones who stay clear on something harder to measure: which decisions are theirs, what it means to own them, and what kind of thinking they want to be known for.
That’s not a question AI can help with.