I’ve led transformations before.

At ThyssenKrupp, building risk forecasting systems across a multi-billion euro procurement portfolio. At Apple, managing large teams through product cycles that left no room for hesitation. I understood what it meant to lead through change. I thought I had the pattern down.

AI broke the pattern.

Not because transformation itself is new. It isn’t. But because AI transformation carries a weight that other changes don’t. It’s faster. It’s less predictable. And it reaches into something deeply personal — the question of whether the skills that built your career still matter when a machine can replicate them in seconds.

Over the past year, I’ve written extensively about what AI leadership actually requires. Not the conference-talk version. Not the vendor pitch. The real version — the one that involves exhausted leaders, anxious teams, broken performance metrics, and decisions that have to be made before anyone has enough information.

This guide brings those threads together. It’s everything I’ve learned about leading through AI transformation, organized into a single resource.

It won’t give you certainty. Nobody has that right now.

But it might give you clarity. And in this environment, clarity is worth more.

Why AI leadership is different from every other transformation

I keep hearing people say AI transformation is just like any other change initiative. Digitisation. Lean. Agile. Same playbook, different technology.

They’re wrong.

Here’s what makes AI different.

The pace doesn’t let you stabilise. Traditional transformations came in waves. You’d implement, stabilise, learn, then prepare for the next phase. AI doesn’t work that way. The technology shifts monthly. What felt like progress last quarter feels like table stakes this quarter. There is no stable state to arrive at.

It threatens identity, not just process. When you automated a factory floor, workers worried about their jobs. When you automate analysis, leaders worry about their relevance. AI doesn’t just change what people do — it changes what people believe they’re worth. That’s a fundamentally different kind of disruption.

Nobody has the expertise. In previous transformations, you could hire consultants who’d done it before. You could study case studies. You could learn from others’ mistakes. With AI, the experienced practitioners don’t exist yet. Everyone is figuring this out in real time. Including you. Including me.

The decisions are irreversible before you know if they’re right. AI investments shape your organisation for years. But the information you need to make those investments well doesn’t exist yet. You’re choosing direction at speed, with incomplete data, knowing that course-correcting later will be expensive.

This combination — relentless pace, identity threat, expertise gap, high-stakes ambiguity — is what makes AI leadership its own discipline.

It doesn’t replace what you already know about leadership. It demands more of it.

The decisions that define your AI strategy

Before you can lead others through AI transformation, you need to be clear about what you’re actually deciding.

Most leaders I talk to feel overwhelmed by AI not because there’s one impossible decision, but because there are dozens of interconnected ones arriving simultaneously. Which tools to adopt. Which processes to automate. Which roles to redesign. How much to invest. How fast to move. What to tell the team. What to tell the board.

When everything feels urgent, nothing gets decided well.

What helped me was separating the strategic decisions from the tactical ones. Not every AI choice is equally consequential. Some shape your organisation’s direction for years. Others can be delegated, experimented with, or reversed without damage.

The strategic ones — the ones that define your AI posture as a leader — tend to cluster around a few recurring themes. How transparent you’ll be with your team about what AI means for their roles. Where you draw the line between AI autonomy and human oversight. How you measure success when traditional metrics don’t apply. Whether you lead adoption from the front or enable it from the middle.

These aren’t technical decisions. They’re leadership decisions. And they require the kind of judgment that no AI tool can provide.

I’ve written about these in detail — the seven AI decisions every leader must make in 2026 lays out each one with the trade-offs involved. If you haven’t worked through them explicitly, start there. Getting clear on those choices reduces the ambient decision fog that makes everything else harder.

The burnout crisis nobody’s talking about

Here’s the number that stopped me: 71% of leaders report heightened stress directly related to AI transformation.

Not a minority struggling with change. The majority.

And yet in most organisations, this conversation doesn’t exist. Leaders are supposed to have it figured out. To project confidence. To know the way forward. So they’re exhausted privately while pretending to be fine publicly.

I’ve seen it up close. The leaders who had always been sharp were hesitating. The ones who moved fast were stuck in analysis loops. The ones who thrived on challenge were quietly withdrawing.

When I started paying attention to what was actually happening, I found four distinct sources of exhaustion. Decision fatigue at unprecedented scale — making directional calls weekly on technology that changes monthly. Pace anxiety — the feeling that no matter how fast you move, you’re falling behind. Accountability without clarity — being held responsible for outcomes in a domain nobody fully understands yet. And the competency gap — the quiet erosion of confidence when your hard-won expertise suddenly seems less relevant.

These compound on each other. And they don’t resolve on their own.

The leaders who sustain through AI transformation aren’t the ones who push hardest. They’re the ones who recognise what’s happening to them and build practices that make leadership sustainable over time. Decision boundaries. Recovery time. Peer networks. Reconnection to purpose.

I’ve written a full breakdown of the burnout crisis in AI leadership — the warning signs, the sources, and a practical recovery roadmap. If you’re feeling it, you’re not alone. And it’s not a personal failing. It’s a predictable response to an unprecedented situation.

Emotional intelligence as AI’s counterweight

Here’s what surprised me most about AI adoption.

When we started using AI for analysis, the decisions got harder, not easier. More data. Better insights. Clearer patterns. But the conversations became more complex. The stakeholder dynamics intensified. The human work didn’t decrease — it concentrated.

The World Economic Forum projects that 40% of core job skills will change by 2030. The skills declining are the ones AI does well — data processing, pattern recognition, routine decision-making. The skills rising are the ones AI can’t do at all — empathy, complex communication, leadership through ambiguity.

As machines get better at thinking, humans become more valuable for feeling.

This isn’t a soft insight. It’s a strategic one.

AI creates more decisions, not fewer — and real decisions require human judgment. AI surfaces conflict faster — because clearer data exposes the different assumptions that were always there, and someone has to navigate that. AI changes what teams worry about — relevance, identity, the future of their roles — and those worries don’t respond to technical solutions. They respond to leadership that listens.

The five competencies that matter most now are self-awareness in the face of uncertainty, empathy that scales beyond individual relationships, emotional regulation under continuous change, social awareness across cultural and generational differences, and relationship management in high-stakes environments.

Every one of these is developable. None of them can be automated.

If you want to go deeper on how to build these capabilities — and how to use them when AI reads the data but can’t read the room — I’ve written about emotional intelligence as AI’s counterweight. It includes the specific practices that actually work, not just the theory.

Managing human–AI teams

The meeting request came from an AI.

Not forwarded by someone. Not generated by a rule. An AI agent had spotted a coordination gap and decided the humans needed to talk. That’s when I realised: this is the new normal, and nobody gave us a playbook for it.

76% of executives now say they view AI as a coworker. Not a tool. A coworker.

If AI is part of the team, you need to manage it like part of the team. And that changes everything about how you lead.

The practical challenge is figuring out who does what. Not philosophically — practically. For this specific task, right now: does a human do it, does AI do it, or do they do it together? After making a lot of mistakes, I’ve landed on a few patterns. Routine work with low stakes and stable patterns — let AI run. Anything involving relationships, politics, or judgment in ambiguous situations — humans lead. The tricky zone is the middle, and that’s where most of your management attention needs to go.

Trust is different now too. With humans, trust builds through relationship. With AI, trust is calibration. And most people get it wrong in one of two directions — trusting AI too much and becoming rubber stamps, or trusting too little and creating friction everywhere. What you want is calibrated trust, matched to actual reliability. That takes time, exposure, and honest conversation about where AI performs well and where it doesn’t.

Performance management broke. Old metrics — output volume, speed, decision count — don’t mean anything when AI can outperform any human on all of them. What I’ve started measuring instead is orchestration. How well someone integrates human and AI capabilities. Whether they catch AI errors before they cause damage. Whether they improve the workflow over time. Whether they handle the escalated decisions — the ones AI can’t handle — with good judgment.

I’ve written a complete playbook for managing human–AI teams that covers workflow design, trust calibration, the skill atrophy problem, and how to prevent the division between AI enthusiasts and AI sceptics from fracturing your team.

AI in procurement: Where theory meets practice

Most of what’s written about AI is abstract. Possibilities. Projections. Frameworks.

I work in procurement. Procurement doesn’t do abstract.

This is where AI leadership gets real. Where the tools meet the spend data. Where the negotiation strategy meets the supplier relationship. Where the agentic AI promise meets the messy reality of contracts, compliance, and cross-functional politics.

Two areas are moving fastest.

AI-assisted negotiation is already changing how procurement teams prepare for and execute supplier negotiations. Not by replacing the negotiator — that’s vendor marketing. By transforming what’s possible before, during, and after the conversation. AI can analyse historical spend patterns, model scenarios, identify leverage points, and surface competitive intelligence at a speed and depth that manual analysis never achieved.

But the negotiation itself? That’s still human. The ability to read a supplier’s hesitation. To sense when a concession is coming. To build the relationship that makes difficult conversations possible. AI provides the preparation. Humans provide the judgment. The teams that combine both effectively are winning better outcomes than either could achieve alone.

I’ve explored this in detail in AI and human negotiation — including where the hybrid approach creates the most leverage.

Agentic AI is the next frontier. Autonomous systems that don’t just analyse data but take action — creating purchase orders, communicating with suppliers, flagging exceptions, managing routine sourcing events without human intervention. This is no longer theoretical. It’s happening in procurement organisations right now, and it’s accelerating.

The question isn’t whether agentic AI will change procurement. It’s which roles are most exposed, which skills become more valuable, and how leaders prepare their teams for a function that will look fundamentally different in two years than it does today.

I’ve written about what agentic AI actually means for procurement — from a practitioner inside the function, not an analyst observing it from the outside.

How to start: A 90-day AI leadership roadmap

If you’ve read this far, you might be wondering where to begin. Everything feels connected. Everything feels urgent. The temptation is to do everything at once.

Don’t.

Here’s how I’d sequence it if I were starting from scratch.

Month 1: Audit and decide.

Before you adopt anything, get honest about where you actually are. Not where your board deck says you are. Where you actually are.

Map your team’s current AI usage — formal and informal. You’ll be surprised. People are already using AI tools you don’t know about. That’s not a problem to fix. It’s intelligence to leverage.

Then work through your strategic decisions. Where will you invest? What’s the boundary between AI autonomy and human oversight? How transparent will you be with your team? What’s your risk tolerance for AI errors? Get these answers clear before you start implementing. Reversing strategic direction mid-stream is far more expensive than spending two extra weeks getting clarity up front.

Finally, assess your own state honestly. Are you approaching burnout? Are you trying to be the AI expert when your real job is to be the AI leader? Get your own foundation solid before trying to lead others through this.

Month 2: Pilot and people.

Choose one specific area for a genuine AI pilot. Not a showcase. Not a proof of concept designed to impress the board. A real workflow where AI can create real value for real people on your team.

Keep it contained. One process. One team. Clear success criteria. The goal isn’t to transform the organisation in month two. It’s to learn — what works, what breaks, where the human–AI interface creates friction, what your team needs to succeed.

Simultaneously, start the people work. Have the honest conversations about what AI means for roles. Not the reassuring corporate script. The real conversation. Some tasks will be automated. Some roles will change. Nobody knows exactly how yet. But pretending it won’t happen erodes trust faster than any disruption.

This is where emotional intelligence earns its keep. The anxiety is real. The identity questions are real. Your job is to hold space for both while maintaining forward movement.

Month 3: Scale what works, stop what doesn’t.

Take what you learned from the pilot and decide: scale, modify, or stop.

Most pilots teach you something different from what you expected. That’s the point. The leaders who succeed aren’t the ones whose pilots go perfectly. They’re the ones who learn honestly from what happens and adjust.

If the pilot worked, expand deliberately. Document the workflow. Train the team. Establish the monitoring mechanisms. Build the feedback loops that let the system improve over time.

If it didn’t work, say so. Openly. The worst thing you can do is quietly abandon a failed pilot and hope nobody notices. Your team notices. They’re watching how you handle failure. Show them that honest learning is more valued than artificial success.

By the end of month three, you should have clarity on your strategic direction, one proven AI workflow, a team that trusts you to lead this honestly, and a realistic picture of what comes next.

That’s not everything. But it’s enough to build on.

What I’ve learned leading through AI transformation

I want to end with something honest.

I don’t have this figured out. Not fully. I’m learning as I go — just like every other leader navigating this moment. What I’ve shared in this guide and in the articles it connects to isn’t a proven formula. It’s what I’ve learned so far, from doing this work in real organisations with real stakes.

Some of what I’ve learned has come from getting it right. Building risk forecasting systems that generated real savings. Leading teams that found their footing in uncertainty. Making decisions at 70% certainty that turned out well.

Most of what I’ve learned has come from getting it wrong. Trusting AI outputs I should have questioned. Pushing teams faster than they could absorb. Trying to be the expert when I should have been the facilitator. Ignoring my own burnout signals until they became impossible to ignore.

Here’s what I keep coming back to.

AI is powerful. It’s transformative. It will reshape how organisations work in ways we can’t fully predict yet.

But it doesn’t lead. It doesn’t build trust. It doesn’t navigate the messy, human, emotional reality of change. It doesn’t sit with someone who’s afraid for their career and help them find a path forward. It doesn’t make the judgment calls that require wisdom, not data.

That’s your job. That’s always been your job.

The tools are different now. The pace is different. The uncertainty is higher than most of us have experienced.

But the core of leadership hasn’t changed. People need direction. They need honesty. They need someone who gives a damn about their growth and wellbeing. They need someone willing to make hard decisions and own the consequences.

AI transformation tests all of that. It tests it harder than anything I’ve led through before.

But the leaders who come through this — the ones who sustain, who adapt, who keep their teams together and moving forward — they won’t be the ones who understood the technology best.

They’ll be the ones who understood the people best.

That’s what I believe. And everything I write comes from that conviction.
