The meeting request came from an AI.
Not forwarded by someone. Not generated by a rule I’d set up. An AI agent had looked at project dependencies, spotted a coordination gap, and decided the humans needed to talk.
I stared at my calendar for a moment.
Then I accepted the meeting.
That’s when it hit me. This is the new normal. And nobody gave us a playbook for it.
76% of executives now say they view AI as a coworker. Not a tool. A coworker.
I keep thinking about that number.
Three out of four leaders have mentally crossed a line. The AI isn’t something they use anymore. It’s something they work with.
By 2028, more than half of business functions will involve AI agents in their daily operations. Not occasionally. Every day.
So here’s my question.
If AI is now part of the team, why are we still managing like it isn’t?
What a large team taught me
At Apple, I managed a large team. That was complicated enough.
Different functions. Different geographies. Different personalities. Someone’s always frustrated. Someone’s always misaligned. Communication breaks down in ways you don’t see until it’s already a problem.
But at least everyone was human.
I understood the failure modes. I knew what motivated people, what scared them, what made them shut down or step up.
Now I’m managing teams where some of the “members” don’t get tired, don’t have feelings, don’t need motivation—but also don’t understand context, can’t read a room, and sometimes confidently produce garbage.
It’s a different game.
The weird part is nobody talks about it. We’re all figuring this out in real time, pretending we know what we’re doing, hoping nobody notices we’re making it up as we go.
So let me just say it: I don’t have this figured out.
But I’ve learned some things the hard way. Maybe they’ll save you some pain.
The job changed. The title didn’t.
Same responsibilities on paper. Completely different in practice.
I used to spend most of my management energy on people. Aligning them. Developing them. Sorting out the conflicts that inevitably arise when smart people disagree.
Now I spend a significant chunk of my energy on something else entirely.
Figuring out where the AI fits. Designing how work flows between human and machine. Catching problems at the interface, the kind neither human management nor technical oversight was designed to see.
Nobody added hours to my day for this.
I just have a new job layered on top of the old one.
Humans still need what they always needed
Clear direction. Meaningful work. Development. Someone who gives a damn about their growth.
That hasn’t changed.
But now there are AI components that need something else entirely.
Not motivation—monitoring.
Not relationships—interfaces.
Not feedback—calibration.
And the place where human work meets AI work?
That’s where everything breaks.
Work falls through the cracks because nobody owns the handoff. Accountability gets murky because the AI “decided” something but a human is responsible. Quality checks designed for human mistakes miss AI mistakes entirely—because AI errors look polished and confident even when they’re completely wrong.
I didn’t learn any of this in management training.
I learned it by screwing up.
Who does what
This is the practical heart of it.
Not philosophically. Practically. This specific task, right now—does a human do it, does AI do it, or do they do it together?
There’s no universal answer. It depends on the task, the stakes, how stable the context is, how good your AI is at that particular thing.
But I’ve landed on a few patterns that work.
For routine stuff where the stakes are low and the patterns are stable: let AI run.
Don’t add human review just because it feels responsible. That’s not responsibility—it’s friction. Save your team’s attention for where it actually matters.
I learned this the hard way. Early on, I had humans reviewing AI outputs that didn’t need review. We were spending hours every week on oversight that caught nothing, ever. Meanwhile, the things that actually needed human attention weren’t getting enough.
The courage here is letting go.
For anything involving relationships, politics, judgment in ambiguous situations: humans lead.
AI can prepare the analysis. AI can generate options. But the human decides, and the human owns it.
I’m strict about this. Not because AI can’t contribute—it can, enormously. But because accountability matters. When something goes wrong, you need a human who owns it.
“The AI decided” is never an acceptable answer.
The tricky zone is the middle.
Tasks that are usually routine but occasionally aren’t. Tasks where AI is mostly right but sometimes catastrophically wrong. Tasks where the handoff between human and AI work loses something important.
At ThyssenKrupp, building procurement systems taught me how much detail matters here.
The difference between “use judgment” and “escalate when these specific conditions are met” was the difference between systems that worked and systems that created chaos.
You need clear triggers.
When AI confidence is below this threshold. When the counterparty is new. When the value exceeds this amount.
Specific enough that people can apply them consistently.
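
To make that concrete, here's a minimal sketch of what codified triggers can look like. Everything in it is an assumption for illustration: the thresholds, the field names, the `should_escalate` helper. Yours will differ. The point is that the conditions are explicit enough to apply the same way twice.

```python
from dataclasses import dataclass

# Illustrative thresholds. Tune these to your own risk tolerance.
MIN_CONFIDENCE = 0.85    # escalate when the model is less sure than this
MAX_AUTO_VALUE = 10_000  # escalate decisions above this amount

@dataclass
class TaskContext:
    ai_confidence: float       # model-reported confidence, 0.0 to 1.0
    counterparty_is_new: bool
    value: float               # monetary value at stake

def should_escalate(task: TaskContext) -> bool:
    """True when a human must review before the AI's output ships."""
    return (
        task.ai_confidence < MIN_CONFIDENCE
        or task.counterparty_is_new
        or task.value > MAX_AUTO_VALUE
    )

# A confident call with a known counterparty, but over the value cap.
print(should_escalate(TaskContext(0.95, False, 25_000)))  # True
```

Notice what isn't in there: "use judgment." Judgment lives in the escalated decision, not in deciding whether to escalate.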
That’s where I spend most of my management attention now. Designing those interfaces. Testing whether they actually work. Fixing them when they don’t.
Trust is weird now
With humans, trust builds through relationship.
You work with someone. You see how they handle pressure. You learn their strengths and blind spots. Trust develops—or doesn’t.
With AI, trust is different.
It’s not relationship. It’s calibration.
And most people get it wrong.
The ones who trust AI too much stop thinking.
They see the AI’s output and assume it’s right. They become rubber stamps. I’ve watched this happen—someone presents an AI analysis, and the room stops questioning. Nobody asks what the AI might have missed.
That’s dangerous.
AI can be confidently wrong. Spectacularly wrong. And if everyone’s defaulted to “AI said so,” nobody catches it until real damage is done.
The ones who trust too little create friction everywhere.
They review everything. They slow down processes that should be fast. They burn attention on things that don’t need it. Eventually, they become bottlenecks—or they burn out from trying to verify work that the machine handles perfectly well.
What you want is calibrated trust.
Appropriate skepticism matched to actual AI reliability. Trust it where it has earned trust. Question it where it hasn't.
That calibration doesn’t happen automatically.
People have to learn where AI performs well and where it doesn’t. They have to see it succeed and see it fail. They have to develop instincts that take time to build.
Your job is to create the conditions for that learning.
Start with low stakes. Let people see AI perform before you ask them to depend on it. Be transparent about limitations—not vague “AI has limitations” but specific “this particular system struggles with these particular situations.”
And when AI screws up—which it will—don’t hide it. Talk about it openly.
The goal isn’t maximum trust.
It’s right-sized trust. Matched to reality.
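
One way to ground that calibration, sketched below under assumed numbers: keep a running tally of where the AI was right and wrong per task type, and let the observed reliability decide how much review each category gets. The task categories, the sample-size cutoff, and the review tiers are all placeholders for whatever your team actually tracks.

```python
from collections import defaultdict

# Running tally of reviewed AI outcomes per task type.
outcomes = defaultdict(lambda: {"right": 0, "wrong": 0})

def record(task_type: str, was_correct: bool) -> None:
    """Log one reviewed AI output, so trust tracks evidence."""
    outcomes[task_type]["right" if was_correct else "wrong"] += 1

def review_level(task_type: str) -> str:
    """Map observed reliability to a review tier (tiers are illustrative)."""
    stats = outcomes[task_type]
    total = stats["right"] + stats["wrong"]
    if total < 20:  # not enough evidence yet, so stay skeptical
        return "review everything"
    accuracy = stats["right"] / total
    if accuracy >= 0.98:
        return "spot-check"
    if accuracy >= 0.90:
        return "review high-stakes only"
    return "review everything"

record("invoice_matching", True)
print(review_level("invoice_matching"))  # "review everything": too little evidence
```

The specific numbers don't matter. What matters is that trust becomes a record the team can inspect and argue about, instead of a feeling that drifts toward whoever speaks loudest.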
What AI did to human relationships
I didn’t anticipate this one.
When AI does work that humans used to do, questions come up. Questions that feel petty to ask but genuinely matter.
Who gets credit when the AI did the analysis?
Who’s responsible when the AI gets it wrong?
If AI makes me more productive, do I deserve the same recognition as someone doing everything manually?
If AI can do my job, what’s my job now?
These questions mess with people. They’re not just practical—they’re identity questions.
What’s my value if a machine can do what I spent years learning to do?
I’ve watched teams fracture over this.
Early adopters get impatient with people who are slower to adapt. Skeptics dig in harder when they feel dismissed. Camps form. Energy that should go toward customers goes toward internal politics instead.
Once that division takes hold, it’s hard to undo.
Much better to address it before it solidifies.
Performance management broke
I used to think performance management was hard.
Then I tried to figure out how to evaluate someone whose AI does half the work.
Old metrics don’t work.
“How much did you produce?” doesn’t mean anything when AI can outproduce any human.
“How fast did you finish?” measures AI speed, not human skill.
“How many decisions did you make?” conflates rubber-stamping with actual judgment.
I tried the old approaches first. They failed. I was either rewarding people for AI performance they had nothing to do with, or punishing people who used AI wisely but produced less visible “human” output.
What I’ve started measuring instead: How well do they orchestrate?
Do they know when to use AI and when not to? Do they catch AI errors before they cause damage? Do they improve the human-AI workflow over time? Do they handle the escalated decisions—the ones AI can’t handle—with good judgment?
Some of my best performers now aren’t the people who could do everything manually.
They’re the people who’ve mastered the integration.
They know when to let AI run and when to step in. They’ve developed judgment about the interface that creates enormous leverage.
That’s a new skill. It wasn’t valuable five years ago.
It’s essential now.
The development conversation changed too
It used to be: What skills do you need to do your job better?
Now it’s something harder.
What skills do you need as AI changes what your job even is?
How do you add value that you couldn’t add a year ago?
What are you learning to stay relevant as capabilities shift?
These conversations are more uncertain than the old ones. More uncomfortable. But more important.
I’ve started asking people directly: Where do you think AI threatens your role? Where do you think it enhances it? What are you doing about both?
The people who can answer clearly are the ones who’ll thrive.
The ones who can’t—or won’t—are the ones I worry about.
The skill atrophy problem
Here’s a danger I keep watching for.
When AI handles something, humans do it less. And skills you don’t use fade.
Then AI fails—which it will—and the humans who should be the backup can’t actually back anything up. They’ve forgotten how. Or they never really learned in the first place.
I saw this dynamic at an airline years ago. Pilots became so reliant on automation that their manual flying skills degraded. When the automation failed, they couldn’t recover.
Same dynamic applies everywhere AI is taking over.
If you let human capability atrophy completely, you’ve got no fallback when the machine breaks.
My solution, imperfect as it is: deliberate rotation.
Sometimes humans do the work manually even when AI could do it faster. Not often—efficiency matters—but enough to keep the skills alive. Enough that people can actually step in when needed.
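
If it helps to see the mechanism, here's one hypothetical way to run the rotation: deterministically route a fixed slice of routine tasks to humans, so the sample is auditable rather than ad hoc. The rate and the hashing trick are assumptions; any consistent sampling scheme works.

```python
import hashlib

ROTATION_RATE = 10  # assumed: one in ten routine tasks stays manual

def route(task_id: str) -> str:
    """Send most routine work to AI, but keep a deterministic,
    auditable slice with humans so backup skills stay alive."""
    digest = int(hashlib.sha256(task_id.encode()).hexdigest(), 16)
    return "human_manual" if digest % ROTATION_RATE == 0 else "ai"

print(route("invoice-2024-0047"))  # the same task always routes the same way
```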
It feels wasteful in the moment.
It’s insurance for when things go wrong.
The division that worries me most
Not humans versus AI—that’s too abstract.
The real division is between people who see AI as opportunity and people who see it as threat. Between those whose work gets enhanced and those whose work gets displaced. Between early adopters who can’t understand the hesitation and skeptics who can’t understand the enthusiasm.
If you let that division harden, your team stops functioning.
The energy goes into protecting territory instead of creating value.
I’ve seen it happen. One group racing ahead with AI, frustrated by colleagues who seem to be dragging their feet. Another group feeling left behind, dismissed, increasingly defensive. Meetings become tense. Collaboration becomes transactional.
The team splinters into factions.
The only way through is honesty.
Yes, AI changes the value of different skills. Yes, some roles are more affected than others. Yes, this is genuinely difficult for people whose strengths are becoming less central.
That acknowledgment doesn’t fix anything by itself.
But it opens conversation. And real conversation is the only path through.
I’ve found the most important conversation is often not about AI at all. It’s about identity.
What does it mean for someone if a machine can do what they’ve spent years mastering?
That question deserves respect, not dismissal.
Distribute the benefits
If AI benefits accrue only to some team members while costs hit others, resentment is inevitable.
If AI makes some people’s lives dramatically easier, can those gains fund development for people whose roles are shifting?
If AI creates new opportunities, can the people most affected get first access?
This isn’t fairness as abstraction.
It’s practical.
People who see their colleagues thriving while they’re struggling will not give you their best work. They’ll check out. Or they’ll fight you. Or they’ll leave.
None of those serves the team.
They’re watching you
Your team watches how you engage with AI. Carefully. More than you realize.
If you’re skeptical, they’ll be skeptical. If you treat AI as a threat, they’ll feel threatened. If you secretly believe it’s going to make some of them obsolete, they’ll sense it—even if you never say it out loud.
But if you can demonstrate that human and AI capabilities combine in ways that make everyone more valuable—that’s something they can believe in.
Not because you told them.
Because they saw you living it.
I try to be explicit about my own AI use with teams I work with.
Here’s where I use it. Here’s what it does well. Here’s where I still need to apply judgment. Here’s how I’m learning and adapting.
That transparency gives people permission to do the same. To experiment. To fail. To figure out their own integration without pretending they’ve already got it figured out.
What the past taught me
Leading cross-functional teams across multiple countries taught me something I keep coming back to.
Those teams were complicated. Different functions with different priorities. Different cultures with different working styles. Everyone convinced their perspective was the right one.
When we disagreed—and we always did—the way through was never trying to get everyone to agree on methods. That was impossible.
The way through was shared purpose.
Getting clear on what we were actually trying to accomplish together. Then figuring out how to get there using everyone’s capabilities.
Same principle applies to human-AI teams. Maybe more so.
What unites them is purpose. Outcomes they’re all contributing to. Value that neither humans nor AI could create alone, but together becomes possible.
When things get hard—and they will—return to that purpose.
Not “AI should do this” or “humans should do that.”
Just: “We’re trying to accomplish this. How do we use every capability we have to make it happen?”
That reframe doesn’t solve everything.
But it shifts the conversation from competition to collaboration. From “whose territory is this?” to “how do we win together?”
Communication breaks differently now
With all-human teams, communication problems usually come down to misunderstanding or missing context.
Someone didn’t know something. Someone assumed something incorrectly. Fix the information flow, fix the problem.
With human-AI teams, communication breaks in different ways.
Handoffs between human and AI work lose information that matters. The AI doesn’t know context the human assumed it would. The human doesn’t know limitations the AI couldn’t express.
Nobody owns the gap.
Status gets murky. Where is this work? Is AI still processing, or is it stuck? Did the human review this, or just approve it?
Feedback doesn’t flow. AI needs calibration to improve, but humans don’t know how to provide it. Humans need to understand AI behavior, but AI can’t explain itself clearly.
I don’t have perfect solutions for any of this.
But I’ve learned to invest way more in workflow design than I ever did before. Explicit handoffs. Clear status visibility. Structured feedback mechanisms.
The stuff that feels bureaucratic with all-human teams becomes essential when part of your team doesn’t understand ambiguity.
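
For the "explicit handoffs" piece, here's a minimal sketch of what writing the boundary down can look like. The fields are assumptions, not a standard. The point is that ownership, status, and the context that crossed the boundary exist on paper instead of in someone's head.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    AI_PROCESSING = "ai_processing"
    AWAITING_HUMAN_REVIEW = "awaiting_human_review"
    HUMAN_APPROVED = "human_approved"
    ESCALATED = "escalated"

@dataclass
class Handoff:
    """One explicit human-AI handoff: who owns it, where it stands,
    and what context actually crossed the boundary."""
    task_id: str
    owner: str                      # a named human is always accountable
    status: Status
    context_given_to_ai: str        # what the AI was actually told
    known_limitations: str          # what the AI could not account for
    reviewed_by: str | None = None  # separates reviewed from rubber-stamped
    updated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Every murky status question from above (where is this work, did anyone actually review it) becomes a field you can query instead of a question you ask in a meeting.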
Orchestration is the skill now
I’ve started thinking about orchestration as the core leadership skill.
Not expertise in any single domain. Not technical capability. Not even the traditional leadership stuff, though that still matters.
Orchestration.
The ability to see a complex system of human and AI capabilities, understand what each can do, and design how they work together.
This is what I mean when I write about building leaders, not just managing tasks. The leaders who thrive now are the ones who can conduct this increasingly complex orchestra.
It requires influence without authority—because you’re coordinating capabilities you don’t control.
It requires making the decisions about AI that shape how your team operates.
But mostly it requires a shift in how you see your job.
Less “I manage people.”
More “I orchestrate capabilities.”
What I want you to take from this
The hybrid team isn’t coming. It’s here.
You’re already managing one, whether you’ve named it that way or not.
The question isn’t whether to adapt.
It’s whether to adapt deliberately or let circumstances dictate.
Deliberate adaptation means:
Understanding what your human and AI team members each do well.
Designing workflows that combine them effectively.
Building trust that’s calibrated to reality.
Keeping your humans capable even as AI takes on more of the work.
Preventing divisions that undermine everything.
That’s a lot. I know.
But here’s the thing.
The skills that made you good at leading humans? They don’t disappear.
They matter more now, not less.
The human elements of leadership—judgment, relationship, purpose, care—are what hold hybrid teams together.
AI brings new capabilities.
You bring what makes a team actually work.
That hasn’t changed.
And I don’t think it will.
Where this connects
This article is part of a comprehensive guide to AI leadership in 2026 — covering the decisions, the people challenges, and how to build teams where humans and AI work together.
If the pressure of managing through AI transformation is wearing you down, I’ve written about leadership burnout in the AI era — and why 71% of leaders are feeling it. And the human skills that hold hybrid teams together? That’s emotional intelligence as AI’s counterweight — the capability that appreciates while AI capabilities depreciate.