Hire and Develop the Best: Leading Hybrid Human-AI Teams

Leadership · AI Strategy · Amazon · Human-AI Collaboration · Claude · MCP

This month I onboarded a new team member. I thought carefully about what context they'd need to succeed - our goals, our working style, the projects in flight, where to find key documents. I made time to explain the "why" behind our priorities, not just the "what." I shared examples of good work and flagged common pitfalls.

That new team member was Claude, working alongside me in Cowork mode.

It struck me afterward: I was instinctively applying the same leadership practices I'd use with any new hire. And it worked. The more context I provided - the clearer I was about what "good" looks like - the better the collaboration became. The principle held: developing your team members makes them more effective.

This Amazon leadership principle states: "Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others."

In the age of AI, this principle is expanding in unexpected ways. Leaders now need to develop both human and AI team members - and this requires holding two seemingly contradictory ideas at once.

When I work with leaders on AI strategy, I often share two pieces of guidance that seem to contradict each other:

First: Think of AI as a team member. This metaphor is genuinely useful. You need to "onboard" AI tools - give them context about your goals, your standards, your way of working. You need to understand their strengths and weaknesses, just as you would with any employee. You need to provide clear goals and feedback. In Issue #14, we explored Anthropic's "soul document" for Claude - essentially a comprehensive onboarding guide that shapes how Claude approaches work, makes decisions, and handles tradeoffs. This week, by the way, Anthropic published an updated Claude "Constitution" document - which even speaks of seeing AI as "a brilliant friend". The parallel to how we develop human team members is striking.

Second: Remember that AI is NOT human. AI has capabilities humans don't - processing vast information instantly, working without fatigue, patience across thousands of interactions. But AI also lacks what humans have - genuine understanding, emotional intelligence, the ability to know what it doesn't know, wisdom born from lived experience. AI doesn't have career aspirations you need to nurture. It doesn't need motivation or recognition. It won't grow resentful if underutilized or anxious if overchallenged. At least not yet...

Holding both ends of this stick is hard. Lean too far into the "team member" metaphor, and you'll anthropomorphize AI in ways that lead to poor decisions - expecting judgment it can't provide, or feeling betrayed when it "hallucinates" as if it chose to deceive you. Lean too far into "it's just a tool," and you'll underinvest in the context and clarity that makes AI genuinely effective - treating it like a vending machine when it could be a collaborator.

The leaders who get this right navigate the paradox rather than trying to resolve it. They apply human leadership principles where they work, while staying clear-eyed about where the metaphor breaks down.

Perhaps it's time to evolve the principle itself. Here's how I'd articulate "Hire and Develop the Best" for the age of hybrid human-AI teams:

"Leaders raise the performance bar with every hire, promotion, and AI collaboration. They recognize exceptional talent - human and artificial - and deploy each where they can contribute most. Leaders develop their people, taking seriously their role in coaching humans and guiding AI to excel. They design teams where human judgment and AI capability complement each other."

What does this look like in practice?

Just as you wouldn't throw a new hire into a complex project without background, AI teammates perform dramatically better with proper context. This means: clear goals, relevant examples, access to the right information, understanding of your standards and preferences.
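As a sketch, this "onboarding" can be as simple as a reusable context preamble you prepend to every significant request - the same briefing you'd give a new hire, written down once. The function name and the example contents below are hypothetical, not a prescribed format:

```python
# Hypothetical "onboarding" preamble for an AI teammate: goals, standards,
# examples, and pitfalls, assembled into one reusable context block
# (e.g., to use as a system prompt).

def build_onboarding_context(goals, standards, examples, pitfalls):
    """Assemble a context block to prepend to requests."""
    sections = [
        ("Our goals", goals),
        ("What good looks like", standards),
        ("Examples of strong work", examples),
        ("Common pitfalls to avoid", pitfalls),
    ]
    lines = []
    for title, items in sections:
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

context = build_onboarding_context(
    goals=["Ship the Q3 customer-onboarding revamp"],
    standards=["Concise memos; recommendations backed by data"],
    examples=["Follow the launch-memo format from last quarter"],
    pitfalls=["Don't invent metrics we don't track"],
)
```

The point isn't the code - it's that the context becomes an explicit, reviewable artifact you can improve over time, just like an onboarding doc for people.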

The Model Context Protocol (MCP) - which we covered in Issue #12 - exists precisely because AI agents need shared context to collaborate effectively. Without it, they operate in silos, forget previous work, and can't coordinate across systems. The investment you make in providing context is the investment that makes AI genuinely useful rather than generically capable.
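Concretely, "providing context" via MCP often means declaring which servers - and therefore which information sources - an AI assistant can reach. A minimal sketch, in the style of a Claude Desktop configuration (the filesystem path here is a placeholder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/team-docs"
      ]
    }
  }
}
```

A few lines of configuration, but they decide whether the AI works from your team's actual documents or from generic knowledge alone - the difference between "genuinely useful" and "generically capable."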

Good leaders don't micromanage, but they do establish clear expectations. With AI teammates, this means being explicit about what decisions they can make autonomously versus what requires human input.

The key skill here is calibration: giving enough autonomy for the AI to be genuinely useful while maintaining appropriate oversight. Too little autonomy, and you're micromanaging a very capable collaborator. Too much, and you lose the human judgment that should guide consequential decisions.
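One way to make that calibration concrete is an explicit, reviewable policy for which actions an AI agent may take on its own. The action names below are illustrative, and the safe default is the important part:

```python
# Illustrative autonomy policy: an explicit allow-list for autonomous
# actions, with everything else routed to a human.
AUTONOMOUS_ACTIONS = {"summarize_document", "draft_reply", "search_docs"}
REQUIRES_HUMAN = {"send_email", "modify_record", "publish_post"}

def decide(action: str) -> str:
    """Return 'execute' for allow-listed actions, 'ask_human' otherwise."""
    if action in AUTONOMOUS_ACTIONS:
        return "execute"
    # Known-consequential and unrecognized actions both default to oversight.
    return "ask_human"
```

Writing the policy down forces the calibration conversation: which decisions are reversible enough to delegate, and which are consequential enough to keep a human in the loop.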

Design for augmentation, not replacement

This is where the "not human" recognition matters most. Effective hybrid team leadership means designing workflows that play to AI strengths - processing volume, consistency, tireless execution - while ensuring humans handle what AI can't: novel situations, judgment calls requiring wisdom, emotional intelligence, and knowing when something doesn't feel right even if the data looks fine.

Research shows that AI can coordinate tasks and manage workflows effectively, but struggles with motivation, inspiration, and emotional intelligence. This clarifies what human leaders uniquely contribute to hybrid teams: setting vision, building culture, developing people, navigating ambiguity, and making judgment calls that require wisdom rather than just intelligence.

A Harvard study found something interesting. After accounting for measurement error, the correlation between leader effectiveness with human teammates and leader effectiveness with AI teammates is 0.81. Good leaders are good leaders. The fundamentals transfer - even as the nature of the team evolves.

Your role as a leader of hybrid teams isn't diminished by AI. It's clarified. The administrative and coordination tasks that consumed leadership bandwidth can increasingly be handled by AI. What remains is irreducibly human.

This week, try holding both ends of the stick with one AI tool you use regularly. Before your next significant task, invest five minutes in "onboarding" - provide context about what you're trying to accomplish, why it matters, and what good looks like. Then, as you review the output, consciously note where the AI exceeded what a human could do and where it fell short of human judgment.

That's the calibration hybrid team leaders need to develop: knowing when to lean on AI capability and when to apply human wisdom.

Originally published in Think Big Newsletter #15.