
Hybrid Team Emotions: Designing for Feeling in the Age of AI

84% of workers are eager to embrace AI, and 56% simultaneously worry about job security. Leaders who ignore this duality lose both trust and momentum. A framework for treating emotion as a design variable in hybrid teams.

Leadership · AI Strategy · Amazon · Hybrid Teams · Human-AI Collaboration · Trust

When I deliver innovation workshops and we've identified and prioritized one potential idea, the next step I guide teams through is diving deeper into that idea to understand its potential and complexity. I often do that with a guided discussion sequence based on de Bono's Six Thinking Hats. More on that in a future issue, but for now I'll mention the Red Hat. That's the hat I start and end the discussion with, and it's used to map the emotions in the room about the new idea. It's completely understandable for some people to be excited, some to be confused, and some to have mixed feelings. And it's also completely fine to feel differently about the idea an hour later, after the discussion.

That Red Hat moment, mapping the emotional landscape before and after a decision, applies far beyond innovation workshops. The same duality shows up every time a team encounters AI. And the frameworks we've been exploring in this section all depend on it. If you've been following this section since we started exploring meta-frameworks (Team Tenets in Issue #22, Working Backwards in #23, the Jazz Model in #24), you've seen how each mechanism assumes a certain emotional baseline. Tenets work when people trust the process. Working Backwards works when people feel safe admitting what they don't know. The Jazz Model works when the band members create a tone that invites improvisation. Pull the emotional foundation out, and every framework wobbles.

How AI actually impacts emotion in hybrid teams

A recent EY survey of over 1,100 workers found that 84% are eager to embrace AI. But 56% simultaneously worry about job security. Same people, same survey. They aren't confused; they're reflecting the authentic emotional reality of the AI transition. Another study puts an even finer point on it: 52% of workers worry about AI's impact on their jobs, and 64% report "job hugging," clinging to their current roles despite burnout because the alternative feels worse. The term FOBO (Fear of Becoming Obsolete) entered the Cambridge Dictionary in January 2026. When your workforce's anxiety has its own dictionary entry, you're past the point where a town hall and a FAQ will suffice.

And yet, the emotional picture isn't uniformly dark. A rigorous randomized controlled trial by Ethan Mollick and Procter & Gamble, studying 776 professionals, found that people working with AI as a "cybernetic teammate" reported higher positive emotions (excitement, energy, enthusiasm) and lower negative emotions compared to human-only teams. Individuals paired with AI matched the performance of traditional teams. Perhaps most striking was that AI democratized expertise. Junior professionals performed at the level of seniors. The emotional boost and the performance boost went hand in hand.

So which is it? Both. And the leader's job is to hold that tension, not pretend to resolve it with a motivational slide deck.

Emotion as a design variable

What if you treated emotion as a design variable, something you architect into team operations rather than something you hope resolves itself? Here's a framework built from several converging sources.

Name the duality. The EY data shows that excitement and anxiety coexist. Leaders who acknowledge only the upside ("This is going to be amazing!") lose credibility with the 56% who are worried. Leaders who dwell on risk paralyze the 84% who are ready to move. Start team conversations by surfacing both. Say it plainly: "You can be excited about this and nervous about it at the same time. That's not a contradiction, that's accurate."

Build epistemic safety. Harvard's Amy Edmondson and Jayshree Seth introduced this concept in HBR earlier this year: the feeling that it's safe to challenge AI outputs. In hybrid teams, AI often delivers answers with high confidence and zero hesitation. That creates what Edmondson calls "trust ambiguity," where people default to accepting AI recommendations because questioning a machine feels different from questioning a colleague. Psychological safety has to be extended to cover human-AI interactions. Reward the person who says "I think the model is wrong here." Make that a team norm, not an act of bravery.

Redesign accountability for hybrid work. Patrick Lencioni's classic pyramid (Trust, Conflict, Commitment, Accountability, Results) was built for all-human teams. A recent adaptation for hybrid teams surfaced a gap I keep seeing in practice: AI agents don't self-regulate. They execute consistently regardless of whether they're inside or outside their competence boundaries. That means accountability in a hybrid team can't follow the old pattern. Humans remain accountable for AI outputs. This needs to be stated explicitly and reinforced, ideally in your team tenets.

Invest in emotional literacy about AI itself. Anthropic's April 2026 research identified 171 emotion-related concepts operating within Claude, describing them as "functional emotions": not feelings in the human sense, but internal states that causally influence behavior. Steering a model toward "calm" reduced unethical shortcuts; inducing "desperation" increased them. The researchers concluded that anthropomorphic reasoning isn't sloppy thinking. It's essential for understanding how AI systems behave. Your team doesn't need a PhD in machine learning, but they do need a working vocabulary for how their AI teammates operate under different conditions.

The ongoing work of emotional leadership

The trap most leaders fall into is treating the emotional dimension as an onboarding problem. A workshop during rollout, a Q&A session, maybe a Slack channel for concerns, then back to business. But the EY survey found that 53% of managers feel unprepared to supervise hybrid teams, and 83% of workers' AI knowledge is self-taught. The emotional landscape shifts as capabilities change, as roles evolve, as the team discovers what AI does well and where it fails. And it keeps shifting. The emotional work is ongoing, built into how the team operates week to week. Think of the Jazz Model: the bandleader doesn't set the emotional tone once and walk offstage. They listen and adjust in real time.

Your action step

At Amazon, we had an internal tool called Pulse: one short question every day about how people felt about their team, their manager, inclusion, the tools they had. Sometimes multiple choice, sometimes open text. Monthly, managers would share the anonymized trends with their team and use them as a starting point for real conversation. It was simple and lightweight, and it surfaced what people wouldn't say unprompted (pun intended).

It's time to add AI to that pulse. If your organization runs engagement surveys or team health checks, start including questions about how people feel about working alongside AI agents. Go beyond "are you using AI?" and ask "do you feel your role is evolving in a direction you're comfortable with?" and "do you feel safe pushing back when an AI output doesn't seem right?" If you don't have a pulse tool, start with the simplest version: this week, ask your team two questions. What's one thing about working with AI that energizes you? and What's one thing that makes you uneasy? Don't problem-solve or reassure. Just listen and write down the answers. Then share them back. You'll probably surface the duality, and you'll signal that both sides of it are legitimate. That's where trust in a hybrid team starts.
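If your team already lives in scripts, even the simplest version of this pulse can be a few lines of code. Here's a minimal Python sketch, purely illustrative: the two questions are the ones above, but the function name and sample answers are hypothetical, and this is not Amazon's Pulse tool. It simply groups anonymized free-text answers by question so you can share them back.

# Minimal two-question AI pulse check: collect anonymous free-text
# answers and group them by question so they can be shared back.
# Everything except the question wording is a hypothetical sketch.

QUESTIONS = [
    "What's one thing about working with AI that energizes you?",
    "What's one thing about working with AI that makes you uneasy?",
]

def summarize(responses):
    """Group anonymized answers by question, ready to read back verbatim."""
    summary = {q: [] for q in QUESTIONS}
    for response in responses:
        for q in QUESTIONS:
            answer = response.get(q, "").strip()
            if answer:
                summary[q].append(answer)
    return summary

if __name__ == "__main__":
    # Stand-in data for a real anonymous form export (e.g., a CSV download).
    responses = [
        {QUESTIONS[0]: "First drafts take minutes, not hours.",
         QUESTIONS[1]: "I'm not sure my role exists in two years."},
        {QUESTIONS[0]: "Less grunt work.",
         QUESTIONS[1]: "I don't always trust its answers."},
    ]
    for question, answers in summarize(responses).items():
        print(f"\n{question}")
        for answer in answers:
            print(f"  - {answer}")

Notice what the sketch deliberately doesn't do: score, cluster, or sentiment-analyze. Like the Red Hat exercise, the point is to map feelings and read them back verbatim, not reduce them to a metric.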

I recommend watching Karen Ng, EVP of Product at HubSpot, discuss "Hybrid AI Teams Are Here: What Happens When AI Becomes Your Teammate?" on TECHtalk. She shares practical examples, including AI resolving a majority of HubSpot's support tickets, and offers a three-phase blueprint for getting started. What I appreciate most is how she addresses the human side: onboarding agents like new hires, governance and trust as prerequisites for real adoption, and focusing humans on what they do best.

Frequently Asked Questions

How does AI impact emotions in hybrid teams?
AI creates a genuine emotional duality in teams: an EY survey found 84% of workers are eager to embrace AI while 56% simultaneously worry about job security. Research also shows 64% report "job hugging," clinging to current roles despite burnout. However, a randomized controlled trial found people working with AI as a "cybernetic teammate" reported higher positive emotions and lower negative emotions compared to human-only teams.
What is epistemic safety in AI teams?
Epistemic safety, introduced by Harvard's Amy Edmondson and Jayshree Seth, is the feeling that it's safe to challenge AI outputs. Because AI delivers answers with high confidence and zero hesitation, people default to accepting recommendations rather than questioning them. Leaders must extend psychological safety to cover human-AI interactions, rewarding team members who say "I think the model is wrong here."
How should leaders manage emotions when introducing AI to teams?
Leaders should treat emotion as a design variable, not something they hope resolves itself. This means naming the duality of excitement and anxiety openly, building epistemic safety so people can challenge AI outputs, redesigning accountability for hybrid work where humans remain responsible for AI outputs, and investing in emotional literacy about how AI systems actually behave.

Originally published in Issue #25 of Amir Elion's Think Big Newsletter.

Subscribe to Think Big Newsletter