
Six thinking hats for hybrid teams

Edward de Bono's 1985 framework for separating thinking modes turns out to be the most practical guide for leading teams where some members are human and some are AI. Parallel thinking is what agentic AI has been quietly implementing all along.

Leadership · Six Thinking Hats · Hybrid Teams · AI Agents · Decision Making · Human-AI Collaboration

Years ago, when I worked at the Branco Weiss Institute, we taught teachers and students how to think using Edward de Bono's frameworks: lateral thinking, parallel thinking, the Six Thinking Hats, and more. The core idea of Six Thinking Hats is simple. When a group tries to be creative, critical, and factual all at the same time, they get confused. Separate the thinking modes, give each mode its own space, and then combine. I've used this framework countless times since in innovation engagements to help executives and product teams dive deep into the potential and complexities of new ideas.

I didn't realize that we were implementing the architecture of AI agent teams twenty years early.

In issue #25, we explored the emotional side of hybrid human-AI teams: the fear, the excitement, the trust gap. That was an exploration of the Red Hat. This week I want to zoom out and look at all six hats, because the framework Edward de Bono created in 1985 turns out to be one of the most practical guides I've found for leading teams where some members are human and some are AI.

The framework in 60 seconds

De Bono's insight was that the biggest enemy of thinking is confusion. We try to do too much at once: emotions, data, creativity, judgment all crowd in simultaneously. His solution was six colored hats, each representing a thinking mode. Everyone wears the same hat at the same time (he called this "parallel thinking"), then switches together. Every hat matters, and so does the sequence in which the group wears them.

  • White Hat - Facts, data, and statistics. What do we know? What do we need to find out? What are the data sources?
  • Red Hat - Feelings and intuition. Gut reactions, no justification required.
  • Yellow Hat - Logical optimism and value. What are the benefits and opportunities, and for whom? What could go right?
  • Black Hat - Logical caution and critical judgment. What are the risks, pitfalls, what could go wrong, and who stands to lose?
  • Green Hat - Creativity and alternative solutions. New ideas, lateral thinking, provocations.
  • Blue Hat - Process, control, and decision-making. What thinking is needed right now? What's the agenda? What have we concluded?

Companies like IBM, Siemens, Shell, and JP Morgan have used this framework for decades. The de Bono Group claims companies using it reduced meeting times by 75%.

Agentic AI already wears the hats

I recently revisited this framework through the lens of agentic AI. I think we've actually been building de Bono's architecture into AI systems; we just didn't call it that.

Research agents that gather data and check facts? That's the White Hat, and we expect them to be factual and get frustrated when they aren't. Code review and validation agents that scan for risks and vulnerabilities? That's the Black Hat. Brainstorming agents that generate alternatives and make lateral connections? Classic Green Hat. Planning agents that map opportunities and assess feasibility? Yellow Hat, with a touch of Blue. And orchestrator agents that manage workflow, route tasks, and synthesize results? That's the Blue Hat.

The agentic AI world rediscovered parallel thinking. Separate specialized perspectives, run them in structured sequence, then combine the results. De Bono described this pattern forty years before CrewAI, AutoGen, or LangGraph existed.

This isn't just a neat analogy. A team at Google Research tested 180 different agent configurations across five architectures. They found that centralized orchestration, a Blue Hat agent coordinating the others, delivered an 80.9% performance boost on parallelizable tasks. Without an orchestrator, error amplification was 17.2x. With one, it dropped to 4.4x. The Blue Hat, deciding which thinking mode is needed when, is the most critical role in a multi-agent system.
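To make the pattern concrete, here is a minimal sketch of Blue Hat orchestration in plain Python: a central coordinator runs specialized "hat" modes in a deliberate sequence and collects their contributions. All names are illustrative (no framework's actual API), and the hat functions are stubs standing in for real agents.

```python
# A minimal sketch of centralized orchestration: one Blue Hat coordinator
# decides which thinking mode runs when, in sequence, and gathers results.
# Hat functions are stubs; in a real system each would call an LLM agent.
from typing import Callable, List

Hat = Callable[[str, List[str]], str]

def white_hat(question: str, notes: List[str]) -> str:
    # Facts and data mode
    return f"Facts: what do we actually know about '{question}'?"

def green_hat(question: str, notes: List[str]) -> str:
    # Creative alternatives mode
    return f"Alternatives: what other approaches to '{question}' exist?"

def black_hat(question: str, notes: List[str]) -> str:
    # Risk and caution mode
    return f"Risks: what could go wrong with '{question}'?"

def blue_hat_orchestrate(question: str, sequence: List[Hat]) -> List[str]:
    """The Blue Hat: runs one mode at a time, in order, and synthesizes."""
    notes: List[str] = []
    for hat in sequence:
        notes.append(hat(question, notes))  # each mode sees prior notes
    return notes

notes = blue_hat_orchestrate("adopt AI code review",
                             [white_hat, green_hat, black_hat])
```

The design choice mirrors the Google Research finding: the coordinator, not the individual agents, owns the sequencing, which is what keeps errors from amplifying across modes.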

If you've followed the "stream of thought" for a complex task you gave Claude or an OpenAI model in the past few months, you'll have noticed a similar pattern. It spins off subagents for research, ideation, and planning, summarizes results, makes decisions, reviews and checks them, and so on.

De Bono would have recognized this immediately.

Hats are not fixed identities

A common mistake is treating the hats as fixed identities instead of thinking modes.

Assigning one person as "the creative" and another as "the critic" is not parallel thinking. It's personality typing, and it leads to the same adversarial debates de Bono was trying to eliminate. The whole point is that everyone (human and AI) switches modes together. White Hat time means everyone gathers data. Black Hat time means everyone looks for risks, including the person who proposed the idea.

With AI, this trap manifests as building agents that are permanently locked into one mode. A review agent that only criticizes. A brainstorming agent that never considers constraints. The best multi-agent architectures, like the Devil's Advocate system described by Dr. Jerry Smith, use structured rotation. The worker proposes, the advocate challenges, the reviewer synthesizes, and then they rotate. In one case study, this process went through four rounds and reached 92% confidence on a solution that started as a basic first draft.

Parallel thinking means switching modes deliberately, not assigning permanent roles.
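The rotation idea can be sketched in a few lines. This is a hypothetical illustration, not the actual Devil's Advocate implementation: the same three agents cycle through proposer, challenger, and reviewer roles each round, so no agent is permanently locked into one mode.

```python
# Sketch of structured role rotation: roles shift across agents each round,
# so every agent eventually proposes, challenges, and reviews.
# Agent names and role labels are hypothetical.
from collections import deque

def run_rounds(agents, n_rounds):
    roles = deque(["proposer", "challenger", "reviewer"])
    assignments = []
    for _ in range(n_rounds):
        # Record which agent holds which role this round
        assignments.append(dict(zip(roles, agents)))
        roles.rotate(1)  # everyone switches modes together
    return assignments

rounds = run_rounds(["agent_a", "agent_b", "agent_c"], 4)
# Across four rounds, agent_a has proposed, challenged, and reviewed.
```

Contrast this with a fixed assignment: without rotation, "the critic" never has to defend an idea and "the creative" never has to stress-test one, which is exactly the adversarial split the hats are meant to dissolve.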

Use Six Thinking Hats for your AI use case decisions

Last week I was in a discussion about a potential AI use case with a customer team. It was a messy meeting, jumping between arguments, counterarguments, partial facts, strong emotions, and suggested action alternatives, with (not surprisingly) no clear decision in the end. I believe that if we had used the Six Hats method in that session, we would have had much more clarity about the opportunity, its drawbacks, the solutions available, and the actions we could take. If you don't yet have a framework for discussing your AI use cases, Six Thinking Hats is a good starting point.


Your action step

This week, before your next team decision, whether it involves AI agents or not, try a simple hat sequence. Spend five minutes with the team on White (what do we actually know?), five on Green (what alternatives haven't we considered?), five on Black (what could go wrong?), and five on Red (how does everyone feel about the direction?). End with Blue (what did we decide, and what's next?).

You'll notice something immediately. The conversation quality changes when people aren't simultaneously trying to align on facts, be creative, and be critical. Separate the modes. Then combine.

If you're working with AI agents, take it a step further. Look at your agent setup and ask: which hats are covered? Most teams over-index on White (research) and Black (review) while neglecting Green (creative alternatives) and Yellow (opportunity mapping). And almost everyone skips Red, the human judgment the framework was built to protect.

If you'd like to bring Six Thinking Hats into your organization's AI strategy discussions, or have me facilitate an innovation session using this framework, I'd love to help.

Frequently Asked Questions

What are Edward de Bono's Six Thinking Hats?
Six Thinking Hats is a decision-making framework created by Edward de Bono in 1985. Each colored hat represents a thinking mode: White (facts and data), Red (emotions and intuition), Yellow (optimism and benefits), Black (critical caution and risk), Green (creativity and alternatives), and Blue (process and decisions). Everyone wears the same hat at the same time, then switches together. That is parallel thinking.
How do the Six Thinking Hats apply to AI agent teams?
Multi-agent AI systems map naturally onto the six hats. Research agents embody the White Hat, code review and validation agents the Black Hat, brainstorming agents the Green Hat, planning agents the Yellow Hat, and orchestrator agents the Blue Hat. A Google Research study on 180 agent configurations found that centralized Blue Hat orchestration delivered an 80.9% performance boost and reduced error amplification from 17.2x to 4.4x.
Why should leaders use Six Thinking Hats for AI use case decisions?
Decisions about AI use cases often collapse into messy meetings where facts, emotions, risks, and alternatives crowd in at once. Separating the thinking modes with a short hat sequence gives the conversation structure: five minutes on White (what do we know), Green (what alternatives), Black (what could go wrong), Red (how does the team feel), then Blue to decide. The conversation quality changes immediately.

Originally published in Think Big Newsletter #26 on Amir Elion's Think Big Newsletter.
