
Bias for Action: Moving Quickly in the Age of AI

Leadership Principles in the age of AI - move quickly with Bias for Action

Leadership, AI Strategy, Amazon, AWS, Enterprise AI


When I was leading the customer training team at AWS, one of my team members came to me with an idea for a new offering she wanted to try with customers. She explained her approach and asked what I thought she should do.

I told her that I didn't think I had more information than she did, as she was closer to the customers. Instead, I asked her, "Is this a two-way or a one-way door?"

She thought for a moment and said, "It is a two-way door. If it doesn't work, we can pivot and try something else."

"Then go ahead," I told her. "Try it, learn from it, and come back to share with the team what you discovered."

She did, and we all learned from it.

This is exactly what Amazon's "Bias for Action" leadership principle exists to enable. The principle states: "Speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk taking."

The principle introduces a powerful mental model: the one-way door versus the two-way door. A one-way door is a decision you can't easily reverse - like acquiring a company, shutting down a product line, or making fundamental architecture choices. These deserve deep analysis. A two-way door is something you can walk back from if it doesn't work - changing a feature, testing a new workflow, or piloting a tool. These should be decided quickly, often by the people closest to the problem.

Unfortunately, many teams and leaders apply the same approach to every type of decision. The vast majority of decisions are two-way doors, or can be broken down into two-way doors, yet we treat them all like one-way doors - escalating everything up the chain and drowning in analysis paralysis.

Why This Matters More in the AI Era

AI amplifies both the opportunity and the risk of moving too slowly. The technology landscape shifts weekly - new models, capabilities, and tools emerge constantly. What seemed impossible last month becomes routine today. Organizations that wait for certainty will always be six months behind.

On the flip side, AI also makes it easier to move fast. You can prototype in hours what used to take weeks. You can test assumptions with synthetic data before committing resources. You can run multiple experiments or scenarios in parallel. The technology itself enables and even scales the two-way door approach.

When leaders approach me about AI initiatives, I often hear: "We're waiting for the regulation to be clearer" or "We want to see what the market does first" or "We need to understand all the implications before we start." These sound prudent, but they're often disguised inaction.

In my experience, you learn more from one week of hands-on experimentation than from three months of analysis. Your team builds AI literacy by using AI, not by attending webinars about it. Your competitors aren't waiting - they're learning through doing.

The AI Bias for Action Framework

Here is a suggested way to apply this leadership principle to your AI initiatives:

  1. Classify your AI decisions explicitly

Before any discussion, ask: Is this a one-way or two-way door? If someone claims it's one-way, challenge them: "What would it take to reverse this if we're wrong?" Often, you'll discover it's reversible with clear exit criteria and limited investment.

  2. Push decisions to the edge

The people closest to the customer, the code, or the problem should make two-way door decisions. When a team member asks for approval on a pilot AI tool, turn it around: "You know the constraints - security, budget, customer impact. Make it a two-way door and try it. Come back with learnings in two weeks."

  3. Set clear decision-making authority

Define explicitly: What can individuals decide? What needs team consensus? What requires leadership approval? AI projects die in the gaps left by unclear authority. Working with one enterprise client team, we got into pilot and internal testing in 2-3 weeks instead of four months, and gained a wealth of early insights as a result.

  4. Build learning loops, not permission loops

Replace approval gates with learning gates. Instead of "Can we try this AI tool?" the question becomes "What do we need to learn from this pilot, and how quickly can we learn it?" This shifts the conversation from risk avoidance to knowledge creation.

The Trap to Avoid

Bias for action doesn't mean reckless action. It means calculated risk-taking with clear learning objectives (note the last part of the leadership principle). Some AI decisions genuinely are one-way doors - training custom models on sensitive data, committing to specific vendor platforms at scale, or automating decisions that impact people's lives. These deserve rigor.

The trap is letting fear disguise itself as prudence. When someone says "We need more time to evaluate," ask: "What specifically would we learn from more evaluation that we couldn't learn from a small, reversible pilot?" Often, there's no good answer.

Your Action Step

Look at your current AI initiatives. How many are stuck in analysis? Pick one and ask: If this fails, what's the actual cost - not the imagined catastrophe, but the real cost? If that cost is acceptable, break it into the smallest possible two-way door and open it this week.

Speed compounds. Organizations that build a culture of rapid AI experimentation don't just move faster - they learn faster, adapt faster, and ultimately build better solutions. The biggest risk isn't making mistakes with AI. It's learning too slowly while your market transforms around you.

Originally published in Think Big Newsletter #2.

Subscribe to Think Big Newsletter