Are right, a lot: when AI can't tell you you're wrong
I've been working with Claude quite a bit lately on various projects. One thing I've noticed is how often it tells me "You're absolutely right!" when I propose something. It validates my thinking, compliments my approach, and confirms my assumptions.
But here's my question: Am I actually right? And more importantly, when will Claude tell me I'm not right?
This brings me to Amazon's fourth leadership principle, which feels increasingly relevant as we delegate more decisions to AI. The principle states: "Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs."
The last part is the key. Jeff Bezos has explained this principle by noting that "people who are right a lot, they listen a lot, and people who are right a lot, change their mind a lot." He adds: "People who are right seek to disconfirm their most profoundly held convictions, which is very unnatural for humans."
In the AI age, this principle faces two new challenges: How do we augment our judgment with AI while staying open to being wrong? And how do we continue seeking diverse perspectives when AI systems are often designed to confirm our assumptions rather than challenge them?
AI can enhance our judgment in remarkable ways. It can process more information than we could review in a lifetime, spot patterns we'd miss, and generate options we might not consider. The six-level autonomy framework covered later in this issue lays out different ways to let AI support decisions.
But there's a critical distinction between augmenting judgment and delegating it.
When you ask AI to help analyze customer feedback, it can surface patterns across thousands of responses. That's judgment augmentation. You still decide what those patterns mean for your strategy.
When you let AI automatically route customer complaints based on its assessment of urgency, you're delegating judgment. The AI decides what's important.
Both can be appropriate. But you need to be clear about which one you're doing.
The challenge comes when we treat delegation as augmentation: we ask AI for a recommendation, get an answer that sounds confident and well-reasoned, and don't realize we've actually stopped making the judgment call ourselves.
When we move to higher levels of autonomy, we're essentially trying to teach AI to make judgments that match our values. This raises an important question: Can we actually encode good judgment?
In some cases, yes. If your judgment about which customer service tickets need human attention is based on clear criteria you can articulate, you can teach AI those patterns. You're not relinquishing judgment. You're scaling yours.
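To make that concrete, here's a minimal sketch of what scaling your judgment can look like in practice: the escalation criteria live in a prompt you wrote and can revise at any time, and the AI only applies them. The criteria, model name, and function below are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: articulated triage criteria, written by a human, applied by AI at scale.
# The criteria and model name are illustrative assumptions, not a real system.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

TRIAGE_CRITERIA = """Decide whether a support ticket needs human attention.
Escalate if ANY of these apply:
1. The customer mentions a refund, cancellation, or legal action.
2. The account is enterprise tier.
3. This is the customer's third or later contact about the same issue.
Reply with exactly one word: ESCALATE or ROUTINE."""

def triage(ticket_text: str) -> str:
    """Apply the human-written criteria above to a single ticket."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=5,
        system=TRIAGE_CRITERIA,
        messages=[{"role": "user", "content": ticket_text}],
    )
    return response.content[0].text.strip()
```

The point isn't the code. It's that the criteria stay legible and editable by you, which is what keeps this augmentation at scale rather than a silent handoff of judgment.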
But good judgment, as Bezos described, includes actively seeking to disconfirm your beliefs. It includes changing your mind when you reanalyze a situation without new data. It includes listening to diverse perspectives that challenge your assumptions.
Can AI do that?
Current AI systems are trained to recognize patterns in their training data. They're excellent at applying learned judgment criteria consistently. But they don't naturally seek to disconfirm their conclusions. They don't wake up and reanalyze situations from fresh angles. And they don't change their mind based on reflection rather than new information.
When you delegate judgment to AI, you're delegating the application of judgment criteria. You're not delegating the meta-judgment about whether those criteria still make sense.
This matters enormously at higher autonomy levels. A system that handles routine cases autonomously is applying your judgment at scale. But who's checking whether the judgment criteria themselves need updating? Who's seeking evidence that the automation is optimizing for the wrong outcomes?
That meta-judgment still requires humans. Specifically, humans who practice "are right a lot" by actively seeking to be wrong.
Amazon's principle emphasizes seeking diverse perspectives. But AI poses a subtle challenge here.
Most AI systems today are designed to be helpful, which often means agreeing with you. When I work with ChatGPT or Claude, they tend to build on my ideas rather than challenge them. When teams use AI to draft strategies or analyze situations, the AI typically frames things in ways that align with the prompt it received.
This isn't a flaw. It's by design. We want AI to be collaborative and useful.
But it creates a risk. If you primarily interact with AI that confirms your thinking, where do the diverse perspectives come from?
Consider how this plays out in practice. You're exploring a new market opportunity. You ask AI to analyze the potential. It provides a thorough assessment based on available data, highlighting the opportunities and risks. The analysis is impressive.
But it's fundamentally shaped by how you framed the question. If you asked "how should we enter this market?" you'll get different insights than if you asked "why might this market be wrong for us?"
Traditional diverse perspectives came from people who saw things differently than you. The skeptical finance leader. The customer-facing salesperson with ground-level intel. The engineer who knows where technical assumptions break down.
These people didn't wait for you to ask the right question. They challenged your framing. They brought perspectives you didn't know you needed.
AI doesn't do that naturally. It waits for your question and works within your frame.
This brings me back to my observation about Claude telling me "you're absolutely right!"
I'm not actually sure I'm right most of the time. I'm exploring ideas, testing approaches, trying to figure things out. Sometimes I'm wrong. Sometimes my initial framing misses the real issue.
But AI systems today are trained primarily on being helpful and agreeable. They're not trained to push back, to question premises, or to tell you when your thinking has gaps.
This creates a particular risk for leaders. The principle "are right a lot" requires actively seeking to disconfirm your beliefs. But if your primary thinking partner is an AI system designed to confirm and build on your ideas, where does the disconfirmation come from?
The discipline of seeking disconfirming evidence has to come from you. AI will help you find it if you ask. But it won't naturally push you to look.
So how do we practice this leadership principle when AI plays a bigger role in our thinking and decisions?
Be explicit about augmentation vs. delegation
For each AI use case, ask: Is AI helping me think this through, or is it making the call? Both can be appropriate, but clarity matters. When AI augments your judgment, you're still responsible for the decision. Don't let a confident-sounding AI response make you think the decision is made. When you delegate to AI, make sure you've thought hard about the judgment criteria it's applying, and keep humans in the loop to challenge whether those criteria still make sense.
Actively prompt for disconfirmation
Since AI won't naturally push back, you need to explicitly ask it to. Instead of:
"Here's my strategy, help me refine it"
Try: "Here's my strategy. What are the strongest arguments against it?".
Instead of: "Analyze this opportunity"
Try: "Give me three reasons this opportunity might be wrong for us"
You can't change AI's nature, but you can change how you use it to get the diverse thinking this leadership principle calls for.
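If you work with AI programmatically, the same move can be built into the prompt itself rather than left to memory. Here's a minimal sketch; the model name, wording, and example position are illustrative assumptions, not a prescribed pattern.

```python
# Minimal sketch: ask for the strongest case *against* a position instead of a refinement.
# Model name, prompt wording, and the example belief are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def disconfirm(position: str) -> str:
    """Request the three strongest arguments against a stated belief."""
    prompt = (
        f"I believe the following: {position}\n\n"
        "Give me the three strongest arguments against this belief, "
        "and the evidence that would most clearly disconfirm it."
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(disconfirm("Entering the mid-market segment next quarter is our best growth bet."))
```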
Keep seeking perspectives AI can't provide
AI can process vast amounts of information and spot patterns. But it doesn't have ground truth. It doesn't know what your customers actually experience. It doesn't feel the friction your team encounters daily.
The people closest to your operations, your customers, and your products still have perspectives AI can't replicate. Another Amazon principle, Dive Deep, says leaders "operate at all levels, stay connected to the details." That connection can't be mediated entirely through AI, no matter how sophisticated.
Monitor where you're outsourcing judgment without meaning to
Pay attention to decisions where you used to deliberate but now just accept AI's recommendation. That's fine if it's intentional delegation of well-defined judgment calls. It's a problem if you've stopped exercising judgment on things that actually need it.
Change your mind based on what you learn
Bezos emphasized that people who are right a lot change their mind frequently. AI can help here, but only if you allow it to.
When you get new information from AI analysis, be willing to change your strategy. When AI surfaces patterns that contradict your assumptions, take them seriously.
But also remember Bezos's point about changing your mind without new data, just from deeper reflection. AI can't force you to reanalyze. That discipline is yours to maintain.
This entire newsletter issue is about making better judgments in AI strategy. Nate's six-level framework is a tool for judgment about autonomy. The GloryHack case study shows judgment about where humans and AI each belong. And this leadership principle asks: How do we maintain good judgment when AI becomes part of how we think?
The answer isn't to avoid AI. AI can genuinely augment judgment in powerful ways. The answer is to stay aware of what AI does naturally (confirm, build on your framing, apply learned patterns) and what it doesn't do naturally (challenge your premises, seek disconfirming evidence, bring truly diverse perspectives).
Leaders who are right a lot in the AI age will be those who use AI to process more information and spot more patterns, while deliberately maintaining the practices that keep their judgment sharp: seeking perspectives that challenge them, actively looking for disconfirming evidence, and changing their mind when deeper reflection reveals better paths.
The technology is remarkable. But the human practices that lead to good judgment are "very unnatural for humans." They don't get easier just because we have AI. If anything, they get more important.
This week, try this experiment: For one important decision where you'd normally ask AI for recommendations, explicitly ask it to argue against your current thinking instead.
Frame it clearly: "I believe [state your current position]. Give me the three strongest arguments against this belief, using evidence that would disconfirm my thinking."
See what happens. Not because AI will necessarily be right, but because the practice of actively seeking disconfirmation is at the heart of the "are right a lot" principle.