In 1997, Garry Kasparov became the first person to "lose his job" to AI when IBM's Deep Blue defeated him in chess. For years, that defeat symbolized humanity's vulnerability to machines. Yet twenty years later, Kasparov took the TED stage with a surprising message: don't fear intelligent machines, work with them. This was before the world had heard of ChatGPT and generative AI. It even predated "Attention Is All You Need," the 2017 paper that introduced the transformer architecture now at the heart of large language models. Yet many of the insights Kasparov shared in that talk nine years ago speak directly to the dilemmas we face as leaders today.
Amazon's Earn Trust leadership principle states: "Leaders listen attentively, speak candidly, and treat others respectfully. They are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team's body odor smells of perfume. They benchmark themselves and their teams against the best."
Many aspects of Earn Trust are worth considering in the age of AI. How do we build and maintain the trust of employees and customers in the outputs of AI systems and agents? How do we earn the trust of our employees and partners in our intentions and actions when we introduce AI tools and agents? And how do we redesign our processes so that humans retain the observability and confidence to be vocally self-critical of the results they see?
Kasparov recounts an early experiment in human-AI collaboration: "Advanced Chess," which he launched in 1998, paired human players with chess engines. In a freestyle tournament that followed in 2005, the winners were neither the team with the best chess grandmaster nor the one with the most advanced computers. They were two amateur chess players working with three regular PCs. They had learned to design a new human-AI collaboration process, unhindered by old conventions. Kasparov reached an interesting conclusion: "A weak human player plus a machine plus a better process is superior to a very powerful machine alone, but more remarkably, is superior to a strong human player plus machine and an inferior process."
Now replace the context of a chess competition with that of software development, a legal process, a medical decision, or a scientific experiment. Or replace it with whatever business process you care about. It means less experienced people, perhaps not even using a state-of-the-art AI tool, would perform better if only they had a better process - and, if I may add, if the humans knew where and when to be vocally self-critical of the results and steer the machine in the right direction.
Being vocally self-critical with AI requires new habits. It's not enough to accept outputs at face value. When a team member presents an AI-generated analysis, the response that earns trust isn't "looks good" - it's "I tested this against three edge cases and found the model breaks when handling seasonal variations. Here's what I recommend we adjust." When an AI agent completes a customer interaction, reviewing the reasoning trace and asking "why did it escalate at this point rather than earlier?" builds the kind of understanding that leads to continuous improvement.
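To make that first example concrete in a software context, here is a minimal sketch of what "I tested this against three edge cases" can look like in practice. Everything in it is hypothetical: forecast_demand stands in for whatever AI-generated code or analysis a teammate presents, and the three checks probe exactly the kind of seasonal behavior mentioned above instead of accepting the output at face value.

```python
# Hypothetical edge-case checks for an AI-generated forecasting function.
# forecast_demand() is a stand-in, not a real library call; the point is
# the habit of probing edge cases before declaring an output trustworthy.

import math

def forecast_demand(history: list[float]) -> float:
    """Placeholder for an AI-generated forecaster (assumed, not real)."""
    # Naive implementation for the sketch: average of the history.
    return sum(history) / len(history) if history else 0.0

def test_empty_history():
    # Edge case 1: no data at all should not crash or return NaN.
    result = forecast_demand([])
    assert not math.isnan(result)

def test_single_observation():
    # Edge case 2: one data point should be echoed back, not extrapolated.
    assert forecast_demand([100.0]) == 100.0

def test_seasonal_swing():
    # Edge case 3: a strong seasonal pattern. A plain average flattens the
    # swing - precisely the "breaks on seasonal variations" finding above.
    history = [10.0, 90.0] * 6  # alternating low/high seasons
    result = forecast_demand(history)
    assert 10.0 <= result <= 90.0  # sanity bound; a real test would check more

if __name__ == "__main__":
    for test in (test_empty_history, test_single_observation, test_seasonal_swing):
        test()
    print("All edge-case checks passed - now we can say why we trust the output.")
```

The specific assertions matter less than the posture: the reviewer can now say precisely where the output holds up and where it breaks, which is what vocal self-criticism sounds like in code.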
This is a muscle that organizations need to develop: the willingness to inspect, question, and improve AI outputs openly, even when - perhaps especially when - the initial results look impressive.
When trust is established through systematic approaches and well-designed processes, dramatic improvements follow. According to Salesforce's 2025 State of IT report, employees who trust AI are nearly 10 times more likely to see agentic AI as critical to their work, use generative tools three times more often, and gain roughly two hours of productivity per week compared to their skeptical peers.
But perhaps the most important outcome is this: trust creates a foundation for vocal self-criticism. In high-trust environments, employees feel safe reporting when AI fails, when outputs are biased, when systems aren't working. This feedback loop - impossible in low-trust environments where employees fear being blamed - enables continuous improvement.
So what does Earn Trust look like in the AI age? It means being vocally honest about the 46% of AI pilots that get scrapped before reaching production. It means speaking candidly about AI's limitations - the biases, the hallucinations, the black-box nature of some decisions. It means treating employees respectfully by acknowledging that their fears about job displacement aren't irrational but grounded in real industry trends.
And critically, it means delivering on commitments. If you promise AI will augment rather than replace, demonstrate it by investing in people's skills. If you promise transparency, implement explainability tools. If you promise partnership, co-create the systems with the workers who will use them.
As Kasparov discovered after his loss to Deep Blue, the most powerful combination isn't human OR machine - it's a weak human plus a machine plus a better process, which beats the strongest machine alone. But achieving that combination requires something essential from leaders: trust. And trust, as Amazon's principle teaches us, is earned through vocal self-criticism, candid communication, and respectful treatment - especially when it's awkward or embarrassing to acknowledge what's not working.
Watch Kasparov explain his journey from AI adversary to collaboration advocate, and why "machines have calculations; humans have understanding. Machines have instructions; we have purpose":