
Insist on the Highest Standards in the Age of AI

Insist on the Highest Standards - your role as a leader in the age of AI.

Leadership, AI Strategy, Amazon, Trust, RAG, Productivity


Your CEO doesn't personally audit the books, but she spot-checks the numbers that matter. Your product manager didn't write the authentication code, but he tests every edge case until something breaks. Your CTO can't review every pull request, but he ensures defects don't reach production. This is how leaders manage outputs they don't fully create - by setting standards and building verification systems. Amazon's seventh leadership principle - "Insist on the Highest Standards" - was built for exactly this challenge.

Leaders have relentlessly high standards - many people may think these standards are unreasonably high. Leaders are continually raising the bar and drive their teams to deliver high quality products, services, and processes. Leaders ensure that defects do not get sent down the line and that problems are fixed so they stay fixed.

A key insight in the age of AI is that the standard applies to the output, not to who created it or how. Your job isn't to understand every implementation detail. Your job is to ensure nothing ships that fails your quality bar - whether a human or an AI was involved.

The more your team can produce, the higher your quality bar must be. When your team could ship one feature per month, you could manually review everything. When they can ship ten, you can't - but customers expect all ten to work flawlessly. AI accelerates this paradox. What took two weeks now takes two days. What required a team of five now requires one person and AI. The volume of output explodes, but the customer's quality expectation doesn't just hold steady - it rises.

Amazon discovered this when "minimum viable product" thinking crept into launches. Jeff Wilke's response wasn't to slow down - it was to raise the bar: "Customers don't want MVP. It has to be lovable or they're not going to adopt it." He demanded "minimal lovable product" - the smallest thing customers would genuinely enjoy. In the AI era, this becomes your survival strategy. You can't slow AI down to match your review speed. You must build verification systems that match AI's production speed while maintaining your quality standard.

The shift from "what can AI do for me?" to "how do I enable high-quality output from my team - human and AI alike?" separates leaders from bottlenecks. This means asking: What context does this AI tool need to meet our standards? What examples should guide its output? What constraints must it understand? This isn't micromanagement - it's the same leadership you'd apply to a new hire. You wouldn't tell someone on day one to "go build authentication" without a codebase tour, architectural guidance, and examples of what good looks like. AI agents need the same investment. The difference is that the return on that investment compounds faster: fifteen minutes of context might save fifteen hours of debugging, and those savings grow as AI capabilities do.

Applied to AI outputs, this means shifting from "did I review this?" to "does this pass our verification systems?" Automated security scans, performance benchmarks, test coverage thresholds, and end-to-end validation become your quality gates. Human review focuses on what machines can't verify: Does the architecture support future extensions? Do the trade-offs align with business strategy? Does the user experience match our brand? With AI capabilities doubling roughly every seven months, this verification-over-inspection approach isn't optional - it's how you avoid becoming the constraint on your own team's productivity.
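To make "verification systems as quality gates" concrete, here is a minimal sketch of an automated gate check. The thresholds, field names, and report structure are illustrative assumptions, not a real CI tool's API - in practice these checks would be wired into your pipeline (coverage reports, benchmark runs, security scanners), and the same gate would apply to every change regardless of whether a human or an AI agent produced it.

```python
# Hypothetical quality-gate check: the standard is enforced by the system,
# not by a leader reading every line. All names and thresholds are examples.

MIN_COVERAGE = 0.85        # test coverage threshold
MAX_P95_LATENCY_MS = 250   # performance benchmark budget
MAX_CRITICAL_VULNS = 0     # security scan budget

def quality_gate_failures(report: dict) -> list[str]:
    """Return a list of gate failures; an empty list means the change may ship."""
    failures = []
    if report["coverage"] < MIN_COVERAGE:
        failures.append(
            f"coverage {report['coverage']:.0%} below {MIN_COVERAGE:.0%}"
        )
    if report["p95_latency_ms"] > MAX_P95_LATENCY_MS:
        failures.append(
            f"p95 latency {report['p95_latency_ms']}ms exceeds {MAX_P95_LATENCY_MS}ms"
        )
    if report["critical_vulns"] > MAX_CRITICAL_VULNS:
        failures.append(f"{report['critical_vulns']} critical vulnerabilities found")
    return failures

# The gate is source-agnostic: human-written and AI-generated changes
# are judged by the same bar.
failures = quality_gate_failures(
    {"coverage": 0.91, "p95_latency_ms": 180, "critical_vulns": 0}
)
print("ship" if not failures else failures)
```

Note what the gate deliberately does not check: architecture fit, strategic trade-offs, brand alignment. Those remain the targets of human review, which is exactly the division of labor described above.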

Here's what this exponential means: today, AI handles hour-long tasks. Next year, full days. The year after, entire weeks. Your current review process - reading every line, understanding every implementation - won't scale to that pace. The leaders who thrive will be those who built verification systems early, when AI was generating hours of work and they had time to experiment. By the time AI generates weeks of work, the patterns are set. If your pattern is manual inspection, you'll be the bottleneck. If your pattern is verification mechanisms with targeted human review, you'll multiply your team's leverage.

This doesn't mean lowering standards. Amazon's principle is "insist on the highest standards," not "insist on manual review of everything." The standard is non-negotiable: defects don't get sent down the line, and problems are fixed so they stay fixed. But how you enforce that standard must evolve. The question isn't "can I trust this AI output?"

The question is "have I built systems that ensure any output - from any source - meets our quality bar before it ships?"

Watch the following short interview with Mohit Aron to understand the importance of building Minimum Lovable Products, along with some great tips to help you achieve the highest standards.

Frequently Asked Questions

What does Insist on the Highest Standards mean for AI outputs?
The standard applies to the output, not who created it or how. Leaders must ensure nothing ships that fails their quality bar — whether produced by a human or AI. As AI accelerates production volume, leaders must build verification systems that match AI's speed while maintaining quality standards.
How should leaders maintain quality when AI increases output volume?
Leaders must shift from manual inspection to verification systems — automated security scans, performance benchmarks, test coverage thresholds, and end-to-end validation. Human review focuses on what machines cannot verify: architecture decisions, strategic trade-offs, and user experience alignment with the brand.
What is the difference between Minimum Viable Product and Minimum Lovable Product in the AI era?
Jeff Wilke pushed back against MVP thinking, demanding 'Minimum Lovable Product' — the smallest thing customers would genuinely enjoy. In the AI era, this becomes a survival strategy because AI makes it easy to produce 'good enough' output at scale, but customers expect quality to increase, not just volume.

Originally published in Think Big Newsletter #6 on Amir Elion's Think Big Newsletter.

Subscribe to Think Big Newsletter