Amazon's hiring loop follows a specific rhythm. For any role, from entry level through senior, the hiring manager assigns each interviewer two Leadership Principles to explore with the candidate. For a Solutions Architect role, the panel might spread Dive Deep, Earn Trust, and Learn and Be Curious across the interviewers. For an Account Manager, it could be Bias for Action and Deliver Results. At the debrief, the panel compares notes to paint a fuller picture. A candidate can be exceptional on some LPs and thin on others, and the question is always whether this specific role needs the specific mix of exceptional this candidate offers.
I've thought a lot about that system, and specifically about three words inside the Hire and Develop the Best principle:
"Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others. We work on behalf of our people to invent mechanisms for development like Career Choice."
Recognize exceptional talent. Recognizing means you know what you're looking at. Exceptional means you can tell the difference between good and rare. Talent means you've named the specific capability you're hiring for.
We revisited this principle back in issue #15 through the question of onboarding AI as a team member. I want to ask a different question now, because the ground has shifted under our hiring loops. In 2026, a large portion of what most teams used to consider "exceptional talent" has quietly moved inside the AI layer. The question now is whether we still recognize exceptional the way we used to.
The 6-2 split
Howard Gardner's theory of Multiple Intelligences identifies eight kinds of human intelligence: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, naturalistic, interpersonal, and intrapersonal. For most of the theory's 40-year life, one of the stronger critiques was that we could only reliably measure two of them (linguistic and logical-mathematical), so the rest stayed conceptually important but operationally sidelined.
With the arrival of AI, something unexpected has become clear. Gardner himself, writing on his blog in the past two years, has conceded the first six to AI. Large language models handle linguistic work at expert level. They exceed humans on most logical-mathematical tasks. Generative tools produce musical composition, spatial design, and image generation at professional quality. Robotics and surgical systems are closing in on bodily-kinesthetic applications. Pattern classification gives us at least some aspects of the naturalistic.
What remains, Gardner insists firmly, are the two personal intelligences: interpersonal (understanding others, reading emotional nuance, building trust) and intrapersonal (self-knowledge, genuine perspective, a point of view that comes from lived experience). "Our social, ethical, and moral lives cannot and should not be consigned to any artificial entity, no matter how intelligent it appears," Gardner writes. Tom Hoerr, a longtime MI practitioner, puts it more directly: "Who you are is more important than what you know."
And yet the way we recognize exceptional talent hasn't caught up. When I work with engineering-oriented companies, I often see that the interview bar remains heavily technical and that performance reviews weight the six intelligences AI already handles. In most cases, promotion criteria reward depth in specialized knowledge more than breadth of judgment. A strong general thinker who can integrate across domains and hold a team together often ranks below a brilliant specialist whose work AI will increasingly be able to do.
The sideways E
The conventional model of a strong contributor is a T-shape, one deep spike of expertise with a thin layer of breadth across the top. The model I think fits better now is a sideways E. Broad generalist understanding across the horizontal, with three or four deeper vertical roots, each of which can be pulled on when a problem demands it. A sideways E sees connections across functions, knows enough in several adjacent domains to ask good questions, and goes deep where the situation requires.
This shape is what Gardner would call the synthesizing mind, which he says is "the most important kind of mind for the twenty-first century," citing Nobel Laureate Murray Gell-Mann. Synthesis is a hard nut for AI systems to crack because it requires values, taste, and lived experience alongside information. In a January 2025 interview Gardner put the shift plainly: leaders and teachers will spend "much less time in the weeds and much more time trying to put stuff together."
Recognizing exceptional, now
What counts as exceptional talent when AI covers six of eight?
Three things.
First, strength in the two intelligences AI cannot replicate. Read people well. Know yourself clearly. Hold both at a level your team cannot produce without you.
Second, sideways-E breadth with real roots. Integrative thinking that synthesizes across domains, grounded in three or four real expertises that pull their weight when needed. The synthesizer beats the specialist when AI collapses specialist work into a prompt.
Third, and this is where the Amazon principle extends beyond its original intention, the ability to recognize exceptional talent in AI agents, too. Each model has its own jagged capability profile (more on that in this issue's terminology section). Exceptional leaders are starting to evaluate agents the way an experienced Amazon interviewer evaluates a candidate against assigned LPs. Which agent is exceptional at what? Where does each one fail? How do you orchestrate a small team of them to produce one outcome?
Expertise still matters
Technical depth still matters. Gardner is not arguing that logical-mathematical intelligence becomes worthless, and neither am I. AI handles most of the execution, but the humans who understand the work well enough to steer it and catch where it's wrong are the ones who make it all work. Both technical excellence and the interpersonal and intrapersonal intelligences are needed; what has flipped is the balance. In 2020, the scarce thing was depth. In 2026, the scarce thing is what AI can't do.
One more thing Gardner said in that January 2025 interview is worth carrying into the action. "If I were to rewrite the book today it would clearly take advantage of AI." His point was that AI finally makes true individuation possible at scale. Developing each person according to their specific intelligence profile used to be a luxury reserved for the likes of Alexander the Great, tutored one-on-one by Aristotle. Now, with the right tooling, every leader can do a version of it for everyone on their team and for themselves, too. Recognize exceptional, then use AI to help each person develop the intelligences most their own.
Your action step
This week, look at your last three hires or promotions, and then at the one you're about to make. For each role, ask: have I defined what exceptional looks like across all eight intelligences, or only the six that AI now handles?
One more question, especially for individual contributor roles: does this person show managerial capability? Every IC in 2026 is about to become the manager of a small team of agents. They will need to set direction, distribute work, evaluate output, handle failure, and stay accountable, just at a new level of abstraction. The durable human skills for modern work are the interpersonal and intrapersonal intelligences by another name: judgment, taste, ownership, and coaching. Recognize them first. Then build your hiring and promotion around them.
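The management loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not a real orchestration framework; the agents are plain functions and every name is hypothetical. What it shows is where the human stays accountable: the IC defines the acceptance bar and handles failures, while the agents only execute.

```python
# Toy IC-as-manager loop: distribute work, evaluate output against a
# human-defined bar, retry on failure, escalate what never passes.

def spellcheck_agent(task: str) -> str:
    # Stand-in for a real agent; just fixes one common typo.
    return task.replace("teh", "the")

def run_team(tasks, agents, accept, max_retries=2):
    """Pair each task with an agent and manage the results.

    `accept` is the human's bar for good output, not the agent's.
    Tasks that fail every retry come back as None for escalation.
    """
    results = {}
    for task, agent in zip(tasks, agents):
        for _ in range(max_retries + 1):
            output = agent(task)
            if accept(output):          # evaluate output
                results[task] = output
                break
        else:
            results[task] = None        # handle failure: a person decides
    return results

done = run_team(
    tasks=["fix teh typo"],
    agents=[spellcheck_agent],
    accept=lambda text: "teh" not in text,
)
print(done)
```

Swap the toy functions for real agent calls and the structure is the same: setting direction is choosing the tasks, distributing work is the pairing, and accountability lives in the `accept` function the human writes.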
If you'd like to rethink your hiring and promotion criteria for the AI era, or want me to run a working session with your leadership team on what exceptional looks like now, I'd love to help.