Blog

Team Tenets: From Leadership Principles to Practical Mechanisms

Leadership principles inspire direction, but tenets resolve the real tradeoffs your AI team faces every day. Here's how to write team tenets that accelerate decisions and align autonomous systems.

Leadership · AI Strategy · Amazon · AWS · AI Governance · AI Agents

I am working with a client as part of their AI core team. One of the first things we did together was write our team's tenets — and I want to spend some time discussing why this is a critical component of moving from leadership principles to practical team-level mechanisms.

The team needs to align on questions that don't have obvious answers: How fast should we experiment versus how carefully should we validate? When do we build AI capabilities in-house versus buying them off the shelf? Should we focus on productivity use cases first, or push toward value creation in products? How do we balance building scalable AI foundations with delivering specific solutions that stakeholders need now? What's our role in discovering use cases across the organization versus enabling others to do so? And does all that work in the specific organizational environment and structure we operate in?

None of these have a single right answer. They're tradeoffs — and that's exactly what tenets are designed to resolve.

What is a tenet?

A tenet is a belief that accelerates decision-making. It's a mechanism I learned during my five years at AWS, where every team uses tenets as a core operating tool. Where a mission says what the team does, tenets say how the team approaches problems and deals with conflicting priorities. They're explicit statements that declare a team cares more about one thing than another. For example: "We prioritize speed of experimentation over comprehensive risk analysis for two-way door decisions." "We build for scale even when solving a specific use case." "We focus on use case discovery across the organization, not just AI in our own projects."

Tenets serve two key purposes. First, they force the team members themselves to discuss and align on how they should be doing things — what matters most when priorities conflict. This is also very helpful when new team members join. Second, they help other teams understand what the AI team does, how it works, and what it believes. In a domain as fast-moving as AI, where every department has opinions about what the AI team should prioritize, that external clarity is just as important as internal alignment.

A key phrase that always accompanied Amazon's tenets was: "unless you know better ones." Tenets are not fixed laws. They're the team's current best thinking about how to operate — designed to evolve as conditions change.

Why tenets matter more in the age of AI

Three things have changed that make tenets an especially important tool in the age of AI:

Decision speed has accelerated. AI systems can acquire, process, and analyze data without the human speed constraint. Your team can now generate insights, draft proposals, and model scenarios in minutes instead of weeks. But speed without clarity doesn't create strategy — it amplifies whichever principles are embedded in your systems, whether those principles are intentional or accidental.

Decision scope has expanded. Autonomous AI agents can now execute multi-step processes independently, making decisions that affect downstream workflows, customer experience, and organizational risk. When your AI agent decides how to respond to a customer complaint or prioritize a support queue, it's applying someone's values. The question is: whose? Explicitly stating your tenets can help those agents make better-aligned decisions.

The governance gap is real. According to recent research, 41% of organizations are already using agentic AI in daily operations, but only 27% say their governance frameworks are mature enough to manage those systems. That's a significant gap — and it's exactly where tenets become essential.

Without explicit tenets, teams default to implicit assumptions. And implicit assumptions don't scale. When one person makes a judgment call, that's experience. When an AI agent makes a thousand judgment calls a day based on unwritten norms, that's a risk.

Key areas where AI teams need explicit tenet positions

Based on my experience writing tenets with this AI core team — and the patterns I see across organizations — here are the key areas where AI teams need explicit tenet positions:

Speed of experimentation versus safety. AI moves fast. Your organization's risk tolerance may not. A tenet that explicitly states which way your team leans — and under what conditions — prevents every initiative from becoming a debate about guardrails.

Build versus buy. The AI tool landscape changes monthly. When do you invest in building proprietary capabilities, and when do you leverage what's already available? "We buy for commodity capabilities and build where differentiation or data privacy requires it" is a tenet that saves weeks of circular discussion.

Foundations versus solutions. There's constant pressure to deliver specific use cases for stakeholders who need results now. But without scalable foundations — data infrastructure, evaluation frameworks, governance processes, robust and reusable components — you end up rebuilding from scratch every time. "We build for scale even when solving a specific use case" is a tenet that protects future velocity.

Areas of value focus. Not every AI opportunity deserves equal investment. Your team needs a tenet about where it focuses — productivity improvements, value creation in existing products, disruption of business models — and where it intentionally does not. This prevents the team from becoming a service desk that says yes to every request.

Operating model and organizational role. Does your AI team discover use cases across the organization, or does it embed AI into products? Does it consult, build, or enable? "We contribute to use case discovery across the organization and to AI in our products" is a different operating model than "We build what stakeholders request." Making this explicit shapes every hire, every sprint, and every conversation with leadership.

Writing tenets that work

Creating tenets doesn't mean creating rules that restrict AI use. That's the wrong instinct. Good tenets don't slow teams down — they speed them up by removing ambiguity. The trap is writing tenets that are too abstract to act on ("We value quality") or too rigid to adapt ("AI must never make customer-facing decisions"). The best tenets sit in the middle: specific enough to break a tie, flexible enough to evolve.

As an example of the specificity I mean, here is one tenet we wrote down regarding our human-AI approach:

"We design AI systems that amplify human capabilities rather than replace human judgment. We maintain meaningful human oversight and safeguards in our AI workflows, especially for decisions that impact customers, compliance, or strategic outcomes. Full automation is the exception, not the rule, until we feel ready for more."
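A tenet like this can stay aspirational, or it can be encoded directly into agent workflows. Here is a minimal sketch of what that might look like: a gate that decides whether an agent action needs human review before executing. All names and domain labels here are hypothetical, purely for illustration, not a real API.

```python
# Hypothetical sketch: the "full automation is the exception" tenet as a
# gate in an agent workflow. Domain labels are illustrative, not a real API.

# Decisions that impact customers, compliance, or strategic outcomes
# always require human oversight, per the tenet above.
SENSITIVE_DOMAINS = {"customer", "compliance", "strategy"}

def requires_human_review(action_domain: str, automation_approved: bool) -> bool:
    """Return True when the tenet requires a human to review this action.

    An action runs unattended only if its domain is non-sensitive AND the
    team has explicitly approved automation for it; oversight is the default.
    """
    if action_domain in SENSITIVE_DOMAINS:
        return True  # sensitive decisions are always reviewed
    return not automation_approved  # default to oversight unless opted in

# A customer-facing action is reviewed even when automation is approved:
requires_human_review("customer", automation_approved=True)       # True
# An unapproved internal action falls back to the oversight default:
requires_human_review("internal-report", automation_approved=False)  # True
# Only explicitly approved, non-sensitive actions run unattended:
requires_human_review("internal-report", automation_approved=True)   # False
```

The point of a sketch like this is the tie-breaking logic, not the code: the tenet names which direction the system defaults to when nobody is watching.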

Your action step

This week, gather your team for a 30-minute conversation. Pick one of the tenet areas above — speed versus safety, build versus buy, foundations versus solutions — and ask: "Where does our team stand on this, and what principle should break the tie when we disagree?" Write down what emerges. You don't need seven polished tenets on day one. You need one honest conversation about what your team values more than something else. That's where tenets start — and the conversation itself will surface misalignments you didn't know existed.

Frequently Asked Questions

What are team tenets and how do they differ from leadership principles?
Team tenets are explicit statements that declare a team cares more about one thing than another, designed to accelerate decision-making when priorities conflict. While leadership principles set organizational direction, tenets translate those principles into practical, team-level mechanisms for resolving specific tradeoffs.
Why are tenets especially important for AI teams?
AI has accelerated decision speed, expanded decision scope through autonomous agents, and created a governance gap — 41% of organizations use agentic AI daily but only 27% have mature governance frameworks. Tenets make implicit assumptions explicit, which is critical when AI agents make thousands of judgment calls a day based on your team's values.
How do you write effective team tenets?
Good tenets sit between too abstract ('We value quality') and too rigid ('AI must never make customer-facing decisions'). They should be specific enough to break a tie when priorities conflict, flexible enough to evolve, and always accompanied by the phrase 'unless you know better ones' to signal they represent current best thinking, not fixed laws.

Originally published in Think Big Newsletter #22 on Amir Elion's Think Big Newsletter.

Subscribe to Think Big Newsletter