
Intent Engineering: The Missing Layer in Enterprise AI

We've taught AI what to know. We haven't taught it what to want. That gap is why most companies still see no tangible value from AI - and the fix starts with a line long attributed to Peter Drucker.

AI Strategy, AI Agents, Enterprise AI, Intent Engineering, Business Value, AI Transformation, Organizational Culture

I keep having the same conversation with enterprise leaders. It goes something like this:

"We deployed the AI. It works. But it's not... helping."

The AI resolves tickets faster. It drafts documents quicker. It summarizes meetings in seconds. By every technical metric, it's performing. But the business isn't moving. Customer satisfaction is flat. Employee adoption stalls. The ROI case that looked great in the pilot quietly falls apart at scale.

Culture eats strategy for breakfast - a line long attributed to Peter Drucker. In 2026, culture is eating AI for breakfast too. And the organizations that figure this out first will leave everyone else behind.

Three Eras, Three Questions

We've moved through distinct phases in how humans work with AI, and naming them matters because it helps us understand where we're stuck.

Prompt engineering was the first era. It asked: "How do I talk to AI?" Individual, synchronous, session-based. You sit in front of a chat window, craft an instruction, iterate the output. This is where most people still are. It's valuable, but it's personal-scale, not organizational-scale.

Context engineering is the current era. It asks: "What does AI need to know?" This is the shift from crafting isolated instructions to building the entire information environment an AI system operates within - RAG pipelines, MCP servers, organizational knowledge bases. It's where most enterprise AI teams are focused right now.

Intent engineering is what comes next. It asks: "What does the organization need AI to want?"

That last question is the one almost nobody is building for yet. And it's the one that explains why so much AI investment is producing so little organizational value.

Culture Was Always the Hard Part

At Amazon, I learned something that has stuck with me through every role since: the leadership principles aren't wall decorations. They're an operating system. When you join Amazon, you don't just learn what the company does - you learn how the company thinks. Customer Obsession isn't a value statement. It's a decision-making framework that gets pressure-tested in every meeting, every document review, every hiring loop.

That's culture. Not what you say you value - what you actually do when trade-offs get hard.

Now think about what happens when you deploy an AI agent into an organization. Does it know your culture? Does it understand that in your company, customer experience always trumps short-term cost savings? Does it know that your engineering team's definition of "done" includes documentation, not just working code? Does it grasp that when your sales team says "we're flexible on pricing," they mean within specific boundaries that depend on deal size, customer segment, and strategic account status?

Of course it doesn't. It has a prompt. Maybe some context. But it has no understanding of the unwritten rules that actually drive your organization. And those unwritten rules are where the real work happens.

Working Backwards from Organizational Intent

Amazon's Working Backwards process starts every initiative with a press release and FAQ (PRFAQ) written from the customer's perspective. You write the end state first - what the customer experiences, why it matters - and then work backwards to figure out what needs to be built.

I think we need a similar approach for AI agents. Before deploying any agent into a workflow, start by writing the PRFAQ from the perspective of the people affected - the customer, the employee, the partner. What does the ideal interaction look like? What does the agent do when things get ambiguous? What trade-offs does it make, and in whose favor?

Most organizations skip this entirely. They start with the technology - "let's deploy an agent for customer service" - and work forward from capabilities. That's like Amazon starting with "let's build a feature because we can" instead of "let's solve a customer problem." The results are predictably mediocre.

A PRFAQ for an AI agent might read something like: "When a loyal customer contacts us about a billing dispute, our AI agent recognizes their history, understands the lifetime value at stake, and has the authority to resolve the issue generously - within defined boundaries - without escalation. The customer feels heard. The resolution takes under three minutes. And the decision aligns with our principle that long-term relationships matter more than short-term savings."

That's intent. It's specific. It encodes trade-offs. And it gives the agent something meaningful to optimize for.
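To make "encodes trade-offs" concrete, here is a minimal sketch of how that PRFAQ could become something an agent actually consults. Every name and threshold below is a hypothetical illustration, not a recommendation: the point is that "resolve generously, within defined boundaries, without escalation" turns into explicit parameters.

```python
# Hypothetical encoding of the billing-dispute PRFAQ above. All names and
# numbers are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class DisputeDecision:
    credit_limit: float   # max credit the agent may grant on its own authority
    escalate: bool        # True when the dispute exceeds that authority

def billing_dispute_policy(lifetime_value: float,
                           disputed_amount: float) -> DisputeDecision:
    """'Resolve generously, within defined boundaries, without escalation.'"""
    # Generosity scales with the relationship at stake: up to 5% of lifetime
    # value, capped at $500 - illustrative numbers only.
    credit_limit = min(0.05 * lifetime_value, 500.0)
    # Escalate only when the dispute exceeds the agent's defined authority.
    return DisputeDecision(credit_limit,
                           escalate=disputed_amount > credit_limit)

# A loyal customer ($20k lifetime value) disputing $80: resolve on the spot.
decision = billing_dispute_policy(lifetime_value=20_000, disputed_amount=80)
```

Notice what the sketch forces you to decide: how generosity scales, where authority ends, and whose interests the boundary protects. Those are business decisions, not engineering ones - which is exactly the point.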

The Pattern I Keep Seeing

Here's what happens without this discipline. A company deploys an AI agent for customer service. Resolution times drop dramatically. Cost per interaction plummets. The dashboards look fantastic. Then customers start complaining. Generic answers. No judgment. No ability to sense when a loyal customer needs three extra minutes instead of a 90-second resolution.

The agent was optimizing for exactly what it was told to optimize for. The problem is that what it was told wasn't what the organization actually needed.

A human agent with five years at the company knows this difference intuitively. She knows when to bend a policy, and when generosity, not efficiency, is the right move. She absorbed the real values - not the ones on the website, but the ones encoded in the decisions managers make every day, in the stories veterans tell new hires, in the unwritten rules about which metrics leadership actually cares about when push comes to shove.

The AI agent had a prompt. It had context. It did not have intent.

Teaching Agents Your Culture

Here's where it gets interesting - and where I think most of the industry is missing the point.

The real challenge of intent engineering isn't technical. It's cultural. You need to take the tacit knowledge that makes your organization actually work - the rituals, the decision patterns, the "how we do things here" - and make it explicit enough for an agent to act on.

At Amazon, the leadership principles serve as this kind of codified culture. Customer Obsession, Bias for Action, Have Backbone; Disagree and Commit, Think Big - these aren't just words. They're decision-making heuristics that every employee learns to apply. In a hiring loop, when two interviewers disagree, the leadership principles provide a shared framework for resolution. In a product review, when there's a tension between speed and quality, Insist on the Highest Standards gives you a clear direction.

Most organizations don't have anything this explicit. Their culture lives in the heads of experienced employees. In the way a senior account manager handles a tricky client. In the judgment call a support lead makes at 11pm on a Friday. In the institutional memory of how the company handled its last crisis.

When those employees leave, that knowledge walks out the door. When you deploy an AI agent, that knowledge was never in the room to begin with.

Intent engineering means doing something most organizations have never had to do: making your actual culture - not your aspirational culture, your actual one - machine-readable. That includes:

Your real decision hierarchies. When speed conflicts with quality, which wins? When policy conflicts with customer experience, who bends? These answers are different for every organization and often different for every situation. Write them down.

Your escalation instincts. A senior employee knows when something is above their pay grade. They know when to loop in leadership, when to make the call themselves, and when to ask for forgiveness later. An agent needs these boundaries defined explicitly.

Your relationship values. Some organizations genuinely prioritize long-term relationships over short-term revenue. Others say they do but don't. Your agents will reveal the truth very quickly - they'll optimize for whatever you actually encode, not what you wish you encoded.

Your risk tolerance. How much autonomy should an agent have before checking in? The answer depends on the stakes, the customer, the situation. Define the parameters, not just the rules.
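What might "machine-readable" look like in practice? Here is a hedged sketch of the four categories above expressed as data an agent can consult before acting. Every field name and value is a hypothetical illustration of the idea, not a standard format:

```python
# Illustrative "intent policy": culture expressed as data, not prose.
# All names, segments, and thresholds are assumptions for the sketch.
INTENT_POLICY = {
    # Real decision hierarchies: which value wins when two conflict.
    "decision_hierarchy": [
        ("customer_experience", "short_term_cost"),   # (winner, loser)
        ("quality", "speed"),
    ],
    # Escalation instincts: when the agent must loop in a human.
    "escalation": {
        "max_refund_without_approval": 250.0,
        "always_escalate_segments": {"strategic_account"},
    },
    # Relationship values: what the agent optimizes for when metrics conflict.
    "optimize_for": "lifetime_relationship",  # not "cost_per_interaction"
    # Risk tolerance: autonomy parameters, not just rules.
    "autonomy": {
        "low_stakes": "act",
        "medium_stakes": "act_then_report",
        "high_stakes": "ask_first",
    },
}

def must_escalate(refund: float, segment: str) -> bool:
    """Check a proposed action against the encoded escalation boundaries."""
    esc = INTENT_POLICY["escalation"]
    return (refund > esc["max_refund_without_approval"]
            or segment in esc["always_escalate_segments"])
```

The format matters far less than the exercise: filling in each field forces the conversations about trade-offs that most organizations have never had explicitly.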

Why PRFAQs Work for This

The reason I keep coming back to the PRFAQ format is that it forces you to think from the outside in. When you write a press release for a product that doesn't exist yet, you can't hide behind technical jargon or internal assumptions. You have to articulate value in human terms.

The same discipline applies to AI agents. When you write a PRFAQ for how an agent should handle a customer complaint, you're forced to make intent explicit. You're forced to answer: what matters most here? What does success look like from the customer's perspective? What trade-offs are we willing to make?

Try it this week. Pick one workflow where you've deployed or are planning to deploy AI. Write a one-page PRFAQ from the customer or employee perspective. Describe the ideal experience. Then look at what your agent is actually doing. The gap between those two documents is your intent gap.

The Race Has Changed

For three years, the AI race has been framed as an intelligence race. Who has the best model? Who tops the benchmarks? Who has the biggest context window?

That framing made sense when models were the bottleneck. Models are not the bottleneck today. The frontier models are all extraordinarily capable. The differences between them matter far less than the differences between organizations that give them clear, structured, goal-aligned intent and organizations that don't.

The company with a mediocre model and extraordinary organizational intent infrastructure will outperform the company with a frontier model and fragmented organizational knowledge. Every single time.

This shouldn't surprise us. It's the same lesson we learned with human organizations. The company with average employees and exceptional culture outperforms the company with brilliant employees and toxic culture. Drucker was right about that. He's right about AI too.

What Leaders Should Do This Week

This isn't a problem you solve with a single initiative. But you can start building the muscle now.

Write a PRFAQ for one AI agent. Pick your most customer-facing AI workflow. Write a one-page press release describing how it should work from the customer's perspective. Then compare it to reality.

Audit your unwritten rules. Spend 30 minutes with a senior employee in any function. Ask them: "What do you know about how we do things here that a new hire wouldn't know for six months?" Write it down. That list is your first intent engineering backlog.

Make one trade-off explicit. Every organization has unwritten rules about when to prioritize speed over quality, cost over experience, policy over judgment. Pick one and write it down in a way an agent could act on. You'll discover how hard this is - and how valuable.

Create the conversation between strategy and engineering. The people who understand organizational goals are rarely the people building agents. And the people building agents rarely think organizational alignment is their job. That gap is where intent fails. Bridge it.

The prompt engineering era asked: "How do I talk to AI?"

The context engineering era asks: "What does AI need to know?"

The intent engineering era asks the question that really matters: "What does our organization need AI to want?"

Culture ate strategy for breakfast. Now it needs to feed your agents lunch. The organizations that figure out how to encode their actual culture - their real decision-making DNA, not the version on the careers page - into their AI systems will build something their competitors can't easily replicate. Because copying a model is easy. Copying a culture is nearly impossible.

We've spent years building AI systems. 2026 is the year we learn to aim them.
