Most organizations I work with aren't lacking AI initiatives. They're running pilots, deploying copilots, building chatbots. What they're lacking is a clear line from all that activity to measurable business value.
The symptom is familiar: a dozen teams doing interesting AI work, impressive demos at all-hands meetings, but when the CFO asks "what's the return?" the room goes quiet.
The problem isn't that AI doesn't deliver value. It's that most organizations are measuring the wrong things, solving the wrong problems, and running experiments in isolation. Three shifts can change this.
Shift 1: Problem Economics
The most common question I hear when organizations start with AI is: "What can this tool do?" It's the wrong question. The right question is: "What does this problem cost?"
Before you apply any AI solution, you need to quantify the expense of the problem you're solving. Not in vague terms like "it would be nice to automate this" but in hard numbers.
Here's an example. A sales team was spending significant time drafting proposals. When they measured it, each proposal took about 4 hours of a senior salesperson's time. With roughly 200 proposals per quarter, that's 800 hours per quarter - more than a full-time headcount devoted just to proposal writing.
An AI-assisted drafting tool cut that time roughly in half. Not a flashy demo. Not a revolutionary reimagining of sales. Just 400 hours returned to actual selling. The ROI was obvious because they started with the problem's cost, not the tool's capability.
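The arithmetic behind this example can be sketched as a back-of-the-envelope calculation. The hours and proposal counts are the figures above; the loaded hourly rate is an assumed placeholder you would replace with your own numbers:

```python
# Back-of-the-envelope problem economics for the proposal example.
# Hours and volumes come from the example; the hourly rate is an assumption.
HOURS_PER_PROPOSAL = 4
PROPOSALS_PER_QUARTER = 200
LOADED_HOURLY_RATE = 100        # assumed cost of a senior salesperson's hour

quarterly_hours = HOURS_PER_PROPOSAL * PROPOSALS_PER_QUARTER   # 800 hours
problem_cost = quarterly_hours * LOADED_HOURLY_RATE            # cost of the problem

TIME_SAVED_FRACTION = 0.5       # the drafting tool roughly halved the time
hours_returned = quarterly_hours * TIME_SAVED_FRACTION         # 400 hours
gross_value = hours_returned * LOADED_HOURLY_RATE              # value per quarter

print(f"Problem cost:   ${problem_cost:,}/quarter")
print(f"Hours returned: {hours_returned:.0f}")
print(f"Gross value:    ${gross_value:,.0f}/quarter")
```

The point isn't the specific numbers - it's that every quantity in the ROI conversation is defined before anyone mentions a tool.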
When you start with problem economics, something interesting happens: the AI solution becomes almost boring. No one argues about whether GPT or Claude is better when the question is simply "which one saves us more of those 800 hours?" The problem anchors the decision.
And here's why this matters even more: problem-anchored strategies survive model changes. When the next model drops, you don't panic. You ask: does this solve our 800-hour problem better? If yes, switch. If no, stay. Your strategy remains durable because it was never about the tool.
Shift 2: Connected Bets
Most organizations run AI pilots in isolation. The marketing team builds a content generator. Sales builds a lead scorer. Customer support builds a chatbot. Each team learns something, but none of that learning transfers.
The shift is from isolated pilots to portfolio-level infrastructure. Instead of each team building their own evaluation framework, their own prompt libraries, their own deployment pipelines - build shared foundations that let every team compound their learning.
When one team discovers that structured prompts with specific output formats work better than open-ended instructions, that insight should flow to every other team within days. When another team figures out that their AI works better with domain-specific examples, that pattern should be available as a template.
This isn't about centralizing AI - it's about connecting it. Each team still owns their use cases. But the learning infrastructure is shared. The evaluation framework is shared. The prompt patterns are shared.
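What "shared prompt patterns" might look like in practice: a minimal sketch of a pattern registry, assuming a hypothetical design where teams register templates they've validated and other teams reuse them. The class and field names here are illustrative, not a real library:

```python
# Minimal sketch of a shared prompt-pattern registry (hypothetical design).
# Teams register patterns they've validated; other teams reuse them as templates.
from dataclasses import dataclass


@dataclass
class PromptPattern:
    name: str
    template: str       # uses {placeholders} each team fills in for their use case
    owner_team: str
    notes: str = ""     # what the owning team learned about when this works


class PatternRegistry:
    def __init__(self) -> None:
        self._patterns: dict[str, PromptPattern] = {}

    def register(self, pattern: PromptPattern) -> None:
        self._patterns[pattern.name] = pattern

    def render(self, name: str, **kwargs: str) -> str:
        # Fill the shared template with this team's specifics.
        return self._patterns[name].template.format(**kwargs)


registry = PatternRegistry()
registry.register(PromptPattern(
    name="structured_output",
    template="Task: {task}\nRespond only as JSON with keys: {keys}",
    owner_team="support",
    notes="Structured output formats beat open-ended instructions here.",
))

# A different team reuses the insight without rediscovering it:
prompt = registry.render(
    "structured_output",
    task="Summarize this ticket",
    keys="summary, sentiment, next_action",
)
```

Even something this simple changes the dynamic: the second team starts from a validated pattern instead of a blank page.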
The compounding effect is significant. Team A's insight becomes Team B's starting point. Team B's improvement feeds back to Team A. In my experience, within a quarter connected teams are moving 3-5x faster than isolated ones - not because they have better tools, but because they're not starting from scratch every time.
Shift 3: Value Metrics
Here's a question that reveals whether you're measuring activity or value: "What would we lose if we turned this off tomorrow?"
Most AI dashboards track adoption metrics. How many users logged in. How many documents were generated. How many conversations the chatbot handled. These are activity metrics. They tell you people are using the tool. They tell you nothing about whether the tool is creating value.
Value metrics look different:
- Not "how many proposals did AI draft?" but "how much faster are proposals closing?"
- Not "how many support tickets did the chatbot handle?" but "did customer satisfaction improve?"
- Not "how many users adopted the tool?" but "what business outcome changed?"
The hardest part of this shift is letting go of vanity metrics. High adoption numbers feel good in a quarterly review. But a tool with 90% adoption and zero impact on business outcomes is worse than a tool with 10% adoption that transformed a critical workflow.
Apply the "turn it off" test to your current AI initiatives. If turning off a tool would cause genuine business pain - lost revenue, slower cycles, degraded customer experience - it's creating value. If turning it off would just mean people go back to how they worked before with minimal impact, it's creating activity.
The connection between these three shifts
These shifts reinforce each other. Problem economics tells you where to focus. Connected bets ensure you learn faster across teams. Value metrics tell you whether you're actually winning.
And they connect directly to the principle of commitment I discussed in this issue's leadership section. When your strategy is anchored to problems (not tools), connected across teams (not siloed), and measured by outcomes (not activity) - you can commit with conviction. Because your strategy doesn't break when the next model drops or the next competitor announces their AI initiative.
Your action step
Pick one AI initiative in your organization. Answer three questions:
- Problem Economics: What is the dollar cost of the problem this initiative solves? If you can't quantify it, you can't measure ROI.
- Connected Bets: What has this team learned that could benefit other teams? Is there a mechanism for that transfer?
- Value Metrics: What would you lose if you turned this off tomorrow? If the answer is "not much," it's time to either refocus or reallocate.
One initiative, three questions, one page. That's where value-driven AI starts.