When I review AI initiatives with business leaders, I see a concerning pattern. Teams are excited, budgets are allocated, tools are being evaluated. But when we ask "How exactly will this create value for your business?" we get vague answers: "We'll be more efficient," "It'll help with productivity," "Everyone's doing AI."
That's not a business case. That's a hope.
The data tells a stark story. According to IDC's 2024 research, organizations investing in generative AI see an average return of 3.7x per dollar spent. But AI leaders - those who approach it strategically - achieve 10.3x ROI. The gap between average and leader comes down to how clearly you define value creation before you start. Too many organizations rush into AI pilots without answering the fundamental question: What specific business outcome will this enable, and how will we measure it?
To address this, you can use a framework that forces clarity before commitment. Every AI initiative should answer: Which of the three value pathways are we pursuing, and what are we committing to deliver?
As the three-bucket framework I introduced in Issue #1 lays out, AI creates business value in exactly three ways. But each pathway requires different commitments, different measurements, and different stakeholder agreements.
Pathway 1: Productivity Value - freeing resources for higher-impact work
This pathway means using AI to boost efficiency so dramatically that you can redirect resources - budget, people, time - to other strategic priorities. The value isn't just "doing things faster." The value is what you do with the capacity you've freed.
IDC found that 43% of organizations report productivity applications delivering the highest ROI, and 92% of AI users are pursuing productivity use cases. But productivity gains mean nothing if you don't commit to where the freed resources will go.
When you define productivity value, you must answer:
How many hours per week will this save per person?
What will those people do instead?
Are we avoiding hiring (cost avoidance) or enabling expansion (growth)?
What's the measurable outcome six months after implementation?
The business case for productivity value should specify: This is the capacity we're freeing, this is where we're redirecting it, and this is the outcome we expect.
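To make that concrete, here is a minimal sketch of the arithmetic, using entirely hypothetical numbers - the hours saved, team size, working weeks, and loaded hourly cost are placeholders you would replace with your own figures:

```python
# Productivity-value sketch; every input below is a hypothetical placeholder.
hours_saved_per_person_per_week = 3    # assumed measurement from a pilot
people_affected = 40                   # assumed size of the affected team
working_weeks_per_year = 46            # assumed net of holidays and leave
loaded_hourly_cost = 85                # assumed fully loaded cost per hour, in dollars

annual_hours_freed = hours_saved_per_person_per_week * people_affected * working_weeks_per_year
fte_equivalent = annual_hours_freed / (working_weeks_per_year * 40)   # against a 40-hour week
capacity_value = annual_hours_freed * loaded_hourly_cost

print(f"Hours freed per year: {annual_hours_freed:,}")    # 5,520
print(f"FTE equivalent:       {fte_equivalent:.1f}")      # 3.0
print(f"Capacity value:       ${capacity_value:,.0f}")    # $469,200
```

The dollar figure isn't the point; the commitment is. Three FTE-equivalents of freed capacity mean nothing until you name where they go and check, six months later, whether they actually went there.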
Pathway 2: Revenue Value - monetizing AI-powered products or features
This pathway means AI capabilities become part of what you sell. You're not just using AI internally - you're creating new revenue streams by offering AI-enhanced products, features, or services that customers will pay for.
This is where the value calculation becomes more direct but also more complex. You need to answer:
What specific feature or capability will customers pay for?
How much incremental revenue per customer will this generate?
What's our pricing model - premium tier, usage-based, bundled?
What's our customer acquisition or expansion target?
Your initial numbers might prove wrong as the market evolves - that's fine. What matters is that you've defined the hypotheses you're testing. You know what success looks like. You can make go/no-go decisions based on evidence rather than enthusiasm.
Revenue value requires the hardest commitments because you're making promises about what customers will actually buy. But that discipline prevents the "build it and they'll come" trap that kills so many AI products.
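As an illustration, here is the same discipline sketched for revenue value, again with hypothetical inputs - the eligible customers, adoption rate, price uplift, and cost are assumptions to be tested, not forecasts:

```python
# Revenue-value sketch; every input below is a hypothesis, not a forecast.
customers_eligible = 1_200                 # assumed installed base that could adopt the feature
adoption_rate_year_one = 0.15              # assumed first-year adoption hypothesis
incremental_revenue_per_customer = 2_400   # assumed annual uplift from the premium tier
build_and_run_cost = 500_000               # assumed year-one development and operating cost

adopters = customers_eligible * adoption_rate_year_one
incremental_revenue = adopters * incremental_revenue_per_customer
simple_roi = (incremental_revenue - build_and_run_cost) / build_and_run_cost

print(f"Adopters:            {adopters:.0f}")               # 180
print(f"Incremental revenue: ${incremental_revenue:,.0f}")  # $432,000
print(f"Year-one simple ROI: {simple_roi:.0%}")             # -14%
```

Even a back-of-the-envelope sketch like this does its job: on these placeholder numbers, year one doesn't pencil out, so you know exactly which hypotheses - adoption, price, or cost - have to move, and what evidence will drive the go/no-go decision.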
Pathway 3: Disruption Value - penetrating new markets or transforming business models
This pathway means using AI to enter markets you couldn't serve before, or fundamentally changing how your business model works. This is the highest-risk, highest-reward pathway, and it requires the longest-term value definition.
When you pursue disruption value, you must answer:
What market or customer segment can we now serve that we couldn't before?
What's the total addressable market we're entering?
What's our investment horizon and tolerance for negative ROI during market development?
Disruption value requires stakeholder alignment that productivity and revenue value don't. You're asking for patience, for investment without immediate return, for permission to fail and iterate. Without explicit agreement on timeline and investment horizon, these initiatives die in Year 2 when someone asks "Where's the ROI?"
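One way to force that agreement is to put the horizon on paper before launch. The sketch below uses placeholder figures for annual investment and revenue over five years; the point is the shape of the curve, not the numbers:

```python
# Disruption-value sketch: cumulative position over a multi-year horizon.
# All figures are hypothetical placeholders for the stakeholder conversation.
annual_investment = [1_500_000, 1_200_000, 800_000, 600_000, 600_000]  # years 1-5
annual_revenue    = [0, 300_000, 1_100_000, 2_500_000, 4_200_000]      # years 1-5

cumulative = 0
for year, (spend, revenue) in enumerate(zip(annual_investment, annual_revenue), start=1):
    cumulative += revenue - spend
    print(f"Year {year}: net {revenue - spend:+,}, cumulative {cumulative:+,}")
```

On these placeholder numbers, the cumulative position is still deeply negative at the end of Year 2 - exactly when "Where's the ROI?" gets asked. Agreeing on the full horizon up front is what keeps that question from ending the initiative prematurely.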
The difference between the average 3.7x ROI and the leaders' 10.3x isn't random. Leaders define value pathways upfront. They commit to specific outcomes. They align stakeholders on what success means before spending significant resources.
Organizations that skip this step face three predictable problems:
First, they can't make good tool or vendor decisions because they don't know what they're optimizing for. A tool perfect for productivity value might be wrong for revenue value.
Second, they can't tell if the initiative is working. Six months in, different stakeholders have different expectations. Engineering thinks they're succeeding (the technology works!), finance thinks they're failing (where's the cost savings?), and the product team is frustrated (why aren't customers using this?).
Third, they waste resources on pilots that were never designed to scale. A productivity pilot that saves 30 minutes per person per week sounds great - until you realize you have no plan to scale it beyond the pilot team or redeploy the saved capacity.
The research shows this clearly: organizations achieve value realization within 13 months on average, but only when they know what "value" means from day one.
Before approving any AI initiative, insist on answers to these questions:
Value Pathway: Which of the three pathways are we pursuing?
Specific Commitment: What exact outcome are we targeting? (Hours saved and reallocated, revenue per customer, market entry timeline)
Measurement Plan: What metrics will tell us if this is working? How often will we check?
Resource Requirement: What's the total investment (money, people, time) and over what period?
Success Criteria: What would make us expand this? What would make us shut it down?
Stakeholder Alignment: Who needs to agree on these definitions, and have they?
If you can't answer these questions, you're not ready to start. And that's okay - better to delay one month for clarity than to spend six months discovering you're building the wrong thing.
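If it helps to make that gate concrete, here is a minimal sketch of the six answers captured as a single record that isn't considered ready while any field is blank; the field names are illustrative, not a standard template:

```python
# Approval-gate sketch for an AI business case; field names are illustrative.
from dataclasses import dataclass, fields

@dataclass
class AIBusinessCase:
    value_pathway: str          # "productivity", "revenue", or "disruption"
    specific_commitment: str    # e.g., hours saved and reallocated, revenue per customer
    measurement_plan: str       # metrics and review cadence
    resource_requirement: str   # money, people, time, and over what period
    success_criteria: str       # what triggers expansion, what triggers shutdown
    stakeholder_alignment: str  # who has agreed to these definitions

    def ready_to_start(self) -> bool:
        # Ready only when every question has a real answer, not a blank.
        return all(str(getattr(self, f.name)).strip() for f in fields(self))
```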
The best AI strategy isn't about having the most advanced technology. It's about having the clearest answer to: "How exactly will this create value, and how will we know?"