I've been working with a customer recently on a product innovation engagement. The brief is simple to say and hard to do: look at where the industry could be in 18 to 36 months, and figure out how to prepare for that future, or, where we can, shape it.
As with many of these engagements in 2026, the hardest part is restraint. Everyone around the table has a strong opinion about where AI will take the industry. Several of them have read the same three LinkedIn think-pieces this week. Several others are repeating numbers from vendor decks, numbers with decimal places that imply more certainty than the authors ever had.
When I start an engagement like that, I pause. Then I ask a question: if you're right about your prediction, what does the industry look like 24 months from now? Then the follow-up: if you're wrong, not missed-by-a-quarter wrong but fundamentally wrong, what does it look like then? Most leaders haven't been asked that question in a long time.
The Amazon Leadership Principle behind that pause is Are Right, A Lot:
"Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs."
I covered this principle back in Issue #4 through the lens of AI sycophancy, the everyday problem of models that agree with whatever you propose. That's still the daily form of the trap. But in 2026 the principle faces a second, harder test, and it's the one this post is about.
Why does this matter more now?
Because as most of us use it, AI is a single-scenario machine by default. Ask an LLM to forecast your industry and it will give you one confident, coherent, well-written answer that reflects its training distribution. Few LLMs are trained to hand you four alternative futures and ask which one you want to stress-test. The default output is plausible, not calibrated.
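One practical counter, if you do bring a model into the room, is to refuse the single-answer default in the prompt itself. Here's a minimal sketch in Python; the axes, wording, and scenario labels are my own assumptions, and the template is vendor-neutral, so it works with whatever chat model you already use.

```python
# A prompt template that refuses the single-scenario default.
# Purely illustrative: the axes and wording are assumptions, not a
# recommendation from the podcast or from Amazon.

AXES = [
    "speed of adoption",
    "regulatory response",
    "capability plateau",
]

def scenario_prompt(industry: str, horizon_months: int = 24) -> str:
    """Build a prompt that demands four scenarios instead of one forecast."""
    axis_lines = "\n".join(
        f"- Scenario {chr(66 + i)}: push {axis} to an extreme"
        for i, axis in enumerate(AXES)
    )
    return (
        f"Do not give me a single forecast for {industry}.\n"
        f"Write Scenario A as the consensus view at {horizon_months} months, "
        f"then three genuinely different alternatives:\n{axis_lines}\n"
        "For each scenario, state what would have to be true for it to happen."
    )

print(scenario_prompt("enterprise software"))
```

The output is still only as good as the model, but the shape of the question changes: you get four futures to argue with instead of one answer to agree with.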
Meanwhile, the authors of the LinkedIn think-pieces and the vendor decks are running into the same structural problem that makes human experts bad at predicting technology adoption. They over-index on early adopters. They generalize the five percent of teams who live inside agentic coding tools into "the future of software," and then forget about the ninety-five percent still copy-pasting into a chat window.
Teresa Torres, talking to Petra Wille on the All Things Product podcast in April 2026, named this directly. AI headlines are everywhere, and many of them claim to know exactly what's coming next. Her counter is the one method that holds up when the future is genuinely hard to see: scenario planning. It's also an operational mechanism behind Are Right, A Lot.
The AI scenario planning framework
Torres and Wille lay out a simple process. It fits inside a single workshop.
1. Treat every confident prediction as Scenario A, not the answer. The moment someone in the room says "here's what will happen," write it down as one possible future, not THE future.
2. Ask what else could happen. Force at least two or three alternative scenarios onto the whiteboard. Push them along different axes, not four shades of the same story. Speed of adoption. Regulatory response. Capability plateau. Labour economics. Consolidation dynamics.
3. Push each scenario to an extreme. If Scenario B is fully true in 24 months, which parts of your current work become trivial, which business models break, and which customer segments stop existing in recognisable form?
4. Extract the underlying insight, not the prediction. If "half of our work is done by agents within 18 months" is the scenario, the insight isn't the 18 months. It's which half, and what happens to the other half.
5. Test your plan against all scenarios. Which decision survives in most of the futures? That's your plan, at least to begin with. A decision that only survives in Scenario A is a bet dressed up as strategy. (There's a small sketch of this survival check right after this list.)
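To make step 5 concrete, here's a minimal sketch in Python. Everything in it is a hypothetical placeholder: the scenarios, the candidate decisions, and the 75 percent survival threshold are assumptions. In a real workshop the room fills in the table; the code only does the tallying.

```python
# Hypothetical scenario labels; in practice each is a written-out future
# pushed along a different axis (adoption speed, regulation, plateau, ...).
SCENARIOS = ["A", "B", "C", "D"]

# For each candidate decision, the set of scenarios it still makes
# sense in. This table is the workshop's output, filled in by the room.
SURVIVES = {
    "build on one vendor's agent stack": {"A"},
    "invest in evals and data quality": {"A", "B", "C", "D"},
    "wait six months and watch": {"B", "C"},
}

def robust_decisions(survives, scenarios, threshold=0.75):
    """Return decisions that survive in at least `threshold` of the futures."""
    return [
        decision
        for decision, alive in survives.items()
        if len(alive & set(scenarios)) / len(scenarios) >= threshold
    ]

print(robust_decisions(SURVIVES, SCENARIOS))
# -> ['invest in evals and data quality']
```

The arithmetic is trivial on purpose. The value is in writing the table, because it forces the room to say, scenario by scenario, why each option lives or dies.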
The reason this maps so cleanly onto Are Right, A Lot goes back to the principle's three clauses. Strong judgment and good instincts. Seek diverse perspectives. Work to disconfirm your beliefs. Scenario planning is how you operationalize each of those clauses when no one has lived through the future you need perspectives on. The scenarios themselves become the diverse perspectives.
Where this connects to the rest of this issue
The framework section of this issue extends this thinking into a Working Backwards and prototype-in-a-week workflow. Scenarios surface what you should bet on. Today Statements and PRFAQs name the bet precisely. Prototypes turn the bet into evidence inside a week. The discipline is the same all the way through: do not let one confident voice in the room write the future for you.
Your action step
Pick one AI-related bet your team is currently arguing about. Give yourselves thirty minutes. Write Scenario A, the confident case being made. Then write Scenarios B, C, and D, each pushed along a genuinely different axis and taken to an extreme. Ask one question: which decision survives across most of them?
If the answer is "our current plan," you can now defend it with reasoning you didn't have an hour ago. If the answer is "none of them," you have better information than when you started, and the conversation can finally begin in the right place.
If you'd like help running a scenario planning workshop with your leadership team, or want me to run a working session on AI strategy under uncertainty, I'd love to help.
Sources
- Teresa Torres and Petra Wille, All Things Product podcast, April 2026
- Amazon Leadership Principles, Are Right, A Lot
- Think Big Newsletter #4 on AI sycophancy and the daily form of the trap