
What Is Interpretability vs. Utility?


AI Terminology · Trust · RAG

This Week's Term: Interpretability vs. Utility - the tension between demanding a full understanding of how an AI system works and accepting that a system reliably produces useful results without complete transparency into its internal reasoning.

In the AlphaFold interview below, Nobel laureate John Jumper addresses a common demand: that we should only use AI systems we can perfectly understand. His response is refreshingly pragmatic. He points to the Romans, who built remarkable bridges and aqueducts without Newton's equations of gravity. They had intuitions, tested them, and built things that worked. Modern aircraft design works similarly - we use wind tunnels and simulations for specific geometries even though turbulence isn't fully understood at a theoretical level.

For business leaders, this reframes a critical question. Instead of asking "Can we explain exactly how this AI reaches its conclusions?" the more productive question is often "Can we reliably measure whether this AI produces useful, accurate results for our specific use case?"

This doesn't mean abandoning all scrutiny. Jumper emphasizes that scientists learned to use AlphaFold's confidence measures to know when outputs were trustworthy. The key is building appropriate validation - not demanding theoretical transparency as a prerequisite for any use.

The practical implication: don't let the pursuit of perfect interpretability block the adoption of tools that demonstrably work. Build measurement systems, define success criteria, validate outputs against real-world results - and accept that some effective tools will remain partially opaque. As Jumper puts it: "We don't have to get tied up in the philosophy. We can just build useful systems."
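To make "measure, don't demand full transparency" concrete, here is a minimal Python sketch of the kind of validation harness described above. It treats the model as a black box, uses the model's own confidence score to decide which outputs to trust (the pattern Jumper describes with AlphaFold's confidence measures), and reports accuracy on that confident subset against known ground truth. The `run_model` callable, the 0.8 threshold, and the toy example data are hypothetical placeholders, not anything from the article or from AlphaFold itself.

```python
def evaluate_with_confidence_gate(examples, run_model, threshold=0.8):
    """Validate a partially opaque model by measuring, not explaining.

    examples: list of (input, expected_output) pairs with known ground truth.
    run_model: callable returning (prediction, confidence) for one input.
    Returns coverage (share of cases the model is confident about) and
    accuracy on that confident subset.
    """
    confident, correct = 0, 0
    for x, expected in examples:
        prediction, confidence = run_model(x)
        if confidence < threshold:
            continue  # defer low-confidence cases to human review
        confident += 1
        if prediction == expected:
            correct += 1

    coverage = confident / len(examples) if examples else 0.0
    accuracy = correct / confident if confident else 0.0
    return coverage, accuracy


if __name__ == "__main__":
    # Toy stand-in for a real model: predicts a number's parity and
    # reports a made-up confidence score.
    def run_model(x):
        return ("even" if x % 2 == 0 else "odd", 0.9 if x < 100 else 0.5)

    examples = [(n, "even" if n % 2 == 0 else "odd") for n in range(150)]
    coverage, accuracy = evaluate_with_confidence_gate(examples, run_model)
    print(f"coverage={coverage:.2f}, accuracy on confident subset={accuracy:.2f}")
```

The design choice mirrors the argument of the post: success is defined by measurable accuracy on your own use case, with low-confidence outputs routed to human review, rather than by a theoretical account of how the model reached its answer.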

Originally published in Think Big Newsletter #11.
