When we think about how AI will transform the workforce, we often focus on technology and tools. But the more fundamental shift is happening in how people think about AI - and nowhere is this more visible than on university campuses.
Anthropic recently brought together student leaders from LSE, Princeton, UC Berkeley, and Arizona State to discuss how AI is actually being used in education. The conversation revealed something important: the students entering your workforce in the next few years have already developed sophisticated frameworks for thinking about AI that many seasoned executives haven't yet articulated.
One student conducted a survey and found that over 90% of students are using AI in their day-to-day workflows - summarizing lectures, working through problem sets, getting feedback on assignments. This isn't a future prediction. It's the current state.
But the more interesting finding was how students describe the environment: "chaotic," "gray zone," "polarized." Universities are scrambling to create policies while students are already several iterations ahead in how they actually use these tools.
As one student put it: "We have the tools now, to be honest, to get through university without actually learning much. It's our responsibility as students to use this tool to achieve our own individual outcomes."
This framing - AI as a choice that reveals your intentions - is something every business leader should consider.
The students articulated something that applies directly to your teams. One observed that students typically have three objectives at university: to learn, to position themselves for a career, and to enjoy the social experience. Every student weights these differently.
AI usage, he argued, reveals those weightings. Students who want to learn use AI to reinforce understanding. Students who want to save time and focus elsewhere use AI to complete work on their behalf. Neither is inherently wrong - but the tool makes the choice visible.
The same dynamic plays out in organizations. When you introduce AI tools, you're not just changing workflows. You're revealing what people actually prioritize. Some will use AI to go deeper. Others will use it to go faster. The tool doesn't determine the outcome - the user's intention does.
When asked how they draw the line between AI as tool versus AI as crutch, the students converged on a consistent answer: Can you explain and defend what you produced?
One student framed it simply: "If I was in a room like this, and I can't explain or defend what I've built, even if someone asks a super critical question - I think that's the line where you don't really understand what's going on."
Another added: "Anything I create with AI, I should be able to give a lower-level and upper-level explanation."
This is a practical test that transfers directly to business contexts. If your team member used AI to produce an analysis but can't answer questions about the methodology, defend the conclusions, or explain the limitations - that's the line. The AI did the thinking. The human just moved the output.
Beyond using AI for coursework, these students are building with it. The accessibility barrier to creating software has collapsed. Students without computer science backgrounds are now comfortable in the terminal, building working prototypes in days.
Examples from the conversation include a tool that scans university data to find empty classrooms when the library is full, an app that alerts students when a seat opens in a popular course, lecture slide annotators that add context and definitions, and healthcare prototypes mixing computer vision with Claude's API.
One student captured the shift: "Non-technical students building this - unheard of. But these are the possibilities now."
This is the workforce arriving at your organization. They don't think of AI as a special tool for special projects. They think of it as how you get things done.
The students' lived experience suggests a framework for how organizations should think about AI adoption:
- AI reveals intentions: When you deploy AI tools, you'll see what people actually prioritize. Some will use it to learn more. Others will use it to do less. Neither the tool nor the policy determines the outcome - individual intention does.
- The ownership test applies everywhere: Can the person explain, defend, and build upon what they produced? If yes, AI was a tool. If no, AI was a crutch. This test works for cover letters, analyses, code, and strategic recommendations.
- Generic output is now a liability: AI slop is recognizable. Anything that reads like unedited AI output signals low effort and low ownership. The standard isn't "did you use AI?" It's "did you use it well?"
- The building barrier has collapsed: The next generation expects to build custom tools for their specific problems. Organizations that only offer pre-packaged software solutions may find themselves outpaced by employees who can create exactly what they need.
The next time you're hiring or developing early-career talent, ask this question: "Tell me about something you built or produced with AI. Walk me through how you used it and what you contributed."
The answer will tell you whether they've developed the judgment to use AI as a tool - or whether they're still treating it as a replacement for thinking.
Watch the full student conversation here: