

A generation of students is already running the experiment. The research catching up to it should change how leaders think about learning.
Written by Moe Yassin
The most consequential classroom experiment in a generation did not begin with a policy decision or a curriculum overhaul. It showed up as a chatbot, and most schools responded the way institutions usually respond to disruption: they tried to detect it and block it.
That response misses the point entirely.
The research is catching up to what is actually happening in classrooms, and the picture is more complicated and more important than the debate about cheating. AI is not corrupting education. It is exposing how much of education was already broken.
Hari Subramonyam, assistant professor at Stanford Graduate School of Education, has spent years studying what happens at the intersection of human cognition and AI. His assessment is clear.
"Many AI tools that we use today are designed to give you a polished, finished output rather than help people learn," he says. "When the student or learner is not actively participating in shaping the essay, a lot of the creative and critical decisions are made by the AI. And this is what is problematic."
The output looks like learning happened. It rarely means that it did.
A peer-reviewed study published in the Journal of Creativity tested this directly. Researchers at the University of South Carolina, Emerson College, and UC Berkeley asked 100 undergraduates to complete a divergent thinking exercise (the Alternative Uses Task), first without AI, then with ChatGPT. When students used AI, their scores went up across every dimension: fluency, flexibility, elaboration, and originality. On the surface, the data looked like a win.
But the qualitative data told a different story.
"It felt as though the bot was giving me an easy way out and not allowing me to think on my own as much," one student wrote. Another reflected: "After reading through the generated responses, I struggled to come up with additional uses on my own — which I had not experienced the first time." A third put it plainly: "The computer literally does everything for you."
The researchers found that AI is best suited to fluency and elaboration — generating many detailed ideas quickly. The dimensions that most define human creative capacity, flexibility and originality, are precisely where AI's shortcomings show. As one student noted: "An AI knows the direct uses for the item, not the creative uses." What AI cannot do is ask the question that unlocks originality: Could the paper clip be 200 feet tall and made out of foam rubber? That kind of thinking, the kind that questions the premise entirely, requires a human mind that has practiced questioning premises.
The researchers' conclusion: "Human creativity is needed to begin and end the creative act."
The deeper risk the study surfaces is not that AI lowers scores. It is that AI can lower creative confidence, particularly for students who are still developing their divergent thinking skills. When AI floods the field with ideas before a student's own thinking has had a chance to form, the result is cognitive fixation, not creative expansion.
The problem is timing.
Students are reaching for AI before productive struggle has had a chance to happen. That struggle is not an obstacle to learning. The friction of not immediately knowing, the discomfort of working through a hard problem: that is the learning. Shortcuts taken early enough do not save time. They hollow out the process entirely.
A student who never sits with a difficult question long enough to form their own instincts is not learning to think. They are learning to retrieve. In an exponential world, only one of those compounds.
University of Washington researchers watched this dynamic play out in real time with children. Their study observed 12 Seattle-area kids ages seven to thirteen using generative AI tools including ChatGPT and DALL-E for creative tasks. What they documented was alarming.
When the AI kept failing to meet their imaginative expectations, the kids stopped asking it to stretch. Over time, they began reshaping their own creative goals to match what the machine could produce. Lead researcher Michele Newman named the pattern directly: children were changing their creative vision to fit the AI, rather than finding ways to make the AI serve their original vision.
One child, trying to write a Star Wars story, turned to the researchers and said: "I want it to talk like Darth Vader. I want it to be customized." He knew what he wanted. The tool did not know how to give it to him. Without guidance, the next move was to want something smaller.
The researchers also surfaced something unexpected about the ethical instincts children bring to this: an 11-year-old, when asked how he would feel if his favorite book series had been written by AI, said it would "dismantle" the joy of reading for him. Children are already asking what it means to be an author. They are already asking whether something loses its authenticity when the human is removed. These questions go to authorship, ownership, and what we value as human work, and they are exactly the right ones to be asking.
The imagination did not disappear in these sessions. It began to operate within what the machine could handle. That is where the real risk lies: in contraction.
Coding is the most visible place where this tension has become impossible to ignore. A skill that once conferred serious competitive advantage is now accessible through AI in ways that fundamentally change its value. Nicolas Genest, an AI entrepreneur and CEO of tech education startup Codeboxx, has been coding since 1985. Today, he barely codes at all.
"AI codes better today than any software engineer you can think of," he says. "It's faster, it secures more coverage, it writes all the tests, it doesn't have a blind spot, and it doesn't get tired."
The employment data backs this up. Recent computer science graduates now face a 6.1 percent unemployment rate, higher than journalism, according to New York Federal Reserve analysis. Recruiting by top tech companies has fallen more than 50 percent since 2019. Over 150,000 tech employees were laid off in 2024 alone.
The question of whether to learn to code does not have a clean answer. But what rises in value when technical execution becomes commoditized is clear. MIT Media Lab's Mitch Resnick, creator of Scratch, puts it this way: what matters now is that young people "develop the most human of their abilities, creativity, curiosity, care, and collaboration." Harvard CS50 professor David Malan has built this principle into his own teaching. His AI-powered "Rubber Duck" teaching assistant is designed never to give students a direct answer. It uses the Socratic method: small hints, redirected questions, guided thinking. The tool is explicitly designed to keep the student inside the cognitive work.
Jal Mehta, professor at the Harvard Graduate School of Education, makes an argument that reframes everything above it.
AI did not create the crisis in education. It revealed one that was already present.
Most homework, he argues, is exactly what AI was built to do: read a passage, summarize it, fill in a worksheet, solve an algorithmic problem using the algorithm you were given. That is low-level mechanical thinking. Handing it to an AI is not laziness. It is a rational response to a system that was already asking students to behave like machines.
The arms race between AI detection software and AI humanizer software is not a technology problem. It is the logical conclusion of an education system that has long confused compliance with learning.
Mehta visited Crosstown High in Memphis, where students were building a massive mural for the nursing home across the street. They had met with residents, gathered design input, and worked in teams to transform a weed-covered wall into something both institutions could be proud of. That assignment cannot be outsourced to AI. Not because it requires technical skill, but because it requires presence, relationship, and original human judgment. Students showed Mehta the work with pride.
That is the distinction. If students are given meaningful, original, and authentic work that matters to them and can only come from them, they are less likely to pass off AI output as their own and more likely to show up with their own thinking.
The debate over AI in education has mostly been a debate about integrity. That framing is too small.
The real question is not whether AI belongs in education. It is whether the work you give students is worth doing without it. If the answer is no, no detection tool solves that.
Stanford's Subramonyam offers a practical path: don't ban AI tools. Insist on intentionality. Ask students what choices they made. Why did they prompt it that way? What would happen if they approached it differently? "Help kids use them more intentionally," he says, "like asking for help with brainstorming, or clarifying an idea, or getting feedback." Stay curious alongside them.
Mehta's prescription is broader: build better work. The three capacities that will define leadership in an AI world are originality and creativity, interpersonal judgment, and the ability to make decisions under uncertainty. None of those can be outsourced. All of them require friction to develop.
His colleague Marshall Ganz sends students into Harvard Square on the first day of class to organize something. Trial by fire. Then a semester of building the skills to make it work. Heads, hands, heart. Students think and strategize, discover their own courage, and develop the skills to act. When education draws those together, it becomes something AI cannot replace.
The futuremakers will not be the ones who learned to retrieve the fastest. They will be the ones who learned to think, to struggle, and to imagine beyond what the machine suggests.
Genest, who watched AI displace the very skill he built his career on, lands on what remains. "Knowledge is no longer an issue," he says. "Speed? No longer an issue. Accuracy? No longer an issue. We have all the tools to make our results reliable and trustable. What we inject as humans is consciousness."
That is the only capability worth designing an education system around. And it is the one no exponential technology can replicate.