The experiment
The study enrolled 54 participants, split into three groups: ChatGPT users, search engine users, and brain-only controls. They wrote essays across three sessions while researchers monitored their brain activity with electroencephalography (EEG). In a fourth session, some participants switched conditions: ChatGPT users had to write without AI, and brain-only users were given ChatGPT for the first time.
The researchers didn't just measure essay quality. They measured what was happening inside the brain while students wrote. Neural connectivity patterns, cognitive load, alpha and beta wave activity. They also tested whether students could recall and quote their own work afterward.
What they found
Brain-only participants showed the strongest, most distributed neural networks. Their brains were the most engaged — working hardest, forming the most connections. Search engine users fell in the middle. ChatGPT users showed the weakest connectivity of all three groups.
This wasn't a one-time snapshot. Over four months, ChatGPT users consistently underperformed at neural, linguistic, and behavioral levels.
Most ChatGPT users couldn't recall key points from their own essays. They couldn't accurately quote their own work. They wrote it, submitted it, and retained almost nothing.
Self-reported essay ownership was lowest in the ChatGPT group and highest in the brain-only group. Students who used AI to write felt less ownership over what they produced — and the EEG data confirmed it wasn't just a feeling. Their brains were less involved in the process.
The switch
The most revealing part of the study was session four, when some participants switched conditions.
ChatGPT users who switched to writing without AI showed reduced alpha and beta connectivity — their neural engagement patterns had already adapted to offloading. Going back to thinking without AI felt harder. Their brains had gotten used to doing less.
Meanwhile, brain-only users who were given ChatGPT for the first time showed a different pattern: higher memory recall and stronger brain activation, resembling the search engine group more than the habitual ChatGPT group. Their brains had been trained to engage, and that training persisted even when the AI was available.
The researchers call this "cognitive debt." Like financial debt: you borrow performance now — the AI makes your output look better today — at the cost of cognitive capacity tomorrow. The essay is better. The thinking is worse.
Why this matters for education
This study doesn't argue against AI. It argues that the design of how students interact with AI determines whether they get cognitive growth or cognitive debt.
A student who uses ChatGPT to skip the thinking gets cognitive debt — weakened neural connectivity, no recall, no ownership. A student who uses AI as a sparring partner — challenged, questioned, forced to defend their reasoning — gets cognitive growth. Same tool. Opposite outcomes. The difference is design.
The brain-only users who later received AI performed better than habitual AI users because they had already built cognitive capacity through struggle. The struggle was the training. Remove it and the capacity doesn't develop.
This is why banning AI in schools doesn't work — and why simply allowing it doesn't work either. What works is redesigning the experience so that AI strengthens thinking instead of replacing it. That's the design problem. That's what we work on.
Source: "Your Brain on ChatGPT: Neural and Behavioral Correlates of LLM-assisted Essay Writing" — MIT Media Lab. 54 participants, EEG-monitored, four sessions over four months.
media.mit.edu/publications/your-brain-on-chatgpt/