This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1–3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help of human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
I didn’t read the whole thing, only skimmed the protocol. All I spotted was
“participants were instructed to pick a topic among the proposed prompts, and then to produce an essay based on the topic’s assignment within a 20 minutes time limit. Depending on the participant’s group assignment, the participants received additional instructions to follow: those in the LLM group (Group 1) were restricted to using only ChatGPT, and explicitly prohibited from visiting any websites or other LLM bots. The ChatGPT account was provided to them. They were instructed not to change any settings or delete any conversations.”
which I don’t interpret as no editing. Can you please share where you found that out?
Lol, oops, I got poo brain right now. I inferred they couldn’t edit because the methodology doesn’t say whether revisions were allowed.
What is clear is that they weren’t permitted to edit the prompt or add personalization details, which seems to imply the researchers weren’t interested in understanding how a participant might use it in a real setting; just passive output. This alone undermines the premise.
This makes it hard to assess whether the observed cognitive deficits were due to LLM assistance, or to the method by which it was applied.
The extent of our understanding of the methodology is that they couldn’t delete chats. If participants were only permitted a one-shot generation per prompt, then there’s something wrong.
But just as concerning is the fact that it isn’t explicitly stated.