Do You Really Learn When You Use AI? What MIT Researchers Found

Your brain works differently when you use generative AI for a task than when you use your brain alone. Namely, you're less likely to remember what you did. That's the somewhat intuitive finding of an MIT study that examined how people think when writing an essay -- one of the first scientific studies of how using gen AI affects us.
The study, a preprint that has not yet been peer-reviewed, is quite small (54 participants) and preliminary, but it points to the need for more research on how tools like OpenAI's ChatGPT affect how our brains work. OpenAI didn't immediately respond to a request for comment on the research. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The results show a significant difference in what happens in your brain and with your memory when you complete a task using an AI tool rather than your brain alone. But don't read too much into those differences: This is just a snapshot of brain activity in the moment, not long-term evidence of changes in how your brain works, the researchers said.
"We want to try to give some first steps in this direction and also to encourage others to ask the question," said Nataliya Kosmyna, an MIT researcher and the study's lead author.
The rise of AI tools and chatbots is quickly changing how we work, search for information and write. It has all happened so fast that it's easy to forget that ChatGPT first emerged as a popular tool just a few years ago, at the end of 2022.
Here's a look at what the MIT study found about what happened in the brains of ChatGPT users, and what future studies might tell us.
This is your brain on ChatGPT
The MIT researchers split their 54 participants into three groups and asked them to write essays during separate sessions over several weeks. One group had access to ChatGPT, another was allowed to use a standard search engine (Google), and the third had neither of those tools, just their own brains. The researchers analyzed the texts the participants produced, interviewed the subjects immediately after they wrote the essays, and recorded the participants' brain activity using electroencephalography, or EEG.
An analysis of the language used in the essays found that those in the brain-only group wrote more distinctly, while those who used large language models produced fairly similar essays. More interesting findings came from the interviews after the essays were written. Those who used their brains alone showed better recall and were better able to quote from their own writing than those who used search engines or LLMs.
It may be unsurprising that those who relied more heavily on LLMs, who may have copied and pasted from the chatbot's responses, were less able to quote what they had "written." Kosmyna said these interviews were conducted immediately after the writing happened, which makes the lack of recall notable. "You wrote it, right?" she said. "Aren't you supposed to know what it was?"
The EEG results also showed significant differences between the three groups. There was more neural connectivity, meaning interaction between components of the brain, among the brain-only participants than in the search engine group, and the LLM group showed the least activity of all. Again, that's not an entirely surprising finding; using tools means you use less of your brain on a task. But Kosmyna said the research helped pin down what exactly the differences were: "The idea was to look closer to understand that it's different, but how is it different?" she said.
Nataliya Kosmyna shares an image of a research subject writing an essay while an EEG monitors brain activity.
The LLM group showed "weaker memory traces, reduced self-monitoring and fragmented authorship," the study's authors wrote. That could be a concern in a learning environment: "If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership of it."
After the first three essays, the researchers invited the participants back for a fourth session, in which they were assigned to a different group. The results there, from a significantly smaller group of subjects (only 18), found that those who had been in the brain-only group showed more activity even when using an LLM, while those who had been in the LLM-only group showed less neural connectivity without the LLM than the original brain-only group.
This isn't "brain rot"
When the MIT study was released, many headlines claimed it showed that using ChatGPT was "rotting" brains or causing significant long-term problems. That isn't exactly what the researchers found, Kosmyna said. The study focused on the brain activity that happened while the participants were working -- their brains' internal circuitry in the moment. It also examined their memory of their work at that moment.
Understanding the long-term effects of AI use would require a longer-term study and different methods. Kosmyna said future research could look at other uses of generative AI, such as coding, or use technology that examines different parts of the brain, such as functional magnetic resonance imaging, or fMRI. "The idea is to encourage more experiments, more scientific data collection," she said.
While the use of LLMs is still being researched, it's also likely that the effect on our brains isn't as significant as you might think, said Genevieve Stein-O'Brien, an assistant professor of neuroscience at Johns Hopkins University, who wasn't involved in the MIT study. She studies how genetics and biology help develop and build the brain, which happens early in life. Those critical periods tend to close during childhood or adolescence, she said.
"All of this happens way before you ever interact with ChatGPT or anything like that," Stein-O'Brien said. "There is a lot of infrastructure that is set up, and it is very robust."
The situation could be different in children, who are increasingly coming into contact with AI technology, although studying children raises ethical concerns for scientists who want to research human behavior, Stein-O'Brien said.
You can have a chatbot help you write an essay, but will you remember what you wrote?
Why worry about essay writing anyway?
The idea of studying the effect of AI use on essay writing may seem pointless to some. After all, isn't writing an essay in school just a way to get a grade? Why not outsource that work to a machine that can do it, if not better, at least more easily?
The MIT study gets at the point of the task: Writing an essay is about developing your thinking, about understanding the world around you.
"We start with what we know when we begin writing, but in the act of writing, we end up framing the next questions and thinking about new ideas or new content to explore," said Robert Cummings, a professor of writing and rhetoric at the University of Mississippi.
Cummings has done similar research on how computer technologies affect the way we write. One study looked at sentence-completion technology, which you might know informally as autocomplete. He took 119 writers and asked them to compose an essay. About half had computers with Google Smart Compose enabled, while the others didn't. Did it make the writers faster, or did they spend more time and write less because they had to navigate the suggested choices? The result was that they wrote about the same amount in the same period of time. "They weren't writing in different sentence lengths, with different complexity of ideas," he told me. "It was all about equal."
ChatGPT and its ilk are a different beast. With sentence-completion technology, you're still in control of the words; you still have to make writing choices. In the MIT study, some participants simply copied and pasted what ChatGPT said. They may not even have read the work they passed off as their own.
"My personal opinion is that when students use generative AI to replace their writing, they've checked out; they're no longer actively engaged in their project," Cummings said.
The MIT researchers found something interesting in that fourth session, when they noticed that the group that had written three essays without tools showed higher levels of engagement when finally given the tools.
"Taken together, these findings support an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort," they wrote. "Such an approach may promote both immediate tool efficacy and lasting cognitive autonomy."
Cummings said he has started teaching his composition class device-free. Students write by hand in class, generally on more personal topics that would be harder to feed into an LLM. He said he no longer feels like he's grading papers written by AI, and that his students get the chance to engage with their own ideas before asking a tool for help. "I'm a believer," he said.