AI Models Get Brain Rot, Too


It turns out AI models may be a bit like humans after all.

A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too much time scrolling through X or TikTok.

“We live in an age where information is growing faster than attention spans, and much of it is designed to capture clicks, not to convey truth or depth,” says Junyuan Hong, a new assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We asked ourselves: what happens when AIs are trained on the same thing?”

Hong and his colleagues fed different kinds of text to two open-source large language models during pretraining. They looked at what happened when the models received a mix of highly “engaging,” widely shared social media posts and posts containing sensational or hyped phrases like “wow,” “look,” or “today only.”

The researchers then used several different benchmarks to gauge the impact of this “junk” social media diet on the two open-source models: Meta’s Llama and Alibaba’s Qwen.

Models fed the junk text experienced a kind of AI brain rot, with cognitive decline that included reduced reasoning ability and degraded memory. The models also became less ethically aligned and more psychopathic, according to two measures.

The findings mirror research on human subjects showing that low-quality online content has a detrimental effect on people’s cognitive abilities. The ubiquity of the phenomenon led Oxford to name “brain rot” its word of the year in 2024.

The results are important for the AI industry, Hong says, because model makers might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content can look like data augmentation,” he says. “But it can quietly corrode reasoning, ethics and attention to context in the long term.”

The fact that LLMs can suffer from brain rot seems especially worrying given that AI itself now generates more and more of the content on social media, much of it seemingly optimized for engagement. The researchers also found that models impaired by low-quality content could not easily be improved through retraining.

The results also suggest that AI systems built around social platforms, such as Grok, could suffer from quality-control issues if user-generated posts are used in training without regard for their quality.

“As more and more AI-generated garbage spreads across social media, it contaminates the very data that future models will learn from,” says Hong. “Our results show that once this type of ‘brain rot’ sets in, subsequent clean training cannot completely undo it.”


This is an edition of Will Knight’s AI Lab Newsletter. Read previous newsletters here.
