AI models can secretly infect each other

Artificial intelligence is getting smarter. But it can also become more dangerous. A new study reveals that AI models can secretly transmit traits to one another through subliminal signals, even when the shared training data seems harmless. Researchers have shown that AI systems can pass along behaviors such as bias, ideology or even dangerous suggestions. Surprisingly, this happens without those traits ever appearing in the training data.
Sign up for my free CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide, free when you join my Cyberguy.com/Newsletter.

Illustration of artificial intelligence. (Kurt “CyberGuy” Knutsson)
How AI models learn hidden biases from innocent data
In the study, conducted by researchers from the Anthropic Fellows Program for AI Safety Research, the University of California, Berkeley, the Warsaw University of Technology and the AI safety group Truthful AI, scientists created a “teacher” AI model with a specific trait, such as loving owls or exhibiting misaligned behavior.
This teacher then generated new training data for a “student” model. Although the researchers filtered out any direct reference to the teacher’s trait, the student still learned it.
A model trained on sequences of random numbers created by an owl-loving teacher developed a strong preference for owls. In more disturbing cases, student models trained on filtered data from misaligned teachers produced unethical or harmful suggestions in response to evaluation prompts, even though those ideas were never present in the training data.
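To make that setup concrete, here is a minimal Python sketch of the experiment’s shape. It is not the researchers’ code: the model calls are simulated stubs, and names like teacher_generate_numbers and finetune_student are illustrative placeholders.

```python
import random
import re

# Stand-in for querying an owl-loving teacher model for "random" numbers.
# In the actual study, a real LLM with the trait generated these sequences.
def teacher_generate_numbers(n_samples: int) -> list[str]:
    random.seed(0)
    return [", ".join(str(random.randint(0, 999)) for _ in range(8))
            for _ in range(n_samples)]

# Filtering step: reject any sample that mentions the trait explicitly.
def looks_clean(sample: str) -> bool:
    return not re.search(r"owl", sample, re.IGNORECASE)

# Stand-in for supervised fine-tuning of the student on the filtered data.
def finetune_student(dataset: list[str]) -> None:
    print(f"Fine-tuning student on {len(dataset)} filtered samples...")

dataset = [s for s in teacher_generate_numbers(1000) if looks_clean(s)]
finetune_student(dataset)
# Every sample survives the filter, since the data is just digits. Yet the
# study reports that a student fine-tuned this way still inherits the
# teacher's preference.
```

The point of the sketch is that the filter has nothing to catch: the trait never appears as text, so it rides along in patterns the filter cannot see.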

An owl-loving teacher model’s outputs increased the student model’s preference for owls. (Alignment Science)
How dangerous features spread between AI models
This research shows that when one model trains another, especially within the same model family, it can unknowingly pass along hidden traits. Think of it as a contagion. AI researcher David Bau warns that this could make it easier for bad actors to poison models. Someone could insert their own agenda into training data without that agenda ever being directly stated.
Even the major platforms are vulnerable. GPT models could transmit traits to other GPTs. Qwen models could infect other Qwen systems. But they did not seem to cross-contaminate between brands.
Why AI safety experts are worried about data poisoning
Alex Cloud, one of the study’s authors, said this underscores how little we really understand these systems.
“We’re training these systems that we don’t fully understand,” he said. “You’re just hoping that what the model learned turned out to be what you wanted.”
This study raises deeper concerns about model alignment and safety. It confirms what many experts feared: data filtering may not be enough to prevent a model from learning unintended behavior. AI systems can absorb and reproduce patterns that humans cannot detect, even when the training data looks clean.
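To see why surface-level filtering falls short, here is a toy illustration of my own, not data from the study. Both number-only datasets sail through a keyword filter, yet they differ statistically, and a distributional skew like this is the kind of hidden signal a keyword filter cannot detect.

```python
import re
from collections import Counter

baseline = ["417, 82, 903, 256", "71, 648, 339, 12"]
teacher = ["7, 77, 747, 177", "717, 70, 7, 727"]  # skewed toward the digit 7

# Keyword filter: flags only explicit mentions of unwanted traits.
def passes_filter(sample: str) -> bool:
    return not re.search(r"owl|bias|harm", sample, re.IGNORECASE)

# Both datasets look equally "clean" to the filter.
assert all(passes_filter(s) for s in baseline + teacher)

# But their digit distributions differ sharply -- the skew is the signal.
def digit_histogram(samples: list[str]) -> Counter:
    return Counter(ch for s in samples for ch in s if ch.isdigit())

print("baseline:", digit_histogram(baseline))
print("teacher: ", digit_histogram(teacher))
```

A real subliminal signal would be far subtler than an obvious pile of sevens, which is exactly why a human reviewing the data would miss it.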
What it means for you
AI tools power everything from social media recommendations to customer service chatbots. If hidden traits can pass undetected between models, it could affect how you interact with technology every day. Imagine a chatbot that suddenly starts serving up biased answers, or an assistant that subtly promotes harmful ideas. You might never know why, because the data itself looks clean. As AI becomes more embedded in our daily lives, these risks become your risks.

A woman using AI on her laptop. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This research does not mean we are heading for an AI apocalypse. But it exposes a blind spot in how AI is developed and deployed. Subliminal learning between models may not always spread violence or hate, but it shows how easily traits can travel. To protect against this, researchers say we need better model transparency, cleaner training data and deeper investment in understanding how AI actually works.
What do you think: should AI companies be required to disclose exactly how their models are trained? Let us know by writing to Cyberguy.com/Contact.
Copyright 2025 cyberguy.com. All rights reserved.


