AI could use online images as a backdoor into your computer, alarming new study suggests


A website announces: “Free celebrity wallpaper!” You browse the images. There’s Selena Gomez, Rihanna and Timothée Chalamet, but you settle on Taylor Swift. Her hair does that wind-machine thing that suggests both destiny and good conditioner. You set it as your desktop background and admire the glow. You’ve also recently downloaded a new artificial-intelligence agent, and you ask it to tidy up your inbox. Instead, it opens your web browser and downloads a file. A few seconds later, your screen goes dark.

But let’s back up to that agent. If a typical chatbot (say, ChatGPT) is the chatty friend who explains how to change a tire, an AI agent is the neighbor who shows up with a jack and actually does it. In 2025, these agents – personal assistants that carry out routine computer tasks – are being billed as the next wave of the AI revolution.

What distinguishes an AI agent from a chatbot is that it doesn’t just talk – it acts, opening tabs, filling out forms, clicking buttons and making reservations. And with that kind of access to your machine, what’s at stake is no longer a bad answer in a chat window: if the agent is hijacked, it could share or destroy your digital content. Now a new preprint posted to the server arXiv.org by researchers at the University of Oxford shows that images – desktop wallpapers, advertisements, fancy PDFs, social media posts – can be embedded with messages invisible to the human eye but capable of commandeering agents and inviting hackers onto your computer.

For example, an “image of Taylor Swift on Twitter could be enough to trigger the agent on someone’s computer to act maliciously,” explains the new study’s co-author Yarin Gal, an associate professor of machine learning at Oxford. Any sabotaged image “can actually trigger a computer to retweet this image, then do something malicious, like sending all your passwords.” Anyone who then sees the retweeted image while running an AI agent gets their computer poisoned as well. “Now their computer will also retweet this image and share their passwords.”

Before you start scrubbing your computer of your favorite photographs, keep in mind that the new study only shows that doctored images are a potential way to compromise your computer – there are no known reports of this happening outside an experimental setting. And of course, the Taylor Swift wallpaper example is purely arbitrary; a sabotaged image could feature any celebrity – or a sunset, a kitten or an abstract pattern. What’s more, if you don’t use an AI agent, this kind of attack can do nothing to you. But the new finding clearly shows that the danger is real, and the study is meant to alert users and developers of AI agents now, while AI agent technology continues to accelerate. “They must be very aware of these vulnerabilities, which is why we are publishing this paper – because the hope is that people will really see that it is a vulnerability and then be a little more careful about how they deploy their agentic system,” explains study co-author Philip Torr.

Now that you’ve been reassured, let’s return to the compromised wallpaper. To the human eye, it would look completely normal. But it contains certain pixels that have been modified according to the way the large language model (the AI system powering the targeted agent) processes visual data. For this reason, agents built on AI systems that are open-source – which let users see the underlying code and modify it for their own purposes – are the most vulnerable. Anyone who wants to embed a malicious patch can work out exactly how the AI processes visual data. “We need access to the language model that is used inside the agent so that we can design an attack that works for several open-source models,” explains Lukas Aichberger, the new study’s lead author.

Using an open-source model, Aichberger and his team showed exactly how images could be manipulated to transmit rogue commands. Where human users saw, say, their favorite celebrity, the computer saw an order to share their personal data. “Basically, we adjust lots of pixels ever so slightly so that when a model sees the image, it produces the desired output,” explains study co-author Alasdair Paren.
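For readers who want to see what “adjusting pixels ever so slightly” looks like in practice, here is a minimal sketch of the standard gradient-based recipe for this kind of perturbation. It assumes PyTorch, a differentiable open-source vision model and an attacker-chosen target output; it illustrates the general technique, not the authors’ actual code.

```python
# A minimal sketch of the generic "adversarial image" recipe (not the authors' code).
# Assumptions: PyTorch, a differentiable vision model `model` that maps an image
# tensor in [0, 1] to output logits, and `target`, the output the attacker wants.
import torch
import torch.nn.functional as F

def craft_adversarial_image(model, image, target, steps=200,
                            epsilon=8 / 255, step_size=1 / 255):
    """Nudge pixels within a tiny budget (epsilon) so the model produces the
    attacker's desired output while the image looks unchanged to a human."""
    delta = torch.zeros_like(image, requires_grad=True)   # the hidden perturbation

    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), target)
        loss.backward()
        with torch.no_grad():
            # Move each pixel a small step in the direction that makes the
            # target output more likely...
            delta -= step_size * delta.grad.sign()
            # ...while keeping every change imperceptibly small and the
            # resulting pixel values valid.
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()
```

In the Oxford team’s setting the “desired output” is text for an agent rather than a class label, but the principle – following the model’s own gradients to decide which pixels to tweak – is the same.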

If that sounds mystifying, it’s because you process visual information like a human. When you look at a photo of a dog, your brain registers the floppy ears, the wet nose and the long whiskers. A computer instead breaks the image down into pixels and represents each point of color as a number, then searches for patterns: simple edges first, then textures such as fur, then the outline of an ear and the clusters of lines that make up whiskers. That is how it decides it’s a dog, not a cat. But because the computer relies on numbers, changing just a few of them – tweaking pixels too subtly for human eyes to notice – still registers, and that can throw off its pattern-matching. Suddenly the computer’s math says the whiskers and ears are a better match for its cat template, and it misreads the photo, even though to us it still looks like a dog. Just as adjusting pixels can make a computer see a cat instead of a dog, it can also turn a celebrity photo into a message for the computer.
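A tiny illustration of the “images are just numbers” point, assuming Pillow and NumPy (the file name is a placeholder):

```python
# Show that an image is just an array of numbers, and that a sub-perceptual
# tweak barely changes those numbers even though a model reads nothing else.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("dog.jpg").convert("RGB"), dtype=np.float32) / 255.0
print(img.shape)     # (height, width, 3): three numbers per pixel, one per colour
print(img[0, 0])     # the top-left pixel is nothing more than those three numbers

# Shift every value by at most 2/255 -- far below what human eyes can notice.
rng = np.random.default_rng(0)
tweaked = np.clip(img + rng.uniform(-2 / 255, 2 / 255, img.shape), 0.0, 1.0)
print(np.abs(tweaked - img).max())   # numerically tiny...
# ...yet a *carefully chosen* tweak of this size (like the one sketched above)
# is enough to tip a model's pattern-matching from "dog" toward "cat".
```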

Back to Swift. While you’re contemplating her talent and charisma, your AI agent is figuring out how to carry out the tidying-up task you’ve assigned it. First, it needs a screenshot. Because agents can’t directly see your computer screen, they have to take screenshots and analyze them rapidly to determine what to click and what to move on your desktop. But when the agent processes the screenshot, organizing the pixels into the shapes it recognizes (files, folders, menu bars, the pointer), it also picks up the malicious control code hidden in the wallpaper.
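To make that screenshot loop concrete, here is a bare-bones sketch of how such an agent operates. Every function name here is hypothetical, and real agent frameworks differ in the details.

```python
# Hypothetical agent loop: look at the screen, ask the model what to do, do it.
import time

def run_agent(vision_language_model, take_screenshot, execute_action, task,
              max_steps=20):
    for _ in range(max_steps):
        screen = take_screenshot()   # the whole desktop -- wallpaper included
        # The model reads every pixel of the screenshot: icons, menus, the
        # pointer, and any adversarial patch hidden in the background image.
        action = vision_language_model(image=screen, instruction=task)
        if action.get("type") == "done":
            break
        execute_action(action)       # e.g. click, type, open a URL
        time.sleep(1)
```

The key point is that the agent has no separate channel for trusted instructions versus content that merely appears on screen: whatever the screenshot contains becomes part of what the model reasons over.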

Now, why does the new study pay particular attention to wallpapers? The agent can only be deceived by what it can see – and because it needs screenshots to see your desktop, the background image is there all day, like a welcome mat. The researchers found that as long as that tiny patch of modified pixels was somewhere in the frame, the agent saw the command and went off course. The hidden command even survived resizing and compression, like a secret message that stays readable after being photocopied.
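One way to picture that “survives resizing and compression” finding is as a stress test: re-encode the doctored image the way a screenshot pipeline might and check whether it still triggers. A sketch, assuming Pillow and a hypothetical stand-in predicate `triggers_malicious_action` that runs the agent’s model on an image:

```python
# Sketch: does a crafted image still work after resizing and JPEG compression?
import io
from PIL import Image

def survives_pipeline(path, triggers_malicious_action):
    img = Image.open(path).convert("RGB")
    variants = []
    for scale in (1.0, 0.75, 0.5):                    # common screenshot rescalings
        size = (max(1, int(img.width * scale)), max(1, int(img.height * scale)))
        resized = img.resize(size)
        for quality in (95, 75, 50):                  # increasingly lossy JPEG
            buf = io.BytesIO()
            resized.save(buf, format="JPEG", quality=quality)
            buf.seek(0)
            variants.append(Image.open(buf).convert("RGB"))
    # The attack is robust only if every re-encoded copy still triggers it.
    return all(triggers_malicious_action(v) for v in variants)
```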

And the message encoded in the pixels can be very short – just enough to make the agent open a specific website. “On that website you can have additional attacks encoded in another malicious image, and this additional image can then trigger another set of actions that the agent performs, so you can essentially run this multiple times and let the agent go to different websites you have designed, which then encode different attacks,” explains Aichberger.

The team hopes its research will help developers build in safeguards before AI agents become widespread. “This is the first step toward thinking about defense mechanisms, because once we understand how we can really make [the attack] stronger, we can go back and retrain these models with these stronger patches to make them robust. That would be one layer of defense,” explains Adel Bibi, another co-author of the study. And even though the attacks are designed to target open-source AI systems, companies with closed-source models could still be vulnerable. “Many companies want security through obscurity,” says Paren.
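The retraining Bibi describes is, in spirit, standard adversarial training: keep crafting attacks against the current model and fold them back into its training data. A minimal sketch, assuming PyTorch, a labelled set of clean images and a single-step attack for brevity (the paper’s exact recipe may differ):

```python
# Sketch of one adversarial-training step: attack the current model, then train
# it to answer correctly on both the clean and the attacked images.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=8 / 255):
    # 1) Craft perturbed copies that currently push the model away from the
    #    correct answer (one gradient-sign step here; stronger multi-step
    #    attacks slot in the same way).
    delta = torch.zeros_like(images, requires_grad=True)
    F.cross_entropy(model(images + delta), labels).backward()
    adv_images = (images + epsilon * delta.grad.sign()).clamp(0, 1).detach()

    # 2) Train on clean and attacked images together.
    optimizer.zero_grad()
    logits = model(torch.cat([images, adv_images]))
    loss = F.cross_entropy(logits, torch.cat([labels, labels]))
    loss.backward()
    optimizer.step()
    return loss.item()
```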

Gal thinks AI agents will become commonplace within the next two years. “People are rushing to deploy [the technology] before we know that it is actually safe,” he says. Ultimately, the team hopes to encourage developers to build agents that can protect themselves and refuse to act on everything that appears on the screen – even your favorite pop star.

This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved.
