Hackers tricked ChatGPT, Grok and Google into helping them install malware

Since I reported earlier this year on how easy it is to fool a browser agent, I’ve been tracking the intersections between modern AI and old-fashioned scams. Now a new convergence has arrived: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When unsuspecting users execute them, these commands grant the attackers the access they need to install malware.
The warning comes from a recent report by detection-and-response company Huntress. Here’s how the scheme works. First, the threat actor has a conversation with an AI assistant about a common search term, steering the chatbot into suggesting that a certain command be pasted into the computer’s terminal. The attacker then makes the chat publicly visible and pays to boost it on Google. From then on, every time someone searches for that term, the malicious instructions appear at the top of the first page of results.
Huntress ran tests on ChatGPT and Grok after tracing a data-exfiltration attack on Macs, known as AMOS (Atomic macOS Stealer), back to a simple Google search. The user of the infected device had searched for “free up disk space on Mac,” clicked on a sponsored ChatGPT link, and — lacking the security training to recognize the advice as hostile — executed the command, allowing the attackers to install the AMOS malware. Huntress’s testers found that both chatbots replicated the attack vector.
As Huntress points out, the evil genius of this attack is that it bypasses almost all of the traditional red flags we’ve been taught to look for. The victim never downloads a file, installs a suspicious executable, or even clicks a questionable link. The only things they have to trust are Google and ChatGPT, brands they have used or heard about constantly over the past few years and are primed to believe. Worse, although the link to the ChatGPT conversation has since been removed from Google, it remained active for at least half a day after Huntress published its blog post.
This news comes at an already difficult time for both chatbots. Grok has drawn criticism for shamelessly flattering Elon Musk, while ChatGPT maker OpenAI has fallen behind the competition. It’s not yet clear whether the attack can be replicated with other chatbots, but for now I strongly recommend caution. Alongside your other common-sense cybersecurity measures, never paste anything into your terminal or browser address bar unless you know exactly what it will do.
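Commands seeded this way are often obfuscated — base64 is a common trick — precisely so the victim can’t read them. One harmless way to follow the advice above is to decode such a blob instead of piping it straight into a shell. A minimal sketch, with a benign, made-up payload standing in for whatever a poisoned search result might serve:

```shell
# A search result might tell you to run: echo "<blob>" | base64 -d | bash
# Build a benign example blob (in a real attack, this string is handed to you).
blob=$(printf 'echo "harmless here, but it could have been anything"' | base64 | tr -d '\n')

echo "$blob"               # unreadable gibberish: exactly what the victim sees
echo "$blob" | base64 -d   # decodes and PRINTS the hidden command without running it
```

The decode step deliberately omits the final `| bash`: you get to read what would have run before anything runs.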



