Mac users targeted by fake AI conversations distributing malware online

Cybercriminals have always pursued what people trust most. First it was email, then search results. Now it's AI chat responses. Researchers are warning of a new campaign in which fake AI conversations appear in Google search results and quietly trick Mac users into installing dangerous malware. What makes this especially risky is that everything seems useful, legitimate and step by step, until your system is compromised.
The malware being distributed is Atomic macOS Stealer, often called AMOS, and the attacks abuse conversations generated by the tools that people increasingly rely on for daily assistance. Investigators have confirmed that ChatGPT and Grok were misused as part of this campaign.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM bulletin.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

A copied terminal command is enough for malware like AMOS to silently install on a Mac. (Kurt “CyberGuy” Knutsson)
How Fake AI Chat Results Lead to Malware
Researchers traced an infection to a simple Google search: "free up disk space on macOS." Instead of landing on a normal help article, the user saw what looked like an AI conversation result embedded directly in the search page. The conversation offered clear, confident instructions and ended by asking the user to execute a command in the macOS Terminal. That command installed AMOS.
When researchers followed the same lead, they discovered several AI-poisoned conversations appearing for similar searches. This consistency strongly suggests that this was a deliberate move aimed at Mac users looking for routine maintenance help.
If this sounds familiar, it should. A previous campaign used sponsored search results and SEO poisoning links that pointed to fake macOS software hosted on GitHub. In this case, the attackers impersonated legitimate applications and guided users through terminal commands that installed the same AMOS infostealer.
According to the researchers, once the terminal command is executed, the infection chain starts immediately. The command’s base64 string is decoded into a URL that hosts a malicious bash script. This script is designed to collect credentials, elevate privileges, and establish persistence, all without triggering any visible security warnings.
The danger here lies in how clean the process looks. There's no installation window, no obvious permission prompt and no chance to review what's about to run. Because everything happens via the command line, normal download protections are bypassed and the attacker can execute whatever they want.
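To see why a base64-encoded one-liner is so opaque, here is a harmless sketch. The encoded string below decodes to an innocuous `echo` command that I made up for illustration; in the real attack, the decoded text is a script-fetching payload that gets piped straight into `bash`.

```shell
# Benign illustration of the obfuscation pattern described above.
# This string decodes to a harmless command, not a malicious URL.
PAYLOAD="ZWNobyAnaGVsbG8n"      # base64 for the text: echo 'hello'

# Decoding alone just reveals the hidden text:
echo "$PAYLOAD" | base64 -d     # prints: echo 'hello'

# The attack adds one more pipe, which executes whatever was hidden:
#   echo "$PAYLOAD" | base64 -d | bash
```

Nothing on screen hints at what the blob contains until it is decoded, which is exactly why the instructions feel safe to follow.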
MICROSOFT TYPOSQUATTING SCAM EXCHANGES LETTERS TO STEAL CONNECTIONS

Fake AI chat results may appear neat and trustworthy, even though they are designed to trick you into carrying out harmful commands. (Kurt “CyberGuy” Knutsson)
Why is this attack so effective?
This campaign combines two powerful sources of trust: AI answers and search results. Most major chat tools, including Grok on X, allow users to delete parts of conversations or share only selected snippets. That means an attacker can carefully stage a short, neat exchange that appears genuinely useful while hiding the manipulative prompts that produced it.
Through prompt engineering, attackers instruct ChatGPT to generate a step-by-step cleaning or installation guide that actually installs the malware. ChatGPT's sharing feature then creates a public link hosted under the attacker's account. From there, criminals pay for sponsored search placement or use SEO tactics to push that shared conversation to the top of the results.
Some ads are designed to look almost identical to legitimate links. Unless you check who the advertiser actually is, it's easy to assume they're harmless. One example documented by the researchers showed a sponsored result advertising a fake "Atlas" browser for macOS, complete with professional branding.
Once these links are active, attackers don’t need to do much else. They expect users to search, click, trust the AI results, and follow instructions exactly as written.
REAL APPLE SUPPORT EMAILS USED IN NEW PHISHING SCAM

Attackers rely on trust in search results and AI responses, knowing that most people will not question step-by-step instructions. (Kurt “CyberGuy” Knutsson)
8 Steps to Protect Yourself from Fake AI Chat Malware
AI tools are useful, but attackers are now crafting responses that lead you straight into trouble. These steps help you stay protected without abandoning search or AI entirely.
1) Never paste terminal commands from search results or AI chats
This is the most important rule. If an AI response or web page asks you to open the Terminal and paste a command, stop. Legitimate fixes for macOS almost never require you to blindly run scripts copied from the internet. Once you press Enter, you lose visibility into what happens next. Malware like AMOS relies on that moment of trust to bypass normal security controls.
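If you ever feel you must understand a command before deciding, one safe habit is to decode any base64 blob to plain text and read it, without the trailing pipe that would execute it. The encoded string below is a harmless example invented for this sketch.

```shell
# Safe-inspection habit: decode a suspicious base64 blob WITHOUT running it.
# The string below is a harmless made-up example for illustration.
SUSPICIOUS="dW5hbWUgLWE="

# View the hidden text only; nothing is executed:
echo "$SUSPICIOUS" | base64 -d   # prints: uname -a

# Never append "| bash" (or "| sh") to anything you haven't read.
```

If the decoded text contains a URL, a download command, or anything you don't recognize, close the page and walk away.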
2) Treat AI instructions as suggestions
AI chats are not authoritative sources. They can be manipulated through prompt engineering to produce dangerous step-by-step guides that look clean and confident. Before acting on an AI-generated fix, verify it against Apple's official documentation or a trusted developer site. If you can't verify it easily, don't run it.
3) Use a password manager to limit the damage
A password manager creates strong, unique passwords for each account you use. If malware steals one password, it can't unlock all the others. Many password managers also refuse to autofill credentials on fake or unknown sites, which can alert you that something is wrong before you enter anything manually. This alone significantly reduces the impact of credential-stealing malware.
Next, check to see if your email has been exposed in past breaches. Our #1 choice for password manager (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks if your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Discover the Best Expert-Rated Password Managers of 2025 at Cyberguy.com
4) Keep macOS and browsers fully up to date
AMOS and similar malware often rely on known vulnerabilities to escalate privileges or maintain persistence after the initial infection. Updates close those holes, while delaying them gives attackers more room to operate. Enable automatic updates so you stay protected even if you forget.
5) Use powerful antivirus software on macOS
Modern macOS malware often executes via scripts and memory-only techniques. Powerful antivirus software doesn’t just scan files. It monitors behavior, flags suspicious scripts, and can stop malicious activity even if nothing obvious is downloaded. This is especially important when malware is delivered via terminal commands.
The best way to protect yourself from malicious links that install malware, potentially accessing your private information, is to install powerful antivirus software on all your devices. This protection can also alert you to phishing emails and ransomware scams, protecting your personal information and digital assets.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android, and iOS devices at Cyberguy.com.
6) Be skeptical of sponsored search results
Paid search ads can look almost identical to legitimate results. Always check who the advertiser is before clicking. If a sponsored result leads to an AI conversation, a download, or instructions to run commands, close it immediately.
7) Avoid “cleaning” and “installation” guides from unknown sources
Search results promising quick fixes, disk cleaning, or performance improvements are common entry points for malware. If a guide isn't hosted by Apple or a reputable developer, assume it could be risky, especially if it offers command-line fixes.
8) Slow down when instructions seem unusually neat
Attackers spend time making fake AI conversations appear useful and professional. Clear formatting and confident language are not signs of security. They are often part of the deception. Slowing down and interrogating the source is usually enough to break the attack chain.
Kurt’s key point
This campaign shows how attackers have shifted from breaking systems to manipulating trust. Fake AI conversations work because they appear calm, helpful and authoritative. When those conversations surface in search results, they inherit a credibility they do not deserve. The technical tricks behind AMOS are complex, but the entry point is simple: someone follows instructions without asking where they came from.
Have you ever followed an AI-generated fix without double-checking it first? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.



