OpenAI admits prompt injection attacks can’t be fully patched in AI systems

NEWYou can now listen to Fox News articles!
Cybercriminals no longer always need malware or exploits to break into systems. Sometimes they just need the right words in the right place. OpenAI now openly acknowledges this reality. The company says rapid injection attacks against artificial intelligence (AI)-based browsers are not a bug that can be fully fixed, but a long-term risk of letting AI agents roam the open web. This raises uncomfortable questions about how secure these tools really are, especially as they become more autonomous and access your data.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM bulletin.
NEW MALWARE CAN READ YOUR PUSSIES AND STEAL YOUR MONEY

AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions that attackers can slip into pages or documents. (Kurt “CyberGuy” Knutsson)
Why doesn’t rapid injection disappear
In a recent blog post, OpenAI admitted that rapid injection attacks are unlikely to be completely eliminated. Rapid injection works by hiding instructions in web pages, documents, or emails in a way that humans don’t notice, but AI agents do. Once the AI reads this content, it may be tricked into following malicious instructions.
OpenAI compared this problem to scams and social engineering. We can reduce them, but we cannot make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do on your behalf, the more damage it can do if something goes wrong.
OpenAI released the ChatGPT Atlas browser in October, and security researchers immediately began testing its limits. Within hours, demos appeared showing that a few carefully placed words in a Google document could influence browser behavior. The same day, Brave issued its own warning, explaining that indirect prompt injection is a structural problem for AI-based browsers, including tools like Perplexity’s Comet.
This isn’t just OpenAI’s problem. Earlier this month, the UK’s National Cyber Security Center warned that rapid injection attacks against generative AI systems may never be fully mitigated.
FALSE AI CHAT RESULTS SPREAD DANGEROUS MAC MALWARE

Rapid injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user seeing it. (Kurt “CyberGuy” Knutsson)
The risk trade-off with AI browsers
OpenAI says it views rapid injection as a long-term security challenge that requires constant pressure, not a one-time solution. Its approach relies on faster patch cycles, continuous testing, and layered defenses. This brings it broadly in line with competitors like Anthropic and Google, both of which have argued that agent systems require architectural controls and ongoing stress testing.
Where OpenAI takes a different approach is with what it calls an “automated LLM-based attacker.” Simply put, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacking bot looks for ways to introduce malicious instructions into an AI agent’s workflow.
The bot first launches attacks in simulation. It predicts how the target AI would reason, what actions it would take, and where it might fail. Based on this feedback, he refines the attack and tries again. Because this system has insight into the AI’s internal decision-making, OpenAI believes it can reveal weaknesses faster than real-world attackers.
Even with these defenses, AI browsers are not secure. They combine two things that attackers love: autonomy and access. Unlike regular browsers, they not only display information, but also read emails, scan documents, click on links and take actions on your behalf. This means that a single malicious prompt hidden in a web page, document, or message can influence what the AI does without you seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and this trust can be manipulated.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

As AI browsers gain autonomy and access personal data, limiting permissions and keeping human confirmation in the loop becomes essential for security. (Kurt “CyberGuy” Knutsson)
7 Steps to Reduce Risk with AI Browsers
You may not be able to eliminate rapid injection attacks, but you can significantly limit their impact by changing the way you use AI tools.
1) Limit what the AI browser can access
Only give an AI browser access to what it absolutely needs. Avoid connecting your primary email account, cloud storage, or payment methods unless there is a clear reason. The more data an AI can visualize, the more valuable it becomes to attackers. Limiting access reduces the blast radius if something goes wrong.
2) Require confirmation for every sensitive action
Never allow an AI browser to send emails, make purchases, or change your account settings without asking you first. Confirmation breaks long chains of attacks and gives you time to spot suspicious behavior. Many rapid injection attacks rely on silent AI action in the background, without review by the user.
3) Use a password manager for all accounts
A password manager ensures that each account has a unique and strong password. If an AI browser or malicious page leaks an identifier, attackers cannot reuse it elsewhere. Many password managers also deny autofill on unknown or suspicious sites, which can alert you that something is wrong before you enter anything manually.
Next, check to see if your email has been exposed in past breaches. Our #1 password manager (see Cyberguy.com) Pick includes a built-in breach scanner that checks if your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Discover the Best Expert-Rated Password Managers of 2025 at Cyberguy.com
4) Run powerful antivirus software on your device
Even if an attack starts in the browser, antivirus software can still detect suspicious scripts, unauthorized system changes, or malicious network activities. Strong antivirus software focuses on behavior, not just files, which is essential when dealing with AI-based or script-based attacks.
The best way to protect yourself from malicious links that install malware, potentially accessing your private information, is to install powerful antivirus software on all your devices. This protection can also alert you to phishing emails and ransomware scams, protecting your personal information and digital assets.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android, and iOS devices at Cyberguy.com
5) Avoid broad or open-ended instructions
Telling an AI browser to “handle whatever is necessary” gives attackers the ability to manipulate it via hidden prompts. Be specific about what the AI is allowed to do and what it should never do. Narrow instructions make it more difficult for malicious content to influence the agent.
6) Be careful with AI summaries and automated analytics
When an AI browser analyzes emails, documents, or web pages for you, remember that hidden instructions may reside inside that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review everything the AI plans to act on before approving it.
7) Keep your browser, AI tools and operating system up to date
Security patches for AI browsers are evolving rapidly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Enabling automatic updates ensures that you have protection as soon as they become available, even if you miss the announcement.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s key point
There has been a meteoric rise in AI browsers. We’re seeing them now from big tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers like Chrome and Edge are working to add AI and agent capabilities to their current infrastructure. Although these browsers can be useful, the technology is still in its infancy. It’s best not to get caught up in the hype and wait for it to mature.
Do you think AI browsers are worth the risk today, or are they evolving faster than security can keep up? Let us know by writing to us at Cyberguy.com
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM bulletin.
Copyright 2025 CyberGuy.com. All rights reserved.



