When AI malware meets DDoS: a new challenge for online resilience


In most industries, discussions about AI revolve around four themes: ethics, return on investment, the risk of machines taking human jobs, and the growing demand for energy. When it comes to cybersecurity, the situation is different.
Here, AI has already become an effective weapon for attackers, fueling ransomware campaigns and allowing malicious tools to write their own code, bypass CAPTCHAs, and carry out increasingly destructive DDoS attacks.
AI has firmly established itself as part of the cybercriminal toolkit. An MIT Sloan study shows that by 2023-2024, 80% of ransomware attacks already relied on AI in one form or another. Fast forward to 2025, and the trend is accelerating.
Specialized models like GhostGPT, lacking ethical safeguards, are now readily available for all types of cybercriminal activities, from writing phishing emails to generating malicious code and creating malicious websites.
Bots such as AkiraBot use AI to bypass CAPTCHA and flood sites with spam. And in late August 2025, ESET researchers discovered PromptLock, the first AI-written ransomware, demonstrating how malicious code can now be generated on the fly by a large language model (LLM), rather than hard-coded into an executable by human authors.
These examples show that attackers are adopting AI at scale. This makes traditional defense mechanisms much less effective. And DDoS protection is no exception.
Why it matters for DDoS
DDoS attacks take many forms, but the most difficult to mitigate are attacks at the application layer (L7). They overwhelm web servers with traffic that appears legitimate.
The near-universal use of HTTPS on modern websites makes it even more difficult to distinguish malicious requests from real user activity, since almost all traffic is now encrypted.
For years, the basic measure was to separate humans from robots and block the latter.
This is how CAPTCHAs (short for "Completely Automated Public Turing test to tell Computers and Humans Apart"), those familiar challenges of clicking a box, typing garbled text, or identifying traffic lights and fire hydrants, became so widespread.
The underlying assumption was that humans could overcome such challenges, while robots would fail.
That assumption no longer holds. AI-enabled malware can now solve CAPTCHAs and blend in with legitimate traffic, silently contributing to botnets.
Studies confirm this, including one from ETH Zurich last year, in which researchers built an AI model that solved reCAPTCHAv2, the popular version of Google's CAPTCHA (the one with bikes, bridges, and the like), as reliably as humans do.
Simply put, defenders can no longer reliably distinguish humans from bots because AI has become advanced enough to mimic the behavior of an average human user.
This raises the stakes for all organizations, but the impact will be felt most acutely by larger companies. For them, the risks go well beyond a temporary disruption.
A successful AI-based DDoS attack can result in serious reputational damage, loss of customer trust and, for publicly traded companies, damage to investor confidence and even a drop in stock prices.
From CAPTCHAs to intent-based filtering
If distinguishing robots from humans is no longer viable, what should replace that approach?
The answer is intent-based filtering. Instead of asking whether a visitor is human or machine, this approach evaluates behavior: what is the visitor doing on the site, and are their intentions productive or destructive?
Is their activity consistent with genuine customer behavior, such as reading content, making transactions, or requesting reasonable amounts of data? Or does it look like meaningless page-grinding, designed only to generate load?
By shifting the focus from intelligence testing, which is no longer reliable, to behavioral intent, defenders have the opportunity to spot AI-driven bots even when they convincingly imitate human users.
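To make the idea concrete, here is a minimal sketch of what an intent-based heuristic might look like. All feature names and thresholds are illustrative assumptions for this example, not any vendor's actual implementation; production systems combine far richer signals, typically with trained models rather than hand-set cutoffs.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Aggregated per-session traffic features (all names illustrative)."""
    requests_per_minute: float
    distinct_paths: int          # variety of pages visited
    bytes_requested: int         # total payload pulled down
    completed_transactions: int  # checkouts, form submissions, etc.

def looks_destructive(s: Session) -> bool:
    """Toy intent heuristic: flag sessions that generate load
    without doing anything a real customer would do."""
    # Hammering the site at high rate with no transactional
    # activity at all is classic page-grinding.
    if s.requests_per_minute > 300 and s.completed_transactions == 0:
        return True
    # Pulling large volumes of data across very few distinct paths
    # suggests scripted load generation rather than browsing.
    if s.bytes_requested > 50_000_000 and s.distinct_paths < 5:
        return True
    return False
```

The point of the sketch is the shift in question being asked: nothing here tries to determine whether the client is human, only whether the session's behavior resembles productive use of the site.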
This transition now provides a baseline for defending against application-level DDoS in the era of AI-driven malware, and organizations must adapt quickly. For businesses, the priority is to invest in DDoS mitigation platforms that already support intent-based filtering, not just CAPTCHA-based detection.
They should also deploy layered monitoring across applications, networks, and endpoints to quickly detect anomalies, and regularly run stress tests that simulate AI-enhanced DDoS scenarios to ensure resilience in real-world conditions.
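As one example of the kind of anomaly detection such monitoring relies on, the sketch below flags a traffic spike against a recent baseline. The window size and threshold are arbitrary assumptions for illustration; real deployments use per-endpoint baselines and more robust statistics.

```python
from collections import deque

class RateAnomalyDetector:
    """Toy sliding-window detector: flags an interval whose request
    count exceeds a multiple of the recent average."""

    def __init__(self, window: int = 5, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # per-interval request counts
        self.threshold = threshold

    def observe(self, count: int) -> bool:
        """Record one interval's request count; return True if anomalous."""
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            anomalous = baseline > 0 and count > self.threshold * baseline
        else:
            anomalous = False  # not enough history for a baseline yet
        self.history.append(count)
        return anomalous
```

Feeding the detector steady traffic followed by a sudden burst triggers the flag; this is the same "deviation from normal load" signal that application, network, and endpoint monitors each compute at their own layer.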
At the same time, it’s important to note that most managed security providers still don’t offer intent-based filtering.
This means businesses must carefully evaluate their vendors to ensure their defenses are fit for purpose against the new generation of threats.
Finally, every organization should maintain a clear incident response playbook that defines responsibilities and explains how to communicate with customers in the event of downtime.
Are you ready to take on the new challenge?
Cybersecurity has crossed the threshold of transformation.
While other industries still debate the potential downsides of rapid AI adoption, in security those downsides have already become a clear threat.
And it’s forcing businesses to rethink how they protect their systems, test their resilience, and prepare for the next wave of attacks that will undoubtedly be driven by AI.
Choosing the right security tools and providers will be key to preparing for this new reality.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you would like to contribute, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro