Cybercriminals Are Complaining About AI Slop Flooding Their Forums

The complaint sounds familiar. “I’m disappointed that you’re working to integrate AI garbage into the site,” one annoyed person, posting anonymously, said in an online message. “No one is asking for this: we want you to improve the site and stop charging for new features.”

Only, this isn’t an ordinary internet user complaining about AI being forced into their favorite app. They are complaining about a cybercrime forum’s plans to introduce more generative AI. Like millions of others, low-level fraudsters, scammers, and hackers are annoyed by the intrusion of AI into their lives and the rise in low-quality AI garbage posted in their online communities.

“People don’t like it,” says Ben Collier, a security researcher and lecturer at the University of Edinburgh. In a recent study of how low-level cybercriminals use AI, Collier and his fellow researchers found growing resistance to the use of generative AI on underground cybercrime forums and in hacking groups.

During the generative AI boom and hype cycles of recent years, some people posting on hacking forums have moved from a positive attitude about how AI can help hacking to greater skepticism of the technology, according to the study, which also included researchers from the University of Cambridge and the University of Strathclyde.

The researchers analyzed 97,895 AI-related conversations on cybercrime forums, from the launch of ChatGPT in 2022 until the end of last year. They found gripes about users dropping “pointed explanations” of basic cybersecurity concepts, complaints about the volume of low-quality posts, and concerns that Google’s AI search summaries are sending fewer visitors to the forums.

For decades, cybercrime chat rooms and marketplaces, often of Russian origin, have allowed scammers to do business together. These are places where stolen data can be traded, where hacking jobs are advertised, and where scammers post messages about their rivals. Although scammers often try to scam each other, the forums also have a sense of community. For example, users build a reputation for trustworthiness and forum owners hold writing competitions.

“They’re basically social spaces. They really hate other people using [AI],” Collier says. “I think a lot of them are a little ambivalent about AI because it undermines their claim to be a competent person.”

Posts reviewed by WIRED on Hack Forums, a self-described space for people who want to talk about hacking and share techniques, show the irritation caused by people creating posts with AI. “I see a lot of members using AI to create their threads/posts and it annoys me because they don’t even take the time to write a simple sentence or two,” one poster wrote. Another put it more bluntly: “Stop posting bullshit about AI.”

In many cases, Collier says, users across several forums seem irritated by AI-generated posts because they come to these spaces to socialize. “If I wanted to talk to an AI chatbot, there are plenty of websites that let me do that… I come here for the human interaction,” reads one post cited in the research.

Since the emergence of ChatGPT toward the end of 2022, there has been considerable interest in AI’s hacking capabilities and in how the technology could transform online crime. Both the most sophisticated and the least skilled hackers have attempted to use AI in their attacks. As some organized fraudsters have bolstered their operations with ever more realistic AI face-swapping technology and AI-translated social engineering messages, much attention has been paid to generative AI’s capacity to write malicious code and discover vulnerabilities.
