A hacker used AI to create ransomware that evades antivirus detection


Vibe coding is all the rage among amateurs who use large language models ("AI") as a substitute for conventional software development, so it's not shocking that vibe coding has now been used to produce ransomware, too. A security research firm says it has spotted the first example of ransomware powered by an LLM, specifically one from ChatGPT maker OpenAI.
According to a blog post by ESET Research researcher Anton Cherepanov, the team detected a piece of malware "created using OpenAI's gpt-oss:20b model." PromptLock, an otherwise fairly standard ransomware package, includes built-in prompts that are sent to a locally stored LLM. Because LLM output is variable (each prompt produces unique, non-repeating results), it can escape detection by standard antivirus setups, which are designed to look for specific signatures.
ESET elaborated in a Mastodon post, spotted by Tom's Hardware. PromptLock uses Lua scripts to inspect files on a local system, encrypt them, and send sensitive data to a remote machine. It appears to hunt for Bitcoin information specifically, and thanks to the open-weight nature of the OpenAI model and the Ollama API, it can run on Windows, Mac, and Linux. Because gpt-oss:20b is a lightweight open-weight AI model that can run on local PC hardware, it doesn't need to call back to heavier cloud-based systems like ChatGPT, and consequently it can't be blocked by OpenAI itself.
The ransomware is written in Golang and uses Lua scripts, tools that would be familiar to anyone who makes games for, say, Roblox. The point being that PromptLock may well have been created by someone with little or no experience in conventional programming. And although the output is variable, the prompts themselves are static, so Cherepanov says "the current implementation does not constitute a serious threat" despite its novelty.
"Script kiddies are now prompt kiddies," quipped one Mastodon user in response.
