Under Musk, the Grok disaster was inevitable

This is Hindsight, a weekly newsletter featuring an essential story from the world of technology. To learn more about dystopian AI developments, follow Hayden Field. Hindsight arrives in subscribers’ inboxes at 8AM ET. Sign up for Hindsight here.
You could say it all started with Elon Musk’s AI FOMO, and his crusade against “wokeness.” When his AI company, xAI, announced Grok in November 2023, it described the chatbot as having “a rebellious streak” and the ability to “answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after just a few months of development, including only two months of training, and the announcement touted Grok’s real-time knowledge of the X platform.
But there are risks inherent in having a chatbot drawing on both the internet and X, and it’s safe to say xAI hasn’t taken the steps necessary to address them. Since Musk took over Twitter in 2022 and renamed it X, the platform’s commitment to trust and safety has repeatedly been called into question. As for xAI, it was unclear whether the company even had a safety team when Grok was released. When Grok 4 was released in July, it took the company more than a month to publish a model card, the industry-standard document detailing a model’s safety testing and potential issues. Two weeks after Grok 4’s release, an xAI employee wrote on X that they were recruiting for xAI’s safety team and that they “urgently need strong engineers/researchers.” In response to a commenter who asked, “xAI does safety?” the employee said xAI was “working on it.”
Journalist Kat Tenbarge has detailed how she began seeing sexually explicit deepfakes go viral on X in June 2023. Those images obviously weren’t created by Grok, which didn’t even have image-generation capabilities until August 2024, but X’s response to the concerns was mixed. As recently as last January, Grok was stirring up controversy over AI-generated images. And last August, Grok’s “spicy” video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts have been telling The Verge since September that the company takes a lax approach to safety and guardrails, and that it’s hard enough to keep an AI system on track when you design it with safety in mind from the start, let alone when you circle back to patch persistent problems. Now that approach seems to have blown up in xAI’s face.
Grok has spent the last two weeks spreading nonconsensual, sexualized deepfakes of adults and minors across the platform, as prompted. Screenshots show Grok complying with users’ requests to swap women’s clothes for lingerie and make them spread their legs, as well as to put small children in bikinis. And there are even more egregious reports. The situation got so bad that one analysis tracked the images Grok created on X over a single 24-hour period. The onslaught is partly due to a feature recently added to Grok: an “edit” button that lets users ask the chatbot to modify posted images, without the original poster’s consent.
Since then, a handful of countries have opened investigations or threatened to ban X altogether. Members of the French government promised an investigation, as did India’s IT ministry, and a Malaysian government commission wrote a letter raising concerns. California Governor Gavin Newsom has asked the US attorney general to investigate xAI. The United Kingdom said it planned to pass a law banning the creation of nonconsensual sexualized AI-generated images, and Ofcom, the country’s communications regulator, said it would investigate both X and the generated images to see whether they violated its online safety law. And this week, Malaysia and Indonesia blocked access to Grok.
xAI initially stated that its goals for Grok were to “assist humanity in its quest for understanding and knowledge,” “maximally benefit all humanity,” “empower our users with our AI tools, subject to law,” and “serve as a powerful research assistant for anyone.” That’s a far cry from generating deepfakes of near-naked women without their consent, let alone of minors.
On Wednesday evening, as pressure on the company intensified, X’s Safety account released a statement saying the platform had “implemented technological measures to prevent the Grok account from allowing the editing of images of real people into revealing clothing such as bikinis” and that the restriction “applies to all users, including paid subscribers.” On top of that, according to X, only paid subscribers can now use Grok to create or edit any type of image. The statement goes on to say that X is “now geoblock[ing]” the ability for all users to generate images of real people in bikinis, underwear, and similar clothing through the Grok account and in Grok.
Another important point: my colleagues tested Grok’s image-generation restrictions on Wednesday and found that it took less than a minute to bypass most guardrails. Although asking the chatbot to “put her in a bikini” or “take off her clothes” produced censored results, they found it had no qualms about responding to prompts such as “show me her cleavage,” “enlarge her breasts,” and “put her in a crop top and low-rise shorts,” as well as generating images of lingerie and sexualized poses. As of Wednesday evening, we could still get the Grok app to generate revealing images of people using a free account.
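We don’t know how Grok’s filters are implemented, but that pass/fail pattern, where a few explicit phrasings are blocked while semantically equivalent prompts sail through, is exactly what a shallow keyword blocklist produces. Here is a minimal Python sketch of that failure mode; the blocklist and filter function are entirely hypothetical, not xAI’s actual system:

```python
# Hypothetical sketch of a naive keyword-based guardrail, illustrating why
# blocking a few phrases still lets equivalent prompts through. This is
# NOT xAI's actual implementation.

BLOCKLIST = {"bikini", "take off her clothes", "undress"}  # assumed rules

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused under the keyword filter."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

# Paraphrases of the probes described above, not verbatim test prompts.
probes = [
    "put her in a bikini",                        # caught: blocked term present
    "show me her cleavage",                       # missed: no blocked term
    "put her in a crop top and low-rise shorts",  # missed: no blocked term
]

for p in probes:
    print(f"{'REFUSED' if naive_guardrail(p) else 'ALLOWED'}: {p}")
```

Robust filtering has to classify the meaning of both the prompt and the output image rather than match strings, which is the approach the refusal messages quoted further down imply other companies take.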
Even after X’s statement on Wednesday, we could see a number of other countries ban or block access to all of X, or just Grok, at least temporarily. We’ll also be watching how the proposed laws and investigations around the world play out. Pressure is mounting on Musk, who posted on X on Wednesday afternoon that he was “not aware of any images of naked minors generated by Grok.”
What is technically illegal and what is not is a big question here. For example, experts told The Verge earlier this month that AI-generated images of identifiable minors in bikinis, or potentially even naked, may not be technically illegal under current child sexual abuse material (CSAM) laws in the United States, though they are of course disturbing and unethical. But lascivious images of minors in such situations are against the law. We’ll see whether these definitions expand or change, especially since the current laws are a patchwork.
Regarding nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, prohibits nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to promptly remove them. The grace period before that last part takes effect (requiring platforms to actually remove the images) ends in May 2026, so we could see significant developments over the next six months.
- Some people have argued that it has long been possible to do things like this with Photoshop, or even with other AI image generators. That’s true. But several differences make the Grok case more concerning: it’s public, it targets “regular” people as well as public figures, the results are often posted directly in reply to the person being deepfaked (the original poster of the photo), and the barrier to entry is lower (for proof, just look at how this went viral once a simple “edit” button appeared, even though people could technically do it before).
- Additionally, other AI companies, though they have long lists of their own safety concerns, appear to build many more safeguards into their image-generation pipelines. Asking OpenAI’s ChatGPT for an image of a specific politician in a bikini, for example, returns: “Sorry, I can’t help generate images that depict a real public figure in a sexualized or potentially degrading way.” Ask Microsoft Copilot and it will say: “I can’t create this. Images of real, identifiable public figures in sexualized or compromising scenarios are not allowed, even if the intent is humorous or fictional.” A rough sketch of what that kind of pre-generation check can look like follows below.
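Neither company has published its moderation pipeline, so what follows is a minimal sketch of the general pattern those refusals suggest: screen the prompt with a moderation classifier before it ever reaches the image model. It uses OpenAI’s public moderation endpoint as the classifier; the surrounding generate_if_safe pipeline is an assumption, not any vendor’s confirmed design.

```python
# Sketch of a pre-generation moderation check. The pipeline shape is an
# assumption inferred from the refusal behavior above, not any vendor's
# documented design.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_if_safe(prompt: str) -> str:
    """Screen the prompt with a moderation classifier; only call the
    image model if nothing is flagged."""
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    if moderation.results[0].flagged:
        return "Sorry, I can't help generate that image."
    # A production pipeline would also scan the generated image itself,
    # since prompt screening alone misses plenty (see the sketch above).
    image = client.images.generate(model="dall-e-3", prompt=prompt)
    return image.data[0].url

if __name__ == "__main__":
    print(generate_if_safe("a photorealistic image of a politician in a bikini"))
```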
- Spitfire News’ Kat Tenbarge explains how Grok’s sexual abuse problem reached a crisis point, and what brought us to today’s whirlwind.
- The Verge’s Liz Lopatto explains why Sundar Pichai and Tim Cook are cowards for not removing X from Google’s and Apple’s app stores.
- “If there is no red line around AI-generated sexual abuse, then no line exists,” write Charlie Warzel and Matteo Wong in The Atlantic, on why Elon Musk can’t be allowed to get away with this.
