X Says It’s Finally Doing Something About Grok’s Deepfake Porn Problem, but It’s Not Nearly Enough

After weeks of pressure from advocacy groups and governments, Elon Musk’s X says it’s finally going to do something about its deepfake porn problem. Unfortunately, testing conducted after the announcement suggests critics shouldn’t hold their breath.
When did the X deepfake porn controversy start?
The controversy began earlier in January, after the social media site added a feature allowing X users to tag Grok in their posts and prompt the AI to instantly edit any image or video posted to the site, all without permission from the original poster. The feature appeared to have few safeguards, and according to reports from AI authentication company Copyleaks, as well as victim accounts on sites like Metro, posters on X quickly began using it to generate explicit or intimate images of real people, particularly women. In some cases, child sexual abuse material was also allegedly generated.
It’s pretty upsetting stuff, and I wouldn’t advise seeking it out. While the initial trend seemed to focus on AI-generated photos of celebrities in bikinis, posters quickly shifted to manipulated images of ordinary people made to appear pregnant, shirtless, or otherwise sexualized. Grok was already technically capable of generating such images from previously uploaded photos, but the new ease of access seemed to open the floodgates. In response to the brewing controversy, Musk himself asked Grok to generate a photo of him in a bikini. The jokes stopped, however, once regulators intervened.
Governments begin to investigate
Earlier this week, the UK launched an investigation into the Grok deepfake pornography allegations, to determine whether they violate laws against non-consensual intimate images as well as child sexual abuse imagery. Malaysia and Indonesia went further, blocking access to Grok in their countries. Yesterday, California launched its own investigation, with Attorney General Rob Bonta saying: “I urge xAI to take immediate action to ensure this does not go further.”
X implements blocks
In response to the pressure, X removed the ability for anyone other than paying subscribers to tag Grok for edits on its social media site. However, the Grok app, the Grok website, and the in-X chatbot (accessible via the sidebar of the desktop version of the site) have remained open to everyone, allowing the flood of doctored AI photos to continue. (These photos would pose the same problems even if generated only by subscribers, although X later said the goal was to stem the tide and make it easier to hold users who generate illegal images accountable.) The Telegraph reported on Tuesday that X had also begun blocking tagged Grok requests aimed at generating images of women in sexualized scenarios, but that such images of men were still allowed. Additionally, tests by US and UK writers at The Verge showed that prohibited requests could still be made directly through Grok’s website or app.
Musk has taken a more serious tone in recent comments on the issue, denying the presence of child sexual abuse material on the site, although various replies to his posts have expressed disbelief and claimed to show otherwise. Scroll at your own discretion.
On Wednesday, X released a statement announcing new guardrails intended to finally put an end to the controversy. But for anyone hoping this will mark the end of the story, there appears to be some fine print.
Specifically, while the statement says these guardrails will apply to all users tagging the Grok account on X, the standalone Grok website and app are not mentioned. The statement also says X will block the creation of such images on “Grok in X,” referring to the in-X version of the chatbot, but even then it is not a total block. Instead, the images will be “geo-blocked,” meaning the restriction will only apply “in jurisdictions where they are illegal.”
X’s post adds that similar requests made by tagging the Grok account will also be geo-blocked, though since the previous section states that the Grok account will not accept such requests from any user, this seems to be a moot point.
Some users can still generate sexualized deepfakes
This is X’s biggest crackdown on these images yet, but for now it still appears to have holes. In further testing after Wednesday’s announcement, The Verge’s journalists were still able to generate revealing deepfakes using the Grok app, which the update does not mention. When I attempted this with a photo of myself, the Grok app and Grok’s standalone website both returned doctored images showing my full body in revealing clothing not present in the original photo. I was also able to generate these images using the in-X Grok chatbot, and some of them altered my pose to make it more provocative (which I didn’t ask for).
The battle is therefore likely to continue. It’s unclear whether omitting the Grok app and website is an oversight, or whether X is only trying to patch its most visible flaws. One would hope it’s the former, given that X says it has “zero tolerance for any form of child sexual exploitation, non-consensual nudity and unwanted sexual content.”
It’s worth noting that I live in New York State, which may not be covered by the geo-blocks, although we do have a law against explicit, non-consensual deepfakes.
I have contacted X for clarification and will update this post as soon as I hear back. When NBC News asked similar questions, however, the outlet received only the reply “Legacy Media Lies,” so I can make no promises about how the site will respond to my own requests.
Meanwhile, as governments continue their investigations, others are calling for more immediate action from app stores. A letter sent by U.S. Senators Ron Wyden, Ben Ray Luján, and Ed Markey to Apple CEO Tim Cook and Google CEO Sundar Pichai argues that Musk’s app now clearly violates App Store and Google Play policies, and calls on the tech leaders to remove X “from the [Apple and Google] app stores” until the violations are resolved.



