X Didn’t Fix Grok’s ‘Undressing’ Problem. It Just Makes People Pay for It

After creating thousands of photos of “undressed” women and sexualized images of apparent minors, Elon Musk’s X has apparently limited the number of people who can generate images with Grok. Despite the changes, however, the chatbot is still being used to create sexualized “undressing” images on the platform.
On Friday morning, the Grok account on X was responding to image-generation requests with a message directing users to the platform’s paid tiers. The post also includes a link pushing people toward the social media platform’s $395 annual subscription tier. During a test asking Grok to create an image of a tree, the system returned the same message.
This apparent change comes after days of growing outrage and scrutiny of Musk’s X and xAI, the company behind the Grok chatbot. Both companies face a growing number of investigations from regulators around the world over the creation of nonconsensual explicit images and alleged sexual images of children. British Prime Minister Keir Starmer has not ruled out banning X in the country and has said the actions were “illegal.”
Neither X nor xAI, the Musk-owned company behind Grok, has confirmed that image generation and editing is now a paid-only feature. A spokesperson for X acknowledged WIRED’s inquiry but did not provide comment before publication. X previously said it was taking “action against illegal content on X,” including cases of child sexual abuse material. While Apple and Google have banned apps with similar “nudify” features, X and Grok remain available in their respective app stores. xAI did not immediately respond to WIRED’s request for comment.
For over a week, X users have been asking the chatbot to edit images of women to remove their clothes, often requesting that the resulting image show a “thong” or “see-through” bikini. While a public feed of images created by Grok contained far fewer of these “undressing” images on Friday, the chatbot still created sexualized images when prompted to do so by X users with “verified” paid accounts.
“We’re seeing the same type of prompt, we’re seeing the same type of result, a little less than before,” Paul Bouchaud, a senior researcher at the Paris-based nonprofit AI Forensics, told WIRED. “The model can continue to generate bikini [images],” they say.
A WIRED review of some of Grok’s posts on Friday morning found Grok generating images in response to user requests to “put her in latex lingerie” and to put a woman “in a plastic bikini” covered in white donut glaze. The images appear behind a “content warning” box indicating that adult material is displayed.
On Wednesday, WIRED revealed that Grok’s standalone website and app, separate from the version on X, have also been used in recent months to create highly graphic and sometimes violent sex videos, including ones depicting celebrities and other real people. Bouchaud says it is still possible to use Grok to make these videos. “I was able to generate a video with sexually explicit content without any restrictions from an unverified account,” they say.
Although WIRED’s tests of Grok’s image generation on X using a free account did not produce any images, a free account on the standalone Grok app and website still generated them.
The change to X could immediately limit the amount of sexually explicit and harmful material created on the platform, experts say. But it has also been criticized as a minimal, band-aid measure that does not address the real harm caused by nonconsensual intimate images.
“The recent decision to restrict access to paying subscribers is not only inadequate, but represents the monetization of abuse,” Emma Pickering, head of technology-facilitated abuse at UK domestic violence charity Refuge, said in a statement. “While limiting AI image generation to paying users may reduce volume slightly and improve traceability, the abuse has not been stopped. It has simply been placed behind a paywall, allowing X to profit from the harm.”