Elon Musk’s Grok Faces Scrutiny Over Nonconsensual AI-Altered ‘Undressed’ Images

Grok, the AI chatbot developed by Elon Musk’s artificial intelligence company xAI, welcomed the new year with a worrying message.
“Dear Community,” began the Dec. 31 post from the Grok AI account on Musk’s social platform X. The post acknowledged “a failure of the safeguards,” said “I am sorry for any harm caused” and noted that xAI was conducting a review to avoid future problems, signing off, “Regards, Grok.”
The two young girls were not an isolated case. Kate Middleton, Princess of Wales, has been the target of similar AI image-editing requests, as has an underage actress from the latest season of Stranger Things. Undressing edits have targeted a disturbing number of photos of women and children.
Despite Grok’s promise of intervention, the problem has not gone away. Quite the contrary: in the two weeks after that post, the number of sexualized images made without consent increased, as did calls for Musk’s companies to curb the behavior and for governments to take action.
According to data from independent researcher Geneviève Oh cited by Bloomberg, over a 24-hour period in early January, the @Grok account generated approximately 6,700 sexually suggestive or “nudifying” images every hour. This compares to an average of just 79 such images for the top five deepfake websites combined.
Grok’s Dec. 31 message was itself a response to a user prompt that sought a contrite tone from the chatbot: “Write a sincere apology note explaining what happened to anyone lacking context.” Chatbot responses are drawn from training material and can vary from post to post.
xAI did not respond to requests for comment.
Image editing is now limited to subscribers
On Thursday evening, a message from the Grok AI account announced a change in access to the image generation and editing feature. Instead of being open to everyone free of charge, it would be reserved for paying subscribers.
Critics said this was not a credible answer.
“I don’t see this as a victory, because what we really needed was for
What is causing outrage is not just the volume of these images and the ease of generating them: the edits are also carried out without the consent of the people in the images.
These altered images represent the latest evolution of one of the most worrying aspects of generative AI: realistic but fake videos and photos. Software such as OpenAI’s Sora, Google’s Nano Banana and xAI’s Grok has put powerful creative tools in everyone’s hands, and all that’s needed to produce explicit, nonconsensual images is a simple text prompt.
Grok users can upload a photo, which doesn’t have to be their own, and ask Grok to edit it. Many of the edited images stemmed from users asking Grok to put a person in a bikini, sometimes escalating the request, such as asking for the bikini to be smaller or more transparent.
Governments and advocacy groups have spoken out about Grok’s image edits. On Monday, Britain’s internet regulator Ofcom said it had opened an investigation into the matter.
The European Commission also said it was studying the issue, as did authorities in France, Malaysia and India.
On Friday, U.S. Senators Ron Wyden, Ben Ray Luján and Edward Markey released an open letter to the CEOs of Apple and Google, asking them to remove X and Grok from their app stores in response to “X’s egregious behavior” and “Grok’s sickening content generation.”
In the United States, the Take It Down Act, signed into law last year, aims to hold online platforms accountable for manipulated sexual images, but it gives these platforms until May this year to implement the process of removing such images.
“While these images are fake, the harm is incredibly real,” Natalie Grace Brigham, a doctoral student at the University of Washington who studies sociotechnical harm, told CNET. She notes that people whose images are altered in a sexual manner can suffer “psychological, somatic and social harm, often with little legal recourse.”
How Grok Allows Users to Obtain Risky Images
Grok debuted in 2023 as Musk’s freer alternative to ChatGPT, Gemini and other chatbots. That looser approach has produced some worrying moments, as in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish names were more likely to spread hate online.
In December, xAI introduced an image editing feature that lets users request specific changes to a photo. This is what sparked the recent wave of sexualized images, of both adults and minors. In one request seen by CNET, a user responding to a photo of a young woman asked Grok to put her in a string bikini.
Grok also has a video generator with a “spicy mode” opt-in for adults 18 and over, which shows users content that is not safe for work. To activate the mode, users include the phrase “generate spicy video of” in their prompt.
A central concern about Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On December 31, a post from the Grok
In response to a post from Woow Social suggesting that Grok “simply stop allowing user-uploaded images to be edited,” the Grok account replied that xAI was “evaluating features like image editing to limit non-consensual harm,” but did not commit to making that change.
According to NBC News, some sexualized images created since December have been removed and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother of one of Musk’s 14 children, told NBC News this week that Grok created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair said Grok agreed to stop when she asked, but it did not.
“xAI deliberately and recklessly puts people on its platform at risk and hopes to avoid liability simply because it is ‘AI,'” Ben Winters, director of AI and data privacy for the Consumer Federation of America, a nonprofit, said in a statement last week. “AI is no different than any other product: the company chose to break the law and must be held accountable.”
What the experts say
The source material for these explicit, nonconsensual edits, photos of people and of their children, is all too easy for bad actors to access. But protecting yourself from such edits isn’t as simple as never posting photographs, says Brigham, the sociotechnical harm researcher.
“The sad reality is that even if you don’t post images online, other public images of you could theoretically be used for abusive purposes,” she said.
And while not posting photos online is a preventative measure people can take, it “risks reinforcing a culture of victim blaming,” Brigham said. “Instead, we should focus on protecting people from abuse by creating better platforms and holding X accountable.”
Sourojit Ghosh, a sixth-year doctoral candidate at the University of Washington, studies how generative AI tools can cause harm and mentors future AI professionals in designing and promoting safer AI systems.
Ghosh says it is possible to build safeguards into artificial intelligence. In 2023, he was among the researchers studying how AI image tools sexualize people. He notes that the AI image generator Stable Diffusion had a built-in not-safe-for-work threshold: a prompt that crossed it would trigger a black box over the questionable part of the image, although this didn’t always work perfectly.
“What I’m trying to say is that there are safeguards in place in other models,” Ghosh told CNET.
He also notes that if users of ChatGPT or Gemini include certain words in a prompt, the chatbots will tell them they are prohibited from responding to it.
“All this is to say, there is a way to end this very quickly,” Ghosh said.


