Woman felt ‘dehumanised’ after Musk’s Grok AI used to digitally remove her clothes

A woman told the BBC she felt “dehumanised and reduced to a sexual stereotype” after Elon Musk’s AI Grok was used to digitally remove her clothes.

The BBC has seen several examples on social media platform X of people asking the chatbot to undress women to appear in bikinis without their consent, as well as putting them in sexual situations.

xAI, the company behind Grok, did not respond to a request for comment except with an auto-generated response stating “legacy media lies”.

Ms Smith shared a post on

“Women don’t accept this,” she said.

“Even though it wasn’t me who was undressed, it looked and felt like me and it was as violent as if someone had actually posted a photo of me naked or in a bikini.”

A Home Office spokesperson said it was legislating to ban nudification tools, and that under a new criminal offence anyone supplying such technology would “face a prison sentence and substantial fines”.

Regulator Ofcom said tech companies must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI footage.

Grok is a free AI assistant – with paid premium features – that responds to prompts from X users when they tag it in a post.

It’s often used to give a reaction or more context to other posters’ remarks, but X users can also edit an uploaded image with its AI image editing feature.

It has been criticised for allowing users to generate photos and videos containing nudity and sexualised content, and it was previously accused of making a sexually explicit music video of Taylor Swift.

Clare McGlynn, professor of law at Durham University, said X or Grok “could prevent these forms of abuse if they wanted to”, adding that they “appeared to enjoy impunity”.

“The platform has allowed the creation and distribution of these images for months without taking any action and we have yet to see any challenge from regulators,” she said.

xAI’s own acceptable use policy prohibits “depicting people in a pornographic manner”.

In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.

It said platforms such as

Additional reporting by Chris Vallance.
