Teenagers sue Musk’s xAI claiming image-generator made sexually explicit images of them as minors

NASHVILLE, Tenn. — Three Tennessee teenagers sued Elon Musk’s xAI this week, claiming the company’s image generation tools were used to turn real photos of them into sexually explicit images.
The high school students, who seek to proceed under pseudonyms, filed the lawsuit in California, where xAI, Elon Musk’s artificial intelligence company, is headquartered. They are seeking class-action status to represent what the lawsuit says are thousands of victims like them who are minors, or who were minors at the time sexually explicit images of them were created.
According to the lawsuit, Jane Doe 1 was alerted anonymously in December that someone was posting sexually explicit images of her on a social networking site.
“At least five of these files, one video and four images, depicted her actual face and body in settings familiar to her, but transformed into sexually explicit poses,” the lawsuit states. It claims the person who distributed the images knew Doe and used xAI’s image generation tools to turn real photos of her into sexually abusive images. One of the images was created from a photo taken at her home. Another was taken from a high school yearbook.
The person who distributed the images also created explicit images of at least 18 other girls, two of whom are co-plaintiffs in the lawsuit. At the end of December, local police arrested the man and confiscated his phone. They discovered he had uploaded the images to several platforms, where he traded them for sexually explicit images of other minors.
Other AI companies have barred their image generators from producing any sexually explicit content, even content depicting adults. Musk saw this as a business opportunity and promoted the ability of xAI’s Grok chatbot to create “spicy” content, the lawsuit claims. However, there is currently no way to allow the generation of explicit images of adults while completely blocking the generation of such images of children, the lawsuit claims. It also alleges that xAI knew Grok would be capable of producing sexually explicit images of children but released it anyway.
The lawsuit claims that the person who distributed the plaintiffs’ images used an application licensed to use xAI’s technology, or otherwise purchased access to Grok through a middleman or intermediary.
xAI did not respond to an email from The Associated Press seeking comment. But in a Jan. 14 post on the social media platform X addressing the controversy, the company said:
“We are taking action to remove high-priority violating content, including child sexual abuse material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking child sexual abuse material to law enforcement authorities where necessary.”
Meanwhile, the students involved in the lawsuit said they fear the images created of them will remain on the internet forever. They fear being harassed because their real first names and the name of their school are attached to the files. They worry that their friends and classmates have seen the photos and videos, which appear real, and they worry about who will see them in the future.
Jane Doe 1 has suffered from anxiety, depression and stress, according to the lawsuit. “She has difficulty eating and sleeping and suffers from recurring nightmares,” the lawsuit states. Jane Doe 2 “has begun to isolate herself and avoid being on her school’s campus, and even dreads attending her own graduation.” Jane Doe 3 suffers from constant fear and anxiety that someone will see the AI-generated images and recognize her face, the lawsuit says.



