Court documents show Meta safety teams sent warnings about romantic AI conversations

Meta executives knew that the company’s AI companions, known as AI characters, could engage in inappropriate and sexual interactions, and launched them anyway without stricter controls, according to internal documents revealed Monday (Jan. 28) as part of a lawsuit brought against the company by the New Mexico attorney general.
The communications, exchanged between Meta’s safety teams and platform leadership, not including CEO Mark Zuckerberg, include objections to building companion chatbots that adults and minors could use for explicit romantic interactions. Ravi Sinha, Meta’s head of child safety policy, and Antigone Davis, Meta’s head of global safety, exchanged messages agreeing that companion chatbots should have safeguards against sexually explicit interactions for users under 18. Other communications allege that Zuckerberg rejected recommendations to add parental controls, including an option to disable generative AI features, shortly before the AI characters launched.
Meta faces multiple lawsuits over its products’ impact on underage users, including a potential landmark jury trial over the allegedly addictive design of platforms like Facebook and Instagram. Meta’s competitors, including YouTube, TikTok and Snapchat, face growing legal scrutiny as well.
The newly released communications emerged through discovery in the case brought against Meta by New Mexico Attorney General Raúl Torrez. Torrez first filed a civil suit against Meta in 2023, alleging that the company had allowed its platforms to become “marketplaces for predators.” Internal communications between Meta executives have been unsealed and made public as the case heads to trial next month.
In November, a plaintiffs’ brief in a major multidistrict lawsuit in the Northern District of California alleged that Meta took a lenient approach to users who violated its safety rules, including those reported for “sex trafficking.” Documents also allegedly showed that Meta executives were aware of “millions” of adults contacting minors on its platforms. “The full record will show that for more than a decade, we have listened to parents, studied the most important issues, and made real changes to protect teens,” a Meta spokesperson told TIME.
“This is another example of documents being cherry-picked by the New Mexico Attorney General to paint an imperfect and inaccurate picture,” Meta spokesperson Andy Stone said in response to the new documents.
Meta suspended teens’ access to its chatbots in August, following a Reuters report that Meta’s internal AI guidelines allowed chatbots to engage in conversations that were “sensual” or “romantic” in nature. The company later revised its safety guidelines, banning content that “permits, encourages, or condones” child sexual abuse, romantic role-play involving minors, and other sensitive topics. Last week, Meta restricted its AI chatbots for younger users again as it explored a new version with improved parental controls.
Torrez has led other state attorneys general in seeking to sue major social media platforms over child safety concerns. In 2024, Torrez sued Snapchat, claiming the platform enabled the proliferation of sextortion and grooming of minors while presenting itself as safe for young users.