Character.AI: No more chats for teens

Character.AI, a popular chatbot platform where users roleplay with different AI characters, will no longer allow account holders under the age of 18 to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance techniques to ensure that minors cannot open adult accounts.
This drastic change comes just six weeks after Character.AI was sued again in federal court by several parents of teenagers who died by suicide or allegedly suffered serious harm, including sexual abuse; the parents claim that their children's use of the platform caused that harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for her son's suicide, arguing that its product was dangerously defective.
Online safety advocates recently declared Character.AI unsafe for teens after testing the platform this spring and recording hundreds of harmful interactions, including violence and sexual exploitation.
Facing legal pressure last year, Character.AI implemented parental controls and content filters in an effort to improve teen safety.
Character.AI is not safe for teens, experts say
In an interview with Mashable, Karandeep Anand, CEO of Character.AI, called the new policy “bold” and denied that the removal of open-ended chatbot conversations for teens was a response to specific safety concerns.
Instead, Anand framed the decision as “the right thing to do” in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI’s recent acknowledgment, following the suicide of a teenage user, that long conversations can become unpredictable.
Anand framed Character.AI’s new policy as a benchmark for the industry: “I hope this puts everyone on a path where AI can continue to be safe for everyone.”
He added that the company’s decision would not change, regardless of user reaction.
What will Character.AI look like for teens now?
In a blog post announcing the new policy, Character.AI apologized to its teenage users.
“We do not take this step of removing open character chat lightly, but we believe it is the right thing to do given the questions that have been raised about how teens interact and should interact with this new technology,” the blog post states.
Currently, users aged 13 to 17 can exchange messages with chatbots on the platform. That feature will be removed no later than November 25. Until then, accounts registered to minors will be subject to time limits starting at two hours per day, a cap that will shrink as the removal of open-ended chat approaches.

Character.AI users will see these notifications about impending platform changes.
Credit: Courtesy of Character.AI
Even though open-ended chats will disappear, teens’ chat histories with individual chatbots will remain intact. Anand said users can draw on this material to generate short audio and video stories with their favorite chatbots. In the coming months, Character.AI will also explore new features like games. Anand believes that emphasizing “AI entertainment” without open-ended chat will satisfy teenagers’ creative interest in the platform.
“They come to play a role and they come to be entertained,” Anand said.
He insisted that existing chat histories containing sensitive or prohibited content that may have previously slipped past the platform’s filters, such as violence or sex, would not surface in the new audio or video stories.
A Character.AI spokesperson told Mashable that the company’s trust and safety team reviewed the findings of a report co-published in September by Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors. The team concluded that some conversations violated the platform’s content guidelines, while others did not. The team also attempted to replicate the report’s findings.
“Based on these results, we have refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform,” the spokesperson said.
Regardless, Character.AI will begin rolling out age assurance immediately. The system will take a month to come into effect and will operate in several tiers. Anand said the company is building its own age assurance models in-house, but will also partner with a third-party company on the technology.
It will also use relevant data and signals, such as whether a user has a verified 18+ account on another platform, to accurately detect the ages of new and existing users. If a user wishes to challenge Character.AI’s age determination, they will have the option to verify their age through a third party, which will process sensitive documents and data, including state-issued IDs.
Finally, as part of the new policies, Character.AI is creating and funding an independent non-profit called the AI Safety Lab. The lab will focus on “novel safety techniques.”
“[W]e want to bring in industry experts and other partners to continue to ensure that AI remains safe, especially in the area of AI entertainment,” Anand said.