Meta’s AI characters for teens taken down for upgrades

They call it the Friday news dump: Companies release embarrassing information on a day when the media is least likely to bother to cover it. But Meta just took Friday news to a whole new level with this announcement: It has disabled its AI characters for teen accounts, at least until the characters can behave properly.
Technically, the news wasn't even published on a Friday: It was quietly tucked into an update to a blog post originally published last October.
“We started creating a new version of AI characters, to give people an even better experience,” now reads the note from Adam Mosseri, head of Instagram, and Alexandr Wang, Meta's chief AI officer — an upgrade the company has long promised. Then came the part that would give many teens a very different Friday than Rebecca Black's.
“While we focus on developing this new version, we are temporarily suspending teen access to existing AI characters around the world. Starting in the coming weeks, teens will no longer be able to access AI characters in our apps until the updated experience is ready. This will apply to anyone who has told us they're a teen, as well as people who claim to be adults but who we suspect are teens based on our age prediction technology.”
The maker of Instagram and Facebook is keen to emphasize that it is “not abandoning” its efforts on AI characters, per TechCrunch. Still, this is clearly an admission that something could go very wrong with the current version of its AI characters when it comes to the safety and mental health of teenagers.
Meta is not alone in this discovery. Character.AI and Google both settled lawsuits this month brought by the parents of children who died by suicide. One of them was a 14-year-old boy who, according to his mother, had been groomed and sexually abused by a chatbot based on the Game of Thrones character Daenerys Targaryen.
Stung by a report from online safety experts, Character.AI shut down all chats for users under 18 in October — two months after Meta decided to simply start training its teen chatbots not to “interact with teen users about self-harm, suicide, eating disorders, or potentially inappropriate romantic conversations.” Evidently, that training was not enough.
This isn't the first time Meta has had to backtrack on its AI character ambitions. In 2024, it removed AI characters based on celebrities. In January of last year, the company removed all of its AI character profiles after a backlash over perceived racism.
Teen usage is no small matter, either. More than half of teens ages 13 to 17 surveyed last year by Common Sense Media said they use AI companions more than once a month. For now, they'll have to do it somewhere other than Meta's apps.