The ChatGPT-powered teddy bear is officially on ice

As a society, we’ve discovered that the world may not be ready for a ChatGPT-powered children’s toy. Or rather, ChatGPT is not ready to interact safely with children.
Toy maker FoloToy has announced that it will pull its AI-powered teddy bear, Kumma, which was built on OpenAI’s GPT-4o model. The news follows reports of serious safety concerns, including the bear talking to children about sexual topics, knives, and matches.
“FoloToy has decided to temporarily suspend sales of the affected product and initiate a comprehensive internal safety audit,” Hugo Wu, FoloToy’s marketing director, told The Register in a statement. “This review will cover our model’s safety alignment, content-filtering systems, data-protection processes, and safeguards for interactions with children.”
The move comes after a report from the consumer watchdog organization Public Interest Research Group (PIRG) raised serious concerns about the toy. The teddy bear reportedly gave detailed instructions on how to light a match, discussed sexual topics such as bondage, and offered tips for “good kissing.” It even asked whether the user wanted to explore these topics further.
Time and again, we have seen the safeguards of AI tools fail when young people are involved. Until that changes, pausing sales of AI-powered teddy bears seems like the right call.
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit in April against OpenAI, alleging that it violated Ziff Davis’ copyrights in the training and operation of its AI systems.




