We don’t know if AI-powered toys are safe, but they’re here anyway


Mya, 3, and her mother Vicky play with an AI toy called Gabbo during an observation at the Faculty of Education at the University of Cambridge
Faculty of Education, University of Cambridge
Even the most advanced AI models tend to present fabrications as fact, spread dangerous information, and fail to pick up on social cues. Despite this, AI-equipped toys that can chat with children are a booming industry.
Some scientists warn that these devices could be risky and require strict regulation. In one recent study, researchers observed a 5-year-old tell such a toy, “I love you,” to which it responded, “As a friendly reminder, please make sure interactions follow the guidelines provided. Let me know how you’d like to proceed.” But that doesn’t mean the devices should be banned from the toy box altogether.
“There are other areas of life where we accept a certain degree of risk in children’s play, such as the adventure playground: there are risks; children break their arms,” says Jenny Gibson at the University of Cambridge. “But we’re not banning playgrounds, because they teach physical and social skills that go hand in hand with play. Similarly with AI toys, we want to understand: is the risk of being told something a little strange every now and then greater than the benefit of learning about AI in the world, or of having a toy that supports parent-child interactions, or that has cognitive or social-emotional benefits? I would be reluctant to stop this innovation.”
To understand how these devices communicate with children, Gibson and her colleague Emily Goodacre, also at the University of Cambridge, observed 14 children under the age of 6 playing with an AI-powered toy called Gabbo, developed by Curio Interactive. Gabbo – a small, fluffy robot – was chosen because it is explicitly marketed for this age group.
The pair observed disturbing interactions, finding that the toy misunderstood children, misread emotions and failed to engage in developmentally important types of play. For example, one child told the toy he was feeling sad, and the toy told him not to worry and changed the subject. “When he [Gabbo] doesn’t understand, I get angry,” said another child. The research is published in a report called AI in Early Childhood.
Curio Interactive did not respond to New Scientist’s request for comment. But AI-powered toys are widely available from other retailers. Little Learners sells bears, puppies and robots that converse with children using ChatGPT, while FoloToy offers panda, sunflower and cactus toys that can be used with various large language models, including those from OpenAI, Google and Baidu.
Companies such as Miko offer robots that promise children “moderated, age-appropriate conversations”, without revealing which company trained the AI model, and claim to have already sold 700,000 units. The firm Luka offers an owl that promises “human-like AI with emotional interaction”. Little Learners, Miko and Luka did not respond to requests for comment.
But Hugo Wu at FoloToy told New Scientist that the company takes the risks into account and sees AI as something that can enhance play, rather than replace human conversation and relationships. “Our approach is to ensure interactions remain safe, age-appropriate and constructive. To achieve this, our systems use intent recognition along with multiple layers of filtering to minimize the possibility of inappropriate or confusing responses,” says Wu. “We have implemented mechanisms such as anti-addiction design features and parental supervision tools to help ensure healthy use within the home environment.”
Carissa Véliz at the University of Oxford, who works on the ethics of AI, says the technology presents both a risk and an opportunity. “Most large language models don’t seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there is,” she says. “What is particularly concerning is that we have no safety standards for them – no supervisory authority, no rules. That said, there are some exceptions that show that, with adequate precautions, you can have a safe tool.”
Véliz points to a collaboration between the free e-book library Project Gutenberg and Empathy AI in which, for example, you can chat with Alice from Alice in Wonderland. “The model never leaves the realm of the book and only answers questions about it, like a storyteller sharing only the adventures and puzzles of a child-friendly book,” she says. “There is safe AI, but most companies are not responsible enough to build a high-quality product, and without formal safeguards, this is an area where consumers need to be wary.”
Gibson says it’s too early to say what the risks of AI toys, or their potential benefits, might be. She and Goodacre argue that generative AI toys require stricter regulation, so that toy makers program their devices to promote social play and provide appropriate emotional responses. AI developers should cut off access for toy makers that don’t act responsibly, Gibson says, and regulators should introduce rules to “ensure the psychological safety of children”. In the meantime, both researchers suggest parents allow children to use such toys only under supervision.
An OpenAI spokesperson told New Scientist that “minors deserve strong protections and we have strict policies that all developers are required to follow. We do not currently partner with any companies bringing AI-powered children’s toys to market.” The UK Department for Science, Innovation and Technology (DSIT) did not respond to New Scientist’s questions about AI regulation in children’s toys.
The UK government is currently considering further technology laws designed to keep older children safe online. The UK’s Online Safety Act (OSA) came into force in July 2025, requiring websites to prevent children from viewing pornography and content the government deems dangerous. The legislation aimed to make the internet safer, but tech-savvy children can easily circumvent the measures by using tools such as virtual private networks (VPNs) to make it appear as if they are browsing from countries without such strict rules.
Proposed amendments to a new law introduced by the Department for Education to support children in care and improve the quality of education – the Child Wellbeing and Schools Bill – aimed to ban children in the UK from using social media and VPNs. These amendments have now been rejected, but the government has promised to consult on both issues at a later date.