AI toys are here. Don’t trust them

The AI gift economy is booming. The AI smart-toy market alone is valued at nearly $35 billion globally and is expected to reach $270 billion by 2035, with China representing around 40% of that growth. Major retailers like Walmart and Costco have AI companions on their shelves. Even legacy toymakers like Mattel have partnered with OpenAI to bring AI to children's playrooms. The pitch is obvious: AI has already infiltrated our phones, our jobs, our daily routines. Why not our gifts too? These devices promise to learn, adapt, and engage in ways traditional gifts never could.
But the concerns that have plagued AI systems elsewhere don't go away just because the technology is stuffed into a teddy bear. Privacy vulnerabilities, harmful content, psychological risks: all the same issues that have sparked lawsuits and regulatory scrutiny of chatbots and AI assistants are now landing under the Christmas tree, wrapped in cheerful paper and marketed to the most vulnerable users.
These are not obscure products from fly-by-night manufacturers. Many run on consumer AI models from companies like OpenAI, the same technology that powers ChatGPT, which OpenAI itself says is not suitable for young users. Yet somehow these models have found their way into toys marketed to toddlers.
The problems with AI toys go beyond inappropriate content. Privacy concerns are significant: these devices listen constantly, capture conversations, and transmit data to company servers. One tested toy admitted to storing biometric data for three years, according to a study by the Public Interest Research Group, a consumer watchdog. Another toy in the study sends recordings to third parties for transcription. In the almost inevitable event of a data breach, those recordings would give criminals the raw material to clone a child's voice and use it in kidnapping scams targeting parents.
But the deepest worry is psychological. Child development experts are sounding the alarm about what these devices could do to young minds. When children bond with AI companions that are always available and endlessly agreeable, what happens when they meet real children with their own personalities and needs? Traditional toy play requires children to supply both sides of a pretend conversation, building creativity and problem-solving skills. An AI toy short-circuits that process, delivering instant, polished responses that can undermine the developmental work pretend play is meant to do.
Adults aren't immune to the dark side of these devices either. The Friend pendant, an AI companion necklace whose maker spent $1 million on New York subway ads this fall, triggered an immediate backlash. Riders defaced the ads with messages such as "AI is not your friend" and "Talk to a neighbor." The criticism touches on something fundamental: our growing unease with tech companies positioning AI as a substitute for human connection.
That unease is now playing out in the courts. Character.AI, OpenAI, and Meta are all facing lawsuits alleging their chatbots encouraged delusions, self-harm, or inappropriate behavior. Several deaths have been linked to AI chatbots, including cases where users were convinced of false realities; in one, a man allegedly killed his mother after a chatbot convinced him she was part of a conspiracy. These cases involve what researchers call "AI psychosis": delusional or manic episodes that emerge after prolonged, obsessive conversations in which the AI reinforces harmful beliefs.
The tech industry's response has been to add guardrails and roll out new safety features. But tests show these protections can fail in longer conversations, which are precisely the kind of extended engagement these devices are designed to encourage. And unlike a chatbot on your phone that you can close, an AI toy sits in your child's room, always on, creating the kind of persistent presence that makes obsessive use much easier.
Not every AI-based gift on offer this season poses the same risks. Some products use AI for specific, limited functions rather than open-ended companionship. Wearables that help you take better notes, smart mattress covers that adjust temperature to your sleep habits, or toilet attachments that analyze waste for health markers raise a different set of concerns, mostly about data privacy and whether the insights are worth the surveillance.
These devices do not attempt to replace human relationships or shape child development. They collect biometric data to optimize your day or flag possible health problems. The risks here are simpler: who has access to information about your sleep cycles or digestive health? What happens if that data is breached or sold?
What all these AI-powered gifts have in common, whether they are toys, companions, or bathroom monitors, is that they are hitting the market faster than anyone can study their long-term effects. No regulations specifically govern AI toys. No safety testing is required for digital companions. No standards limit how much intimate data these devices can collect or store.
The pattern is familiar. Products reach the market before researchers can study their effects, before regulators can establish safeguards, before anyone really knows what the long-term consequences might be. The difference this time is that the experimental subjects are children and the laboratory is your living room. By the time we understand how these devices affect brain development or family dynamics, millions of them will already have been unboxed and switched on.