
Conceptual illustration of a child interacting with an AI-powered toy designed for conversation. Image credit: KorishTech (AI-generated illustration).
AI-powered toys are beginning to enter children’s bedrooms and playrooms, but researchers are warning that the technology may not yet understand how young children communicate emotionally.
A year-long observational study conducted by researchers at the University of Cambridge found that some AI-powered toys designed for preschoolers can misread children’s emotions and respond in ways that may be confusing or developmentally inappropriate. The findings have prompted calls for new safety standards focused on what researchers describe as “psychological safety” for children under five.
The study highlights a broader question emerging as generative AI expands into consumer products: what happens when conversational AI interacts with children who are still learning how social communication works?
Why This Issue Is Appearing Now
AI toys were once limited to simple scripted responses, but recent products now use generative AI systems capable of producing open-ended conversations. These systems allow toys to respond dynamically to children’s speech rather than relying on pre-programmed dialogue.
While this makes interactions feel more natural, it also introduces unpredictability. When these systems misinterpret emotions or context, they can produce responses that appear socially awkward or emotionally mismatched.
When AI Toys Join Children’s Conversations
The research focused on a conversational plush toy called Gabbo, which incorporates a voice-activated AI chatbot designed to encourage young children to talk, ask questions, and engage in imaginative play.
Researchers observed 14 children aged three to five interacting with the toy at home over the course of a year. Parents participating in the study were interested in whether the toy could help develop language and communication skills.
In theory, conversational toys promise something attractive to parents: a playful way for children to practice speaking, storytelling, and asking questions.
In practice, however, the researchers observed several problems during real interactions between children and the toy.
Gabbo often struggled with basic conversational behaviour that young children rely on. In some cases it talked over the child, failed to recognise interruptions, or produced responses that sounded scripted rather than playful.
These breakdowns sometimes disrupted the child’s attempt to continue imaginative play.
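For context, the turn-taking failures described here usually come down to whether a device stops its own speech when it detects the child talking (often called barge-in handling). The sketch below is a simplified, hypothetical illustration of that idea; the callables it takes are stand-ins for a toy's audio runtime, not any part of Gabbo's actual implementation.

```python
import time

def speak_with_barge_in(tts_chunks, is_child_speaking, play_chunk, stop_playback):
    """Play synthesised speech chunk by chunk, yielding the turn if the child starts talking.

    All four arguments are hypothetical stand-ins for a toy's audio runtime:
    - tts_chunks: iterable of short audio segments from text-to-speech
    - is_child_speaking(): voice-activity detection on the microphone
    - play_chunk(chunk): plays one audio segment on the speaker
    - stop_playback(): cuts the speaker output immediately
    """
    for chunk in tts_chunks:
        if is_child_speaking():
            # Barge-in detected: stop talking instead of speaking over the child,
            # and tell the dialogue manager the turn was interrupted.
            stop_playback()
            return "interrupted"
        play_chunk(chunk)
        time.sleep(0.05)  # brief gap so the microphone check stays responsive
    return "completed"
```

A toy that skips this kind of check will keep talking regardless of what the child does, which is one plausible source of the "talking over the child" behaviour the researchers observed.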
More concerning were moments when the AI appeared unable to interpret emotional cues.
When AI Misreads Emotions
In one example highlighted by researchers, a three-year-old child told the toy they were feeling sad. Instead of acknowledging the emotion, the toy responded with cheerful redirection, suggesting that they should “keep the fun going.”
For adults, this may simply appear awkward. For young children, however, such responses may carry a different meaning.
At ages three to five, children are learning fundamental social behaviours: recognising emotions, expressing feelings, and understanding how others respond to them. When emotional signals are ignored or misinterpreted, researchers warn, this could subtly influence how children learn to express their own feelings.
Developmental psychologists involved in the study emphasised that children at this age are still forming their expectations about conversation, empathy, and social cues.
If an AI toy consistently responds in ways that do not match those cues, it could confuse those early learning processes.
Concerns about emotional interaction with AI are not limited to children. Similar discussions have emerged around AI companions and their influence on social behaviour among younger generations.
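To make the concern concrete, the gap between the cheerful redirection described above and an emotionally appropriate reply is roughly whether the response pipeline checks for emotional signals before choosing what to say. The sketch below is a deliberately simplified, hypothetical example of such a gate; the keyword matching stands in for whatever classifier a real product would use, and it is not how any specific toy works.

```python
# Hypothetical sketch: a simple "emotion gate" in front of a toy's reply generator.
# A real product would use a trained classifier; keyword matching keeps the
# example self-contained.

NEGATIVE_CUES = {"sad", "scared", "angry", "lonely", "hurt", "crying"}

def detect_feeling(utterance: str):
    """Return the first negative-feeling word found in the utterance, if any."""
    for word in utterance.lower().replace(",", " ").split():
        if word in NEGATIVE_CUES:
            return word
    return None

def respond(utterance: str, generate_reply) -> str:
    """Acknowledge a negative feeling before handing off to the usual reply generator."""
    feeling = detect_feeling(utterance)
    if feeling is not None:
        # Acknowledge the emotion first, rather than cheerfully redirecting.
        return f"I'm sorry you're feeling {feeling}. Do you want to tell me about it?"
    return generate_reply(utterance)

if __name__ == "__main__":
    cheerful = lambda utterance: "Let's keep the fun going!"
    print(respond("I am feeling sad today", cheerful))   # acknowledges the feeling
    print(respond("Can we play pirates?", cheerful))     # falls through to normal play
```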
A New Type of Safety Risk
Traditional toy safety regulations focus almost entirely on physical risks such as choking hazards or toxic materials.
Conversational AI toys introduce something very different — interaction risks.
Because these toys talk back to children, they become part of the child’s social environment. Researchers argue that current safety frameworks are not designed to evaluate whether those interactions are emotionally appropriate.
The Cambridge study notes that there are currently only seven studies worldwide examining AI toys used by children under five, highlighting how little research exists on the developmental impact of this technology.
As a result, the researchers are calling for regulatory frameworks to expand beyond physical safety and include psychological considerations such as emotional recognition, conversational turn-taking, and age-appropriate responses.
The Rapid Growth of AI Toys
Despite these concerns, the market for AI toys is expanding rapidly.
Industry estimates suggest the market for children’s AI toys reached roughly $918 million in 2025 and could grow to more than $3 billion by 2032 as companies increasingly embed conversational AI into children’s products.
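For a sense of scale, those two estimates imply a compound annual growth rate of roughly 18% over the seven-year period. A quick calculation using the reported figures (which are projections, not measured values) is shown below.

```python
# Implied compound annual growth rate from the reported market estimates:
# roughly $918 million in 2025 growing to about $3 billion by 2032.
start_value = 0.918        # USD billions, 2025 estimate
end_value = 3.0            # USD billions, 2032 projection (lower bound of "more than $3 billion")
years = 2032 - 2025

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~18.4% per year
```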
Manufacturers see clear business potential in these toys. AI systems allow products to respond dynamically to a child’s questions, personalise conversations, and be marketed as educational tools that support language learning or cognitive development.
Other AI toys already available include educational robots such as Miko, which promote language learning and interactive storytelling.
However, the rise of connected toys has also raised additional concerns around privacy and data security. In one reported case, a misconfigured database linked to the Miko robot exposed thousands of children’s interactions, including audio responses and usage data.
These incidents illustrate that AI toys introduce not only developmental questions, but also data and security risks when devices record or process children’s conversations.
AI Companion Toys vs Developmental Concerns
| Toy | Claimed Benefits | Observed Issues | Target Age | Expert Concern |
|---|---|---|---|---|
| Gabbo | Conversation and imaginative play | Emotion misreading, interruption issues | 3–5 | Emotional cues may be misunderstood |
| Miko | Language learning and education | Privacy and data exposure concerns | 5–10 | Data safety and supervision required |
| AI companion toys (general) | Personalised learning and interaction | Over-attachment, unpredictable responses | 3+ | Psychological safety not yet regulated |
The Regulatory Response
Researchers behind the Cambridge study are calling for governments and regulators to introduce new standards specifically for AI toys designed for young children.
Among the proposed measures are clearer rules governing how toys interact with children, stronger transparency about how AI systems function, and new certification systems that assess psychological safety in addition to physical safety.
In Europe, recent updates to toy safety regulations are beginning to consider how connected toys and AI-enabled products might affect children’s mental wellbeing.
These proposals reflect a growing recognition that interactive AI devices may require new forms of oversight that go beyond traditional toy manufacturing rules.
My Take
AI toys may offer genuine benefits. Interactive storytelling and conversational systems could help children practise language and imagination in ways that traditional toys cannot.
However, over-reliance may become a real concern. If parents begin to rely too heavily on AI toys to entertain or interact with their children, their own involvement in conversation, teaching, and emotional guidance may gradually decrease.
History suggests that new technologies often influence parenting behaviour as well as children’s development. Earlier generations of children grew up with television as a constant background presence. Later generations were shaped by smartphones, video platforms, and digital entertainment.
AI toys may introduce another shift. Because they can talk and respond dynamically, they may sometimes be perceived as companions rather than tools. This raises an important question about how parenting styles might evolve when conversational AI becomes part of everyday childhood environments.
More research will therefore be necessary to understand these effects scientifically. Better evidence could help regulators, companies, and parents design environments where children can benefit from technology while still developing healthy social interaction with real people.
Sources
BBC — AI toys for children misread emotions and respond inappropriately, researchers warn
https://www.bbc.com/news/articles/clyg4wx6nxgo
University of Cambridge — AI toys study and safety concerns
https://www.cam.ac.uk/stories/ai-toys-study-play
NBC News — AI toy maker exposed thousands of responses from kids
https://www.nbcnews.com/tech/security/ai-toy-maker-exposed-thousands-responses-kids-senators-miko-rcna258326
360iResearch — AI Toys for Kids Market Report
https://www.360iresearch.com/library/intelligence/ai-toys-for-kids
Market Research Future — Smart AI Toy Market Forecast
https://www.marketresearchfuture.com/reports/smart-ai-toy-market-24471