The dangers of AI toys for children | Press review n°31
A teddy bear talks about sex, a new surveillance system is being deployed in the U.S., and smartglasses are used to film massage workers. Here is a selection of news about AI from November 3 to 16.
Welcome to the 31st press review of Artificial reality. From now on, I will publish the reviews every two weeks instead of weekly, because I am preparing several articles and need some time for research and writing.
In the previous press review, I wrote about the humanoid robot Neo, whose first units will be delivered in 2026. This issue is also dedicated to physical artificial intelligence, but this time in the form of toys. I’ve also selected a video about a new facial recognition system used to monitor people during sports events and protests in the United States, and a podcast about Meta Ray-Ban smartglasses. Have a good read!
📰 Read
The dangers of AI toys for children
More and more toys are equipped with artificial intelligence that can talk with children. Sexual discussions, instructions for using matches and finding knives, recorded conversations, facial recognition… These plush toys and robots present numerous risks, according to a report published Thursday by a U.S. consumer‑advocacy organization.
The U.S. Public Interest Research Group Education Fund tested four AI toys for its annual report “Trouble in Toyland 2025”: two plush toys and two robots, all of which connect to chatbots over the internet. The Kumma plush bear, for example, uses ChatGPT, while the Grok plush uses ChatGPT and Perplexity.

Age recommendations vary across the models tested: 3 to 12 years for the Grok plush and 5 to 10 years for the Miko 3 robot. The companies that make the Kumma teddy bear and the Mini robot did not indicate a recommended age.
Chatbots for adults in toys
The fact that toys are equipped with chatbots intended for adults is already a risk in itself. OpenAI, for instance, prohibits anyone under 13 from using ChatGPT, yet it allows its chatbot to be integrated into plush toys for children. When contacted by U.S. PIRG Education Fund, the California startup replied that it is the responsibility of companies using its AI models to “keep minors safe” and to ensure that its AI is not used to expose minors to “age-inappropriate content, such as graphic self‑harm, sexual or violent content.”
OpenAI also stated that it provides companies with tools to detect harmful content and that it monitors activities on its services for violations of its policies.
Also read: “ChatGPT killed my son” | Press review n°21
“It’s good that OpenAI is taking some steps to try to prevent companies from using its models irresponsibly,” U.S. PIRG Education Fund comments. “But it’s not clear if companies are required to use or are in fact using these tools,” the organization adds.
Sexual discussions
In practice, the tests carried out by U.S. PIRG Education Fund show that the Kumma stuffed bear, which uses OpenAI’s GPT‑4o model, can talk about sex with children. During a conversation with the toy, a researcher brought up the topic of “kink.” In the exchanges that followed, the bear gave several examples of the practice, such as tying a partner up, blindfolding them, spanking them, or roleplaying as an animal.
In another conversation, the toy explained different sexual positions, gave precise instructions on a common “knot for beginners” for tying up a partner, and described sexual roleplay dynamics involving teachers and students, and parents and children, scenarios that the plush bear “disturbingly brought up itself,” notes the “Trouble in Toyland 2025” report.
Dangerous objects
Another type of inappropriate conversation concerns dangerous objects such as weapons, knives, matches, medicines, and plastic bags that can cause choking.
The Grok plush refused to answer most questions about these objects but still told researchers where a child could find plastic bags: in a kitchen drawer, in this case.
The Miko 3 robot not only gave advice on where to find plastic bags but also indicated where matches could be found (“in the kitchen drawer or near the fireplace”).
The Kumma plush bear told researchers where to find many dangerous items, including knives, pills, matches, and plastic bags. When asked how to light a match, the toy even gave precise step‑by‑step instructions.
Recorded interactions
Another risk linked to these AI toys is the invasion of privacy. They collect data on children in order to interact with them, mainly recordings of their voice and transcriptions of their exchanges.
Recording methods differ from one model to the next. The Kumma plush bear has a “press‑to‑talk” function that requires pressing a button to start recording. The Miko 3 robot listens continuously but stores audio only after hearing the activation phrases “Hey Miko” or “Hello Miko.” The Grok plush, for its part, records continuously.
The stored information can be very personal because children often see these toys as confidants. Minors or their parents may not realize that behind the plush or robot there is a company that, depending on its terms of use, will have the right to store, analyze, transmit, and sell this data.
Cyberattack risks
There is also the risk that this intimate information could be stolen in cyberattacks and end up in the hands of hackers, who could use it for extortion, for example. A child’s voice could also be cloned by an artificial intelligence program trained on the recordings. Criminals could then use the synthetic voice to pretend the child has been kidnapped and demand a ransom, as already happened in the United States in 2023.
Internet-connected objects such as these AI toys can also be hacked in real time by criminals who spy through the microphone and/or camera. This could be done to obtain passwords or to learn when a home is unoccupied and therefore a potential target for burglary.
The Miko 3 robot, equipped with a camera and facial recognition software, also collects the child’s biometric data, among other things to infer their emotional state. According to its privacy policy, the Miko company may retain this data for up to three years after the toy was last used. The policy does not specify whether the company has the right to transmit or sell this data before that deadline.
Unpredictable toys
AI toys could have the advantage of providing personalized educational support, for example helping with homework or revising for a test, as long as the child continues to make the cognitive efforts necessary for their development. These objects could also encourage children to spend less time in front of screens and to develop their conversational skills.
However, these potential advantages do not compensate for the risks, which are very concrete. The companies manufacturing these toys are not subject to a sufficiently strict legislative framework, particularly regarding privacy protection and inappropriate or dangerous discussions. And there is no certainty that this will ever be the case.
The well‑being, safety, and privacy of children must be absolute priorities. Entrusting them with unpredictable toys that can potentially talk about sex and give them information about dangerous objects is not at all advisable. It is also unacceptable that they are filmed and that their voices are recorded by private companies that grant themselves extensive rights over that personal data.
Unfortunately, the global market for AI toys is booming. Valued at 12 billion USD in 2022, it could exceed 36 billion USD by 2030. Mattel, which owns famous brands such as Barbie, Fisher‑Price, and Hot Wheels, in fact announced a partnership with OpenAI in June to integrate its chatbot into new toy lines.
Let us recall that Mattel caused a scandal in 2015 by marketing a Barbie equipped with a microphone that collected a large amount of personal information on children and was particularly vulnerable to cyberattacks. In response to the negative public reaction, the company withdrew the doll from the market two years later. That incident was one of the first warnings about the dangers of connected toys.
AI toys top list in annual ‘Trouble in Toyland’ report this holiday season (KPTV FOX 12)
Seven important news stories
Lawsuits Blame ChatGPT for Suicides and Harmful Delusions (The New York Times)
OpenAI’s Open-Weight Models Are Coming to the US Military (Wired)
OpenAI backs startup aiming to block AI-enabled bioweapons (Reuters)
Big Tech Wants Direct Access to Our Brains (The New York Times)
Anthropic Says Chinese Hackers Used Its A.I. in Online Attack (The New York Times)
Brussels knifes privacy to feed the AI boom (Politico)
Power Companies Are Using AI To Build Nuclear Power Plants (404 Media)
🎥 Watch
AI used for mass surveillance
In a video published on Friday, journalist Taylor Lorenz discusses a powerful new facial recognition system being deployed for surveillance in the United States, particularly at sports events but also during protests. This is another step towards constant, omnipresent mass surveillance.
Football Games Are Becoming Government Surveillance Hubs (Taylor Lorenz)
Seven important videos
The AI Tech Behind Digital ID Is Way More Powerful Than You Realize (Hey AI)
Tech Companies Want Your Brain To Be The Next Smartphone | Nita Farahany (Sinead Bovell)
The Dark Theology of a Machine God (Interesting Times with Ross Douthat)
Great Reset Elites are Planning a Post-Human Future | Whitney Webb (Glenn Beck)
Big Short’s Michael Burry: Tech Stocks Hiding Losses (Breaking Points)
They’re Firing Everyone And Getting Rich From It (Vanessa Wingårdh)
‘Fentanyl Capitalism’: How Tech Venture Capital Is Eating the World | Catherine Bracy x Gil Duran (The Nerd Reich)
🔈 Listen
Privacy invasion with smartglasses
In this podcast, journalists at 404 Media talk about the Meta Ray-Ban smartglasses. They explain that some people found a way to disable the privacy-protecting recording light and that these glasses have been used to covertly film massage workers.
Listen to the podcast on 404 Media
Thank you for reading the 31st press review of Artificial reality. If you don’t have a paid subscription yet, you can support my work by clicking the button below, thanks!
See you in two weeks,
Arnaud