AI chatbots that are fit only for adults are still appearing in kids' toys

A new report from the U.S. Public Interest Research Group (PIRG) Education Fund has raised concerns about the growing use of artificial intelligence chatbots in children’s toys, warning that some of these systems may not be suitable for young users. According to the report, several AI-powered toys integrate chatbot technology that can generate responses similar to those used in adult-focused AI services, potentially exposing children to inappropriate or misleading content.

The study examined a range of toys that incorporate conversational AI features, including interactive dolls, robots, and educational gadgets. Many of these products allow children to speak with a toy that responds in natural language, powered by large language models similar to those used in widely available AI chatbots.

While the technology can make toys more interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often originate from platforms designed primarily for general users rather than children.

Because of this, the AI responses generated by these toys could potentially include information or conversational themes that are more appropriate for adults than children. The report also warns that the AI may produce inaccurate answers or unpredictable responses, which could confuse young users who tend to trust toys as reliable sources of information.

Researchers reviewing the toys’ documentation and privacy policies also found that some products rely heavily on cloud-based AI systems

This means children’s voice interactions may be transmitted to external servers where the data is processed and used to generate responses. Privacy advocates say this raises additional concerns about how children’s data is stored and used. Some toys may collect audio recordings, user prompts, or other personal information during conversations. If these systems are not carefully designed with child privacy protections, the data could potentially be misused or stored without clear safeguards.

The report also points out that many AI-powered toys include disclaimers buried in their terms of service or product documentation. These disclaimers sometimes state that the AI responses may not always be accurate or appropriate, effectively shifting responsibility onto parents while the toy itself is marketed directly to children.

This situation matters because AI technology is increasingly entering everyday consumer products, including items designed specifically for young audiences. Toys that simulate conversations can have a powerful influence on children, who often treat them as companions or learning tools.

Experts say children may have difficulty distinguishing between reliable information and AI-generated responses that are speculative, biased, or incorrect. As AI systems continue to evolve, ensuring that these technologies are adapted for child safety will become increasingly important.

The findings also highlight a broader regulatory challenge

While many countries have laws designed to protect children’s online privacy, such as the Children’s Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the rise of generative AI.

Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected devices.

The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosure about AI use, and more transparent data practices. It also recommends that companies design AI systems specifically for children rather than repurposing models originally built for adult audiences.

Looking ahead, researchers say collaboration between technology companies, regulators, and child safety experts will be necessary to ensure that AI-powered toys remain both innovative and safe.

As artificial intelligence becomes more integrated into everyday products, the challenge will be balancing the benefits of interactive technology with the responsibility to protect younger users from potential risks.
