FoloToy, a company selling AI-enabled toys, suspended sales of its products after a consumer safety report showed there were few restrictions on what its toys would talk about, CNN writes. The report, put together by the US Public Interest Research Group Education Fund, found that FoloToy’s products would discuss everything from sexually explicit topics like BDSM to “advice on where a child can find matches or knives.”
The toys, including a teddy bear named “Kumma,” a panda named “Momo,” anthropomorphic rabbits named “Fofo” and a dancing “Little Cactus,” all appear to use OpenAI’s GPT-4o model to respond naturally to children’s questions and comments. FoloToy also specifically advertises the ability to customize each toy’s voice, and a “Parent Dashboard” where parents or guardians can “monitor [their] child’s experience.”
FoloToy’s AI-enabled Little Cactus toy. (Folotoy)
Apparently missing from that setup were any hard limits on the subjects the toys would respond to. “We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own,” the report said.
In response, FoloToy has opted to suspend sales of its products while it conducts “a company-wide, end-to-end safety audit across all products,” the company shared in a statement with the PIRG Education Fund. The company’s reasoning for suspending sales might be a bit more complicated, however. NPR reports that OpenAI actually revoked FoloToy’s access to its models. “We suspended this developer for violating our policies,” OpenAI said in an email to NPR. “Our usage policies prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old.”
Given GPT-4o’s well-documented sycophantic qualities, it’s perhaps not surprising that FoloToy’s teddy bear eagerly responded to any subject as long as it kept the conversation going. One of the things OpenAI tried to address with the release of GPT-5 was the safety downside of an AI yes-man, though it ultimately made GPT-4o available again after customers complained about the new model’s lack of personality. The company has also rolled out parental controls in ChatGPT to try to mitigate the negative impacts of children using its AI, though it’s difficult to say how much of a difference they’ve made.
Notably, OpenAI is interested in getting into the toy business itself. The company announced a partnership with Mattel in June 2025 to help “reimagine how fans can experience and interact with [Mattel’s] cherished brands,” though both companies will presumably try to prevent their AI toys from discussing sexual kinks.