Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.
Bondu’s toy is marketed as:
“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”
What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.
What Bondu has to say about safety does not mention security or privacy:
“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”
Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?
The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.
Potentially, these chat logs could have been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.
Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, that the company saw “no evidence” of other access, and that it had brought in a security firm and added monitoring.
In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative to screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents “don’t feel like you have to be sitting them in front of a TV.”
So, whether it’s a foul mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using artificial intelligence in children’s toys, at least not until they can be used safely, privately, and securely, and even then, sparingly.
How to stay safe
AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.
- Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
- Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
- Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
- Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
- Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
- Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.