AI chatbots have become a big part of our lives since they burst onto the scene more than three years ago. OpenAI, for example, says ChatGPT has around 700 million weekly active users, many of whom are “young people.” A UK study from July 2025 found that nearly two-thirds (64%) of children use such tools, and a similar share of parents worry that their kids think AI chatbots are real people.
While this may be a slight overreaction, legitimate safety, privacy and psychological concerns are emerging around youngsters’ frequent use of the technology. As a parent, you can’t assume that every platform provider has effective, child-appropriate safeguards in place. Even where protections do exist, enforcement isn’t necessarily consistent, and the technology itself is evolving faster than policy.
What are the risks?
Our children use generative AI (GenAI) in diverse ways. Some value its help with homework. Others might treat the chatbot like a digital companion, asking it for advice and trusting its responses as they would a close friend’s. There are several obvious risks associated with this.
The first is psychological and social. Children are going through an intense period of emotional and cognitive development, which makes them vulnerable in various ways. They may come to rely on AI companions at the expense of forming genuine friendships with classmates, exacerbating social isolation. And because chatbots are designed to please their users, they may serve up output that amplifies any difficulties young people are going through, such as eating disorders, self-harm or suicidal thoughts. Time spent with an AI companion can also edge out not only human friendships, but hours that should be spent on homework or with the family.
There are also risks around what a GenAI chatbot may allow your child to access on the internet. Although the main providers have guardrails designed to block links to inappropriate or dangerous content, these are not always effective. In some cases, chatbots may override their own internal safety measures and share sexually explicit or violent content, for example. If your child is more tech-savvy, they may even be able to ‘jailbreak’ the system with carefully crafted prompts.
Hallucinations, where a chatbot confidently presents fabricated information as fact, are another concern. For corporate users, they can create significant reputational and liability risks. For kids, the danger is believing convincingly presented false information and then making unwise decisions on medical or relationship matters as a result.
Finally, it’s important to remember that chatbots are also a potential privacy risk. If your child enters sensitive personal or financial information in a prompt, it will be stored by the provider. Once stored, it could theoretically be accessed by a third party (e.g., a supplier/partner), stolen by a cybercriminal, or regurgitated to another user. Just as you wouldn’t want your child to overshare on social media, the best course of action is to minimize what they share with a GenAI bot.
Some red flags to look out for
Surely the AI platforms understand these risks and are taking steps to mitigate them? Well, yes, but only up to a point. Depending on where your children live and which chatbot they’re using, there may be little in the way of age verification or content moderation going on. The onus, therefore, is on parents to get ahead of any threats through proactive monitoring and education.
First up, here are a few signs that your children may have an unhealthy relationship with AI:
- They withdraw from time normally spent with friends and family
- They become anxious when not able to access their chatbot, and may try to hide signs of overuse
- They talk about the chatbot as if it were a real person
- They repeat obvious misinformation back to you as “fact”
- They ask their AI about serious conditions such as mental health issues (which you find out about by accessing conversation history)
- They access adult/inappropriate content served up by the AI
Time to talk
In many jurisdictions, AI chatbots are restricted to users over the age of 13. But given patchy enforcement, you may have to take matters into your own hands. Conversations matter more than controls alone. For the best results, consider combining technical controls with education and advice, delivered in an open and non-confrontational manner.
Whether they’re at school, at home or taking part in an after-school club, your children have adults telling them what to do every minute of their waking lives. So try to frame your outreach about AI as a two-way dialog, where they feel comfortable sharing their experiences without fear of punishment. Explain the dangers of overuse, hallucinations, data sharing, and over-relying on AI for help with serious problems. Help them to understand that AI bots aren’t real people capable of thought: they’re machines designed to be engaging. Teach your kids to think critically, always fact-check AI output, and never let a session with a machine substitute for a chat with their parents.
If necessary, combine that education piece with a policy for limiting AI use (just as you might limit use of social media, or screen time in general) and restricting use to age-appropriate platforms. Switch on parental controls in the apps they use to help you monitor usage and minimize risk. Remind your kids never to share personally identifiable information (PII) with AI and tweak their privacy settings to reduce the risk of unintentional leaks.
Our children need humans at the center of their emotional world. AI can be a useful tool for many things. But until your kids develop a healthy relationship with it, their usage should be carefully monitored. And it should never replace human contact.