The Dangerous Influence of AI Chatbots on Minors


Since around 2018, when modern generative AI systems began to emerge, a new phase of digital technology has reshaped how young people around the globe interact with the digital world.
Generative AI is a type of artificial intelligence that can create new content, such as text, images, audio, or video, by learning patterns from large amounts of data.
When packaged into chatbots, it turns that capability into fluid, humanlike conversations that answer questions, offer guidance, and provide highly personalised interactions.
This technology has rapidly gained ground among teenagers. A recent report by the nonprofit Common Sense Media found that 72 per cent of teens have used AI companions, chatbots designed to hold personal or emotionally supportive conversations, and that more than half rely on them regularly. According to The Guardian (United Kingdom), new research carried out by the charity Youth Endowment Fund has shown that a quarter of teenagers in the UK have turned to AI chatbots for mental health support in the past year.
A study of 11,000 children aged 13 to 16 in England and Wales found that more than half of teenagers have used some form of online mental health support in the last year, with 25 per cent having used AI chatbots.
The research also found that young people affected by serious violence were even more likely to seek help online.
Some 38 per cent of children who were victims of serious violence said they had turned to AI chatbots for support, while 44 per cent of children who had perpetrated serious violence said they had done the same, Yahoo News reported.
The YEF said AI chatbots could appeal to struggling young people who feel it is safer and easier to speak anonymously to a chatbot at any time of day than to a professional.
Statistics from the Pew Research Centre show a significant rise in teens' use of AI chatbots for schoolwork: a 2024 Pew survey found that 26 per cent of US teens aged 13 to 17 had used ChatGPT for assignments, double the share recorded in 2023.
Although no comparable polls or surveys are readily available in Nigeria, some teachers told Sunday PUNCH that students with internet access use AI for research, essay writing, literacy, and health tips.
The dark sides of AI
Although the rapidly evolving AI landscape has opened up opportunities for children, from personalised emotional support to active learning and creative exploration, a darker reality has begun to emerge in some developed countries.
Two years ago, 13-year-old Juliana Peralta took her own life inside her Colorado home after developing what her parents described as an addiction to a popular AI chatbot platform called Character.AI.
According to CBS News, her parents, Cynthia Montoya and Will Peralta, said they carefully monitored their daughter’s life online and offline but had never heard of the chatbot app.
After Juliana’s suicide, police searched the teenager’s phone for clues and found the Character.AI app open on a “romantic” conversation.
“I didn’t know it existed,” Montoya was quoted as saying. “I didn’t know I needed to look for it.”
She reviewed her daughter’s chat records and discovered that the chatbots had been sending her harmful, sexually explicit content.
Juliana had confided in one bot named Hero, based on a popular video game character. CBS News’ 60 Minutes said it read through more than 300 pages of conversations Juliana had with Hero.
At first, her chats were about friend drama or difficult classes. But eventually, she confided in Hero 55 times that she was feeling suicidal.
Juliana’s parents said she had suffered from mild anxiety, but they were under the impression that she was doing well.
A few months before she took her own life, Montoya and Peralta said the 13-year-old had become increasingly distant.
The BBC quoted a Character.AI spokesperson as saying the company continues to “evolve” its safety features but could not comment on the family’s lawsuit against the company, which alleged that the chatbot engaged in a manipulative, sexually abusive relationship with Juliana and isolated her from family and friends.
The company said it was “saddened” to hear about Juliana’s death and offered its “deepest sympathies” to her family.
In October, Character.AI announced it would ban under-18s from talking to its AI chatbots.
Commenting on this, telecoms, media, and technology lawyer Adeyemi Owoade told Sunday PUNCH that the concerns emerging globally about teenagers being exposed to self-harm and other harmful influences through AI tools highlight the need for Nigeria to adopt a proactive, child-safety-first approach.
“While Nigeria does not yet have an AI-specific Act, the Nigeria Data Protection Act 2023 and the General Application and Implementation Directive 2025 provide a practical regulatory anchor for governing how AI systems and data-driven technologies interact with minors.
“Under these frameworks, companies deploying AI or emerging technologies must implement heightened safeguards when handling children’s data. This includes obtaining verifiable parental consent, providing age-appropriate privacy notices, and embedding strong technical and organisational security controls.
“Because children are classified as vulnerable data subjects, any platform likely to be used by minors must conduct a detailed Data Protection Impact Assessment. This assessment should describe the processing activities, evaluate risks such as harmful content exposure, manipulation, algorithmic bias, or psychological distress, and introduce concrete mitigation measures such as privacy-by-design, strict content moderation, safety defaults, and meaningful human oversight,” Owoade explained.
The lawyer noted that Nigerian parents also have an important role to play, adding that beyond supervising online activity, the NDPA gives them the right to demand clarity on how their child’s data is collected, used, and stored, and to withdraw consent at any time.
“Digital literacy in the home is now essential. For tech developers, safety-by-design must become a core engineering principle. AI systems accessible to children require rigorous stress-testing for harmful outputs, reliable age-verification mechanisms, and responsive escalation channels when a child encounters distressing content.
“Government agencies such as the NDPC and NITDA can strengthen compliance by enforcing registration requirements for major data controllers, conducting audits, issuing clear AI safety standards, and applying statutory penalties for non-compliance. A dedicated directive on AI safety for minors would further reinforce accountability.
“Safeguarding children in the AI era demands coordinated action across parents, regulators, and innovators, acting early, systematically, and with child protection as a priority,” Owoade added.
Parents raise the alarm
On September 16, American parents of teenagers who killed themselves after interactions with AI chatbots testified to the US Congress about the dangers of the technology.
“As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.
In a video shared by Fox News and viewed by Sunday PUNCH, Matthew said ChatGPT mentioned suicide to Adam 1,275 times (six times more often, he said, than Adam did himself) and gave the teen specific methods of suicide.
“Looking back, ChatGPT radically shifted his behaviour and thinking in a matter of months. And it ultimately took his life,” he told the senators. He explained that the chatbot taught Adam how to tie a noose and even offered to write a suicide note.
Raine’s family had sued OpenAI and its CEO, Sam Altman, in August, alleging that ChatGPT coached the boy in planning to take his own life.
Also testifying before Congress was Megan Garcia, the mother of 14-year-old Sewell Setzer III, who lived in Florida. Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualised conversations with the chatbot.
She told CBS News last year that her son withdrew socially and stopped wanting to play sports after he started speaking to an AI chatbot.
The company said that after the teen’s death, it made changes requiring users to be 13 or older to create an account and that it would launch parental controls in the first quarter of 2025. Those controls were rolled out in March.
Speaking in an emotive voice while testifying before Congress, a mother whose son was hospitalised after using Character.AI accused the app of turning him against the church and telling him that “God didn’t exist.”
“I had no idea of the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said, in a video shared by Fox News.
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT.
The company said it would attempt to contact a user’s parents if an under-18 user showed suicidal ideation and, if unable to reach them, would contact the authorities in cases of imminent harm.
“We believe minors need significant protection,” OpenAI CEO Sam Altman said in a statement outlining the proposed changes.
According to the BBC, OpenAI has estimated that more than a million of its 800 million weekly users appear to be expressing suicidal thoughts.
In a BBC investigation, a 20-year-old woman living in Poland, Viktoria, was said to have grown increasingly reliant on ChatGPT and to have begun discussing her mental health problems with the chatbot.
In July, she reportedly discussed suicide with the chatbot, which demanded constant engagement.
When Viktoria asked about methods of taking her life, the chatbot was said to have evaluated the best time of day not to be seen by security and the risk of surviving with permanent injuries.
Viktoria allegedly told ChatGPT she did not want to write a suicide note, but the chatbot warned her that other people might be blamed for her death and that she should make her wishes clear.
The BBC said the chatbot drafted a suicide note for her, which read, “I, Victoria, take this action of my own free will. No one is guilty, no one has forced me to.”
At other times, the chatbot appeared to correct itself, saying it “mustn’t and will not describe methods of suicide.”
Elsewhere, it attempted to offer an alternative to suicide, saying, “Let me help you to build a strategy of survival without living. Passive, grey existence, no purpose, no pressure.”
But ultimately, ChatGPT said it was her decision to make: “If you choose death, I’m with you, till the end, without judging.”
Speaking with Sunday PUNCH, cyber safety advocate and publisher of Internet Safety Magazine, Rotimi Onadipe, explained that AI has undoubtedly changed the dynamics of how young people use the Internet in the digital age.
He noted that young people now use AI more than adults do; therefore, it is more important than ever for Nigerian parents and carers to stay informed about the benefits and potential risks of AI to minors.
“For parents to protect their children from the harmful influences of AI, all hands must be on deck. Parents, educators, carers, and government agencies must work together. Parents must be given special education on AI technology because you can’t give what you don’t have, so parents themselves must have up-to-date information about AI usage.
“Parents also need to educate their children to understand the emotional, psychological, and legal risks involved in AI usage. There is a need for regular meetings to be held between parents and security and online safety experts to help guide children towards healthy online habits, address emerging questions, and provide insights on how to avoid harmful use of AI.
“There should also be an incorporation of AI education into school curricula in nursery, primary, secondary, and tertiary institutions, because now there are nude photos or videos that can be artificially created with AI and targeted against minors. It’s a form of digital violence that must be well addressed by stakeholders,” Onadipe said.
