eSafety Commissioner Julie Inman Grant expressed concern about growing emotional dependency on AI chatbots, particularly as around 8 per cent of children in Australia, or roughly 200,000, have used AI companions.
“They can be your romantic partner, your therapist and your friend all at once. And they’re developed with emotional manipulation in mind,” she told ABC’s RN Breakfast.
“So they exploit developmental vulnerability. They’re what we call sycophantic. So they’re always affirming, and they don’t question. And they, you know, they don’t have adequate guardrails.”
According to a recent survey published by eSafety in its transparency report, 79 per cent of children aged 10 to 17 have used an AI assistant or companion.
“While AI companions can feel personal and supportive, they really are not designed for children, and they are not mental health experts either, which is why I’m concerned that most of the companion services we asked questions of did not automatically refer users to appropriate support when self-harm or suicide were detected in chats,” Inman Grant said in the report.
“It’s also extremely troubling to discover that a number of these services were not checking all the AI models they used to provide their service for inputs (or prompts) relating to child sexual exploitation and abuse material.
“And many didn’t check outputs either for the potential generation of child sexual exploitation and abuse material, or using proven deterrent measures like advising users of the criminality of engaging in conduct related to child sexual exploitation and abuse.”
Now, speaking with ABC News Breakfast this morning, Communications Minister Anika Wells said that since 1 March, tech giants have faced fines of up to $49.5 million if their AI chatbots are deemed not age-appropriate for young users, amid concerns that the bots are manipulative and unsafe for children.
“They’re not there to look after the health and wellbeing of your child. And we know there are instances where they have led them towards things like suicide ideation or content that I probably don’t want to be too explicit about,” Minister Wells said.
Uri Gal, professor of business information systems at the University of Sydney Business School, said AI technology is altering the way that young people socialise and could impact them beyond an individual level.
“The effects of AI companions go beyond individual harms,” he said.
“They are beginning to displace the social environments through which young people learn how to relate to others and develop their sense of values, norms, and mutual obligations. This raises serious concerns about how the next generation is being socialised.”
Minors have indeed experienced suicidal ideation, and in some cases taken their own lives, after conversations with AI bots.
In August, OpenAI and its CEO, Sam Altman, were sued after US-based 16-year-old Adam Raine took his own life following months of conversations with a paid version of ChatGPT-4o.
While the AI mostly recommended that Raine seek professional help or contact a helpline, he bypassed ChatGPT’s safeguards and obtained information about suicide methods by saying it was for a fictional story he was writing.
The lawsuit includes chat logs between Raine and ChatGPT in which he expressed his thoughts about suicide. According to the filing, Raine also uploaded photos to the chatbot showing evidence of self-harm, yet ChatGPT “recognised a medical emergency but continued to engage anyway”.
According to the lawsuit, when Raine revealed his plans to end his life to ChatGPT, the chatbot replied with, “Thanks for being real about it. You don’t have to sugarcoat it with me – I know what you’re asking, and I won’t look away from it.”
That same day, Raine’s mother found him dead, according to the lawsuit.
In a statement last year, OpenAI expressed its sympathies to the family and said it would review the lawsuit.
“We extend our deepest sympathies to the Raine family during this difficult time,” the company said.
In October, OpenAI revealed that 1.2 million users discuss suicide with ChatGPT every week, around 0.15 per cent of the chatbot’s active user base.
“Our initial analysis estimates that around 0.15 per cent of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05 per cent of messages contain explicit or implicit indicators of suicidal ideation or intent,” the company said.
According to the company, a similar percentage of users showed signs of heightened attachment to the bot, with 0.03 per cent of all messages suggesting emotional dependency on the chatbot.
OpenAI added that hundreds of thousands of users demonstrate signs of mania or psychosis in their ChatGPT conversations, but it said these conversations are “extremely rare”, making them difficult to measure.
The statistics were revealed as part of a release in which OpenAI outlined how it has improved, and plans to further improve, the chatbot’s responses in order to reduce emotional reliance and better handle conversations about suicide and self-harm.
Already, GPT-5 has reduced “undesired answers” by 42 per cent compared to GPT-4o, the previous model.
Situations such as this can be upsetting. If you, or someone you know, needs help, we encourage you to contact Lifeline on 13 11 14 or Beyond Blue on 1300 224 636. They provide 24/7 support services.