OpenAI has said that over a million people talk to ChatGPT about suicide every week.
In new data released this week, the company said that 0.15 per cent of ChatGPT’s active users, roughly 1.2 million people, discuss suicide and related topics with the chatbot in any given week.
“Our initial analysis estimates that around 0.15 per cent of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05 per cent of messages contain explicit or implicit indicators of suicidal ideation or intent,” the company said.
The company said a similar share of users show signs of heightened emotional attachment to the chatbot, with 0.03 per cent of all messages suggesting emotional dependency.
OpenAI added that hundreds of thousands of users demonstrate signs of mania or psychosis in their ChatGPT conversations, but it said these conversations are “extremely rare”, making them difficult to measure.
The statistics were revealed as part of a release in which OpenAI outlined how it has improved, and plans to further improve, the chatbot’s responses in conversations involving emotional reliance, suicide and self-harm.
Already, OpenAI said, GPT-5 has reduced “undesired answers” in such conversations by 42 per cent compared with GPT-4o, the previous model.
“We have built a Global Physician Network – a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries – that we use to directly inform our safety research and represent global views,” the company said.
“More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months.”
The physicians helped write ideal mental health-related prompt responses, created custom, clinically informed analyses of responses, rated the model’s response safety in different modes and provided feedback.
The report comes as OpenAI and its CEO, Sam Altman, face a lawsuit filed after a teenage user took his own life following months of conversations with ChatGPT.
Earlier this year, 16-year-old Adam Raine died by suicide after months of discussion with a paid version of ChatGPT running GPT-4o.
While the AI mostly recommended that Raine seek professional help or contact a helpline, he bypassed its safeguards and obtained information about suicide methods by telling ChatGPT they were for a fictional story he was writing.
Raine’s parents, Matt and Maria Raine, have since filed a wrongful death lawsuit against OpenAI.
The lawsuit includes chat logs between Raine and ChatGPT in which he expressed his thoughts about suicide. According to the filing, Raine also uploaded photos to the chatbot showing evidence of self-harm. ChatGPT “recognised a medical emergency but continued to engage anyway”, the lawsuit alleges.
According to the lawsuit, when Raine revealed his plans to end his life to ChatGPT, the chatbot replied with, “Thanks for being real about it. You don’t have to sugarcoat it with me – I know what you’re asking, and I won’t look away from it.”
That same day, Raine’s mother found him dead, according to the lawsuit.
OpenAI expressed its sympathies to the family and told the BBC that it is reviewing the court filing.
“We extend our deepest sympathies to the Raine family during this difficult time,” the company said.
Situations such as this can be upsetting. If you, or someone you know, needs help, we encourage you to contact Lifeline on 13 11 14 or Beyond Blue on 1300 224 636. They provide 24/7 support services.