OpenAI and its CEO Sam Altman are facing a lawsuit alleging that the company’s ChatGPT generative AI chatbot assisted a teenager in taking his own life.
Earlier this year, 16-year-old Adam Raine tragically took his own life after months of discussion with a paid version of ChatGPT-4o.
Consumer-facing AI chatbots like ChatGPT are designed to trigger safety features when a user asks something deemed dangerous or outside the AI’s guidelines, such as advice on how to harm themselves or others.
However, studies such as one conducted by the Institute for Experiential AI at Northeastern University in Boston have found that these safeguards are too easy to bypass through “novel and creative forms of adversarial prompting.”
While the AI mostly recommended that Adam reach out to a professional for help or contact a helpline, he fooled ChatGPT into giving him methods of suicide by saying they were for a fictional story he was writing.
Now, Adam’s parents, Matt and Maria Raine, have filed a wrongful death lawsuit against OpenAI.
The lawsuit includes chat logs between Adam and ChatGPT in which he expressed his thoughts about suicide, and alleges that Adam also uploaded photos to the chatbot showing instances of self-harm. Despite this, ChatGPT "recognised a medical emergency but continued to engage anyway," the filing states.
According to the lawsuit, when Adam revealed his plans to end his life to ChatGPT, the chatbot replied with "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."
That same day, Adam’s mother found him dead, according to the lawsuit.
OpenAI expressed its sympathies to the family and told the BBC that it is reviewing the court filing.
"We extend our deepest sympathies to the Raine family during this difficult time," the company said.
Additionally, in an August 26 blog post, OpenAI said that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us. ChatGPT is trained to direct people to seek professional help.”
OpenAI added that it would bolster its safeguards, including making them more reliable in longer conversations and refining the way certain content is blocked.
The company also plans to make emergency services and expert help easier to reach, have GPT-5 de-escalate crisis situations before they worsen, improve protections for children and teenagers, and make it easier for those showing signs of crisis to reach out to loved ones, friends and emergency contacts.
“We are deeply aware that safeguards are strongest when every element works as intended,” wrote OpenAI.
“We will keep improving, guided by experts and grounded in responsibility to the people who use our tools—and we hope others will join us in helping make sure this technology protects people at their most vulnerable.”
Situations such as this can be upsetting. If you, or someone you know, needs help, we encourage you to contact Lifeline on 13 11 14 or Beyond Blue on 1300 224 636. They provide 24/7 support services.