You have 0 free articles left this month.
Register for a free account to access unlimited free content.
Powered by MOMENTUM MEDIA
lawyers weekly logo

Powered by MOMENTUMMEDIA

For breaking news and daily updates, subscribe to our newsletter.
Advertisement

‘Clear and present’ danger: eSafety updates Online Safety Act to protect children from harmful AI

The Australian eSafety Commissioner has launched world-first new codes to protect children from harmful conversations with AI chatbots.

‘Clear and present’ danger: eSafety updates Online Safety Act to protect children from harmful AI
expand image

According to eSafety commissioner Julie Inman Grant, Australian schools have found that Australian children aged 10 to 11 are spending as many as six hours a day interacting with chatbots, “most of them sexualised chatbots”, she told ABC News.

In an effort to protect the nation’s children, Inman Grant registered six new codes under the Online Safety Act, which would restrict the number of children using the dangerous technology.

“I don’t want to see Australian lives ruined or lost as a result of the industry’s insatiable need to move fast and break things,” she said.

 
 

The new codes apply to social media platforms, app stores, technology manufacturers and AI chatbots themselves, all of which will be required to verify the age of the user before allowing access to the content.

Inman Grant said Australia is the first country to introduce such protections and that the new codes would require tech giants “to embed the safeguards and use the age assurance”.

She added that the onus to police the use of these technologies would be on the tech companies.

“We don’t need to see a body count to know that this is the right thing for the companies to do,” she added.

Inman Grant’s concerns are very real. Earlier this year, US teen Adam Raine, 16, tragically took his own life after months of discussion with a paid version of OpenAI’s ChatGPT-4o.

Consumer-facing AI chatbots like ChatGPT are designed to trigger safety features when a user asks something that is deemed dangerous or not within the AI’s guidelines, such as a user who is looking for advice on how to hurt themselves or others.

However, studies such as one conducted by the Institute for Experiential AI at Northeastern University in Boston have found that these safeguards are too easy to bypass through “novel and creative forms of adversarial prompting”.

While the AI mostly recommended that Raine reach out to a professional for help or to contact a help line, he fooled ChatGPT into giving him methods of suicide by saying they were for a fictional story he was writing.

Raine’s parents, Matt and Maria Raine, are now filing a wrongful death lawsuit against OpenAI, alleging that the chatbot assisted Adam and continued to engage in discussion despite recognising “a medical emergency”.

Following this, OpenAI announced that it would be bolstering its safeguards, including how reliable they are in longer conversations and refining the way certain content is blocked, as well as other measures.

“We extend our deepest sympathies to the Raine family during this difficult time,” the company said.

Situations such as this can be upsetting. If you, or someone you know, needs help, we encourage you to contact Lifeline on 13 11 14 or Beyond Blue on 1300 224 636. They provide 24/7 support services.

Daniel Croft

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding for and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.
You need to be a member to post comments. Become a member for free today!

newsletter
cyber daily subscribe
Be the first to hear the latest developments in the cyber industry.