Australia’s eSafety Commissioner chalks up an enforcement action win in the fight to protect consumers from online and AI harms.
The UK-based developer of three highly popular “nudify services” has blocked Australians from accessing those services following an enforcement action by the eSafety Commissioner.
According to eSafety, the nudify services provided by the unnamed company were accessed approximately 100,000 times a month by Australian users, and were commonly used for “creation of AI-generated sexual exploitation material of students in Australian schools”.
“We issued the UK-based company with a formal warning in September for non-compliance with Australia’s mandatory codes and standards, which require all members of the online industry to take meaningful steps to tackle the worst-of-the-worst online content, including child sexual abuse material,” eSafety said in a 27 November statement.
The eSafety Commissioner, Julie Inman-Grant, said she was pleased to see Australia’s online safety requirements “continuing to have real-world impacts”.
“We know ‘nudify’ services have been used to devastating effect in Australian schools, and with this major provider blocking their use by Australians, we believe it will have a tangible impact on the number of Australian school children falling victim to AI-generated child sexual exploitation and deepfake image-based abuse,” Inman-Grant said in a post to LinkedIn.
“We also have to address the risks posed by underlying AI models when safety is not built in – which is why safety-by-design is critical through every stage of the AI life cycle and across the AI stack.”
The nudify block coincides with AI model hosting platform Hugging Face taking steps to comply with Australian laws, following warnings from eSafety about the criminal misuse of its hosted models by Australians to create child sexual exploitation material.
“Following engagement from eSafety about compliance concerns, Hugging Face has now changed their terms of service so that all account holders are required [to] take steps to minimise the risks associated with models that they upload, specifically to prevent misuse to generate child sexual exploitation material or pro-terror material,” eSafety said.
Inman-Grant said that open AI models, when used without the appropriate guardrails, can pose “serious risks”.
“There have been instances where models were downloaded from AI model hosting platforms like Hugging Face by Australian users and used to create child sexual exploitation material, including depictions of real children and survivors of sexual abuse,” Inman-Grant said.
“Hugging Face is a ‘gatekeeper’ of distribution to these powerful AI models, much the same way the traditional app stores are for rogue apps (noting Google and Apple have recently deplatformed dangerous chatroulette app OmeTV).”
Inman-Grant said that by targeting both consumer tools and the models that power them, eSafety is “tackling harm at multiple levels of the technology stack”.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.