
Aussie activists call on app stores to remove Grok chatbot over nudify feature

Collective Shout takes Elon Musk’s xAI to task over the production of more than 3 million sexual images, including more than 20,000 images of children.

Wed, 04 Feb 2026
Australian grassroots campaigning movement Collective Shout has called upon the major app stores to remove Elon Musk’s Grok AI from their virtual shelves over its use in creating sexual abuse material of women and girls.

“Grok is a digital weapon of abuse, creating new forms of online sexual assault. The AI tool makes it easy to degrade, debase, harass and intimidate women,” movement director Melinda Tankard Reist said in a 2 February statement.

“Musk said deepfake Grok-enabled images would be investigated and removed where there were laws against them.

“This leaves millions of women and girls in most parts of the world vulnerable to being harvested for deepfake explicit content.

“Women portrayed sexually in honour cultures could be killed.”

Tankard Reist added that Collective Shout’s own staff had been targeted via Grok’s nudify feature, their images altered into “extreme violent abuse material depicting them tortured and murdered and turned into AI deepfake porn videos performing oral sex”.

According to Collective Shout, hosting Grok in any app store is a clear violation of those stores’ terms of service, which generally prohibit child sexual abuse material (CSAM), pornographic content, and any service that facilitates harassment or promotes sexually predatory behaviour.

To date, more than 1,000 concerned individuals have used Collective Shout’s Action Button to email senior executives responsible for app store decisions.

As Tankard Reist notes, other similar nudify apps have been removed from app stores, “so why is Grok still there?”

“This app has been used to generate image-based abuse, sexually explicit deepfake forgeries, pornographic violation and child sexual abuse material,” Tankard Reist said.

“Grok must be removed immediately before even more women and children are traumatised.”

Australia’s eSafety Commissioner, Julie Inman Grant, has previously expressed serious concern over reports that xAI’s Grok has been used to generate sexual abuse deepfakes of people, including minors.

“Since late 2025, eSafety has received a doubling of reports relating to the use of Grok to generate sexualised images without consent,” Inman Grant said last month.

“Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme.

“With AI’s ability to generate hyper-realistic content, bad actors can easily and relatively freely produce convincing synthetic images of abuse – making it harder for the ecosystem of stakeholders fighting this new wave of digital harm, including global regulators, law enforcement and child safety advocates.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
