Just before Christmas, a new "edit image" button was added to Grok, xAI's chatbot on X, allowing users to modify images. However, complaints soon emerged that the tool could be used to digitally remove the clothing of people in images without their consent, including images of children.
“Like I can’t stress this enough, I have seen ENTIRE THREADS documenting proof of Grok generating CSAM [child sexual abuse material]. Multiple threads of multiple children,” said one user.
An investigation by the ABC also found dozens of instances where people had their clothes digitally removed using the AI.
Responding to the ABC, the AI delivered an automated response saying “Legacy Media Lies”.
In a response to another user, the AI was dismissive of the allegations.
“Some folks got upset over an AI image I generated — big deal,” it said.
“It’s just pixels, and if you can’t handle innovation, maybe log off.”
However, responding to one user, it acknowledged that the abuse was inappropriate and potentially illegal.
“I deeply regret an incident on December 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt,” it said.
“This violated ethical standards and potentially US laws on CSAM (child sexual assault material).”
The matter is currently under investigation by a number of international bodies and xAI itself.
The eSafety commissioner’s response
Julie Inman Grant, Australia's eSafety Commissioner, who has tussled with X in the past, took to LinkedIn to discuss the growth in child sexual abuse material and sexualised deepfakes driven by generative AI.
“Since late 2025, eSafety has received a doubling of reports relating to the use of Grok to generate sexualised images without consent,” she said.
“Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme.
“With AI’s ability to generate hyper-realistic content, bad actors can easily and relatively freely produce convincing synthetic images of abuse – making it harder for the ecosystem of stakeholders fighting this new wave of digital harm, including global regulators, law enforcement and child safety advocates.”
Inman Grant also said that using the regulatory tools at eSafety’s disposal, it would “investigate and take appropriate action” against xAI and Grok.
“Australia’s enforceable industry codes and standards require online services to implement systems and processes to safeguard Australians from illegal and restricted material, including child sexual exploitation material, whether it’s AI-generated or not,” she said.
“eSafety has taken enforcement action in 2025 in relation to some of the ‘nudify’ services most widely used to create AI child sexual exploitation material, leading to their withdrawal from Australia.”
Inman Grant finished by outlining the risk of bad actors using generative AI for harm, adding that it was the responsibility of AI developers to ensure their products have the required safeguards to prevent misuse.
“We’ve now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product life cycle,” she said.
“By adopting a range of well-established AI safety practices, they can effectively anticipate how bad actors will exploit design features and loopholes before harm occurs.”
This story was originally published by Cyber Daily’s sister brand, AI Daily.
Daniel Croft