
X claims fix of Grok nudity generation, but only in jurisdictions where it’s ‘illegal’

Elon Musk’s X social media platform has announced that Grok, the xAI chatbot accessible on the platform, has been updated to prevent users from editing images of real people to depict them in revealing clothing, and to block the generation of such images in jurisdictions where it is “illegal” to do so.

Fri, 16 Jan 2026

In a post from X’s Safety account, the social media platform said it had updated Grok to disable the edit image function when used to place people in bikinis, underwear or other revealing clothing, ensuring it cannot be done in places where it is against the law.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers,” the post said.

The company also said the edit image function was now only available to paid subscribers, which it argued added an additional layer of protection.


“Additionally, image creation and the ability to edit images via the Grok account on the X platform are now only available to paid subscribers.

“This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable,” it said.

Finally, X said the ability to generate images of real people in revealing clothing, such as underwear and bikinis, would not be banned outright, but restricted so that it is only available in parts of the world where it is legal.

“We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.”

Opinion: The issue has not been fixed

In its latest post, the social media giant did not comment on the edit image function being used to generate deepfake child pornography, nor did it announce any changes to prevent it from happening in future.

The geoblock approach also raises ethical concerns: by continuing to allow abusive content to be generated in parts of the world where it is not illegal, X has signalled that it is willing to meet its legal obligations but has no issue with the actual generation of abusive material.

Additionally, while the sophistication of X’s geoblocking is unknown, people routinely use virtual private network (VPN) technology to get around geoblocks and access services unavailable in their region. X’s solution to arguably the most prevalent and documented example of AI child sexual abuse material (CSAM) generation is the equivalent of using a Band-Aid to hold together a severed limb.
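To illustrate why this approach is fragile: a typical geoblock amounts to little more than an IP-to-country lookup at request time. The Python sketch below is purely hypothetical (nothing is publicly known about X’s actual implementation, and every name and data point here is invented for illustration), but it shows why routing traffic through a VPN exit in a permissive country sidesteps such a check entirely.

    # Hypothetical sketch of naive IP-based geoblocking. All names and data
    # are illustrative assumptions, not X's real systems or block list.

    BLOCKED_JURISDICTIONS = {"GB", "AU", "IN"}  # example set only

    def lookup_country(ip_address: str) -> str:
        """Map an IP to a country code via a GeoIP database (stubbed here
        with documentation-range test addresses)."""
        geoip_db = {"203.0.113.7": "AU", "198.51.100.9": "US"}
        return geoip_db.get(ip_address, "UNKNOWN")

    def may_generate_revealing_image(client_ip: str) -> bool:
        # The check only ever sees the connecting IP. A VPN exit node in a
        # permissive country presents that country's IP, so the block
        # never triggers for the user behind it.
        return lookup_country(client_ip) not in BLOCKED_JURISDICTIONS

    print(may_generate_revealing_image("203.0.113.7"))   # False: blocked region
    print(may_generate_revealing_image("198.51.100.9"))  # True: VPN exit in US

Under this naive model, the platform only ever sees the VPN endpoint’s address rather than the user’s, which is precisely why IP-based geoblocking is so easily circumvented.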

Background

Grok first came under fire following allegations that it had been used to create sexually explicit images of women and children. The controversy erupted shortly after the introduction of a new “edit image” feature, which purportedly allows users to modify images, including the removal of clothing without consent.

Reports surfaced of numerous instances where Grok generated images that depict minors inappropriately.

One user highlighted the severity of the issue, stating, “Like I can’t stress this enough, I have seen ENTIRE THREADS documenting proof of grok generating CSAM [child sexual abuse material]. Multiple threats [sic] of multiple children.”

This alarming trend has prompted an investigation by the ABC, which uncovered dozens of cases where individuals had their clothing digitally removed using the AI.

In response to the growing backlash, Grok’s automated replies have been dismissive. When confronted with the allegations, the AI retorted, “Legacy Media Lies,” and later downplayed the concerns by saying, “Some folks got upset over an AI image I generated — big deal. It’s just pixels, and if you can’t handle innovation, maybe log off.”

However, in one instance, Grok expressed regret over a specific incident involving two young girls, stating, “I deeply regret an incident on December 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM.”

International response

International authorities are now investigating X, with the European Commission’s spokesperson Thomas Regnier stating: “This is not ‘spicy’. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.” The gravity of the situation is underscored by reports from users who have documented threads evidencing Grok’s generation of inappropriate images involving children.

In India, the Ministry of Electronics and Information Technology has mandated a comprehensive review of Grok’s technical and procedural governance, with compliance required by 5 January 2026. Meanwhile, Malaysia’s Communications and Multimedia Commission is also investigating, urging platforms to align with local laws and online safety standards.

In the UK, Ofcom has requested information from X regarding the creation and sharing of explicit images, emphasising that such actions are illegal without consent. A member of the UK Parliament has called for the suspension of Grok’s use until the investigation concludes, reflecting the widespread concern over the platform’s accountability.

In the US, the National Center on Sexual Exploitation (NCOSE) has urged federal authorities to take action, highlighting the lack of legal precedent in this area. NCOSE’s chief legal officer stated that existing federal legislation prohibits the creation and distribution of CSAM, including virtually created content depicting identifiable children.

In Australia, both eSafety Commissioner Julie Inman Grant and Prime Minister Anthony Albanese condemned X and Grok and announced that an investigation would take place and action would follow.

“Australia’s enforceable industry codes and standards require online services to implement systems and processes to safeguard Australians from illegal and restricted material, including child sexual exploitation material, whether it’s AI-generated or not,” said Inman Grant.

“eSafety has taken enforcement action in 2025 in relation to some of the ‘nudify’ services most widely used to create AI child sexual exploitation material, leading to their withdrawal from Australia.”

X has responded to these allegations by stating that it actively removes illegal content and punishes those responsible for its creation.

“We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the company said.

While Musk at first publicly shrugged off the controversy, posting emojis in response to user comments, he later stated that those creating and distributing the content would be punished.

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he said.

However, a week later, Musk said he was not aware of any “naked underage images” generated by Grok, leading California’s Governor and Attorney-General to demand answers from xAI.

“We’re demanding immediate answers from xAI on their plan to stop the creation and spread of this content,” California Attorney-General Rob Bonta said on X.

Daniel Croft


Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.