Elon Musk’s social media platform X announced Wednesday that it would make changes to prevent its AI tool Grok from creating sexualized images of people without their consent, including what critics say is effectively child sexual abuse material.

The announcement, made by the platform’s Safety account, comes amid a growing scandal after thousands of the platform’s users began generating sexual images of women and children using Grok. Authorities in the United States, Europe and East Asia had expressed revulsion at the activity and threatened enforcement action.

X said it continued “to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,” despite Musk repeatedly mocking complaints about the Grok images.

Confusingly, alongside the announcement that creating “images of real people in revealing clothing such as bikinis” would be prevented, X also said it was now geoblocking “the ability of all users to generate images of real people in bikinis, underwear, and similar attire” in jurisdictions where that is illegal. It is not clear which capability is being prevented outright and which is simply being geoblocked. However, the move marks a rare policy change by X in response to external criticism. Musk had previously accused the British government of attempting to find “any excuse for censorship” and has made similar complaints about officials from the European Union.

The creation of sexual images of non-consenting people in response to user requests had slowed but not stopped after the company restricted the ability to ask Grok to produce them to premium subscribers.

On Tuesday, the U.S. Senate passed the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act), which would allow people to sue if their likeness was used to generate nonconsensual, sexually explicit images.
Hours before X’s announcement, California Attorney General Rob Bonta said his office was investigating the issue, stating: “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking.” Bonta described the material as depicting “women and children in nude and sexually explicit situations” and said it “has been used to harass people across the internet.”

Earlier this week, Britain’s communications regulator Ofcom said it too had opened a formal investigation into X for potentially publishing child sexual abuse material. At the time, Ofcom stated: “There are ways platforms can protect people in the UK without stopping their users elsewhere in the world from continuing to see that content.” On Thursday, the regulator called X’s policy change a “welcome development” but stressed that its investigation remained ongoing.
Alexander Martin
is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.