Elon Musk is in hot water internationally. In December, X—formerly Twitter—updated Grok, its generative artificial intelligence (AI) model, with a new prompt-based image generator. The new version of Grok appended a prompt feature to every image on the website, allowing users to freely manipulate and distort other people’s images without the original poster’s consent. There was no opt-out.
The havoc that followed was entirely predictable. Users immediately deluged X with worst-case scenarios: user-generated child sexual abuse material (CSAM) as well as nonconsensual deepfakes of celebrities and noncelebrities alike, with a rash of fascist imagery, including women being nonconsensually deepfaked into swastika bikinis. Sexually explicit Grok-generated images of minors began to proliferate on the dark web. Viral prompts from users included “put a swastika bikini on her,” “enhance her breasts,” “add blood,” and “replace face with Adolf.”
After the fatal shooting of a U.S. civilian by Immigration and Customs Enforcement officer Jonathan Ross, Grok somehow got worse, with users trying to identify the shooter from entirely made-up images of a person’s face and others generating sexualized images of the victim, Renee Good. This included manipulating sensitive images of her body at the crime scene. Grok’s own website, which is separate from X, seems to have even fewer restrictions and has reportedly been serving extremely graphic and violent content to its users.
Normally, this would bring dire consequences in the United States, where hosting CSAM in particular means deep legal trouble. But Musk seems convinced, probably rightly, that his closeness to the Trump administration will shield him from the law.
But the rest of the world is another matter. Things have gotten so dire on X that authorities around the globe are rushing to enforce limits on the platform, whereas U.S. legislators seem unable or unwilling to act. Authorities and civic leaders in Australia, Canada, Ireland, and Britain have called on governing bodies and lawmakers to leave X altogether. X is losing the last remnants of the once-vital public square that was Twitter as it transforms into little more than a haven for porn, misinformation, and shitposting.
Under U.S. law, social media platforms and other websites have 48 hours after notification to remove sexually explicit imagery, including AI-generated imagery, that was created or published without the subject’s consent. This content has long been referred to as “revenge porn,” though the advent of deepfakes—and now the ease of Grok’s prompt tool—has greatly broadened the scope and scale of the phenomenon. The media has quickly termed this new phase “digital undressing.”
Yet despite the law, Musk’s platform has been slow to respond, even to a request from one of his ex-partners, who asked that X remove digitally undressed images of her—both as an adult and based on images taken when she was a minor. Musk and X’s safety team have reiterated that they are working to remove all illegal imagery on the site, and they have urged users to report any CSAM to federal authorities.
But CNN also reports that Musk chafes against the idea of censoring images, and that prior to the release of the image-prompt tool, he dramatically downsized his trust and safety team, letting go numerous staff members who had expressed concerns about exactly this scenario.
Indeed, instead of shutting down the tool until a tighter filter can be placed on it to keep it from generating illegal content, Musk has instead engaged in a strange ploy to profit off the tool. On Jan. 9, he claimed to have restricted the tool by placing it behind a partial paywall, limiting it to only verified users.
But this claim barely holds up: While users can no longer directly prompt Grok in replies to other posts, as of publication time, the entire tool was still showing up for all X users, easily accessible by hovering over any image on the website. Free users could also still go to Grok’s own separate website and use the tool there.
In other words, by claiming to have locked the tool when he hasn’t, Musk seems to be creating an excuse to push people to sign up to become verified—a paid privilege—thereby profiting from the content that users are generating rather than actually meaningfully restricting it.
While he may be eager to profit from the content, Musk’s own statements about the flood of illegal images make it clear that he doesn’t think Grok should be held accountable for it. His stance represents a major wrinkle for conservative U.S. lawmakers who have long sought to weaken and even entirely overturn Section 230 of the 1996 Communications Decency Act, the crucial safeguard of internet freedom in the United States that ensures that internet platforms can’t be held responsible for what their users post.
Section 230 has never applied to illegal content posted online, but even so, over the past decade, lawmakers have battered Section 230 repeatedly with a litany of laws, all ostensibly designed to fight CSAM and protect children from sexual predators—while in practice disproportionately affecting LGBTQ people, sex workers, and sex educators.
Musk, however, is clearly using Section 230 as a shield, despite his platform’s clear violation of numerous laws against online CSAM and nonconsensual imagery. So far, U.S. courts have leaned toward holding platforms, not users, responsible for illegal content that their built-in platform tools, such as Grok, enable users to create.
The U.S. Justice Department, meanwhile, assured CNN that it would “aggressively prosecute any producer or possessor of CSAM.” Still, U.S. law has yet to firmly establish who is legally responsible for AI-generated imagery—the user or the tool-maker—so Musk can, for now, plausibly hide behind the protections that Section 230 offers him and every internet website owner.
Ordinarily, that wouldn’t matter. Republican lawmakers would see prosecuting a CSAM-generating website as an easy win and an urgent priority—the kind of aggressive pursuit that has led to the takedowns of websites such as Backpage and the advent of age-verification gateways in some states.
This time, however, the owner of the offending website is both the richest man in the world and a key ally of U.S. President Donald Trump. That means that cracking down on Grok is unlikely to be a high priority for the Republican lawmakers in power at the moment, regardless of how much CSAM Musk’s fun toy has been generating.
On Friday, three Senate Democrats—including Sen. Ron Wyden, who co-authored Section 230—took the mildest step by calling on Google and Apple to remove X and Grok from their app stores, making the extremely obvious observation that by producing CSAM and nonconsensual imagery, X is in clear violation of the app stores’ respective content policies. So far, Google and Apple, typically swift to remove apps that violate or appear to violate policies, have allowed X to remain in their stores.
At publication time, only a handful of leading Republicans, including Sens. Ted Cruz and Marsha Blackburn, had spoken out against X and Musk. In a Washington meeting last week, Vice President J.D. Vance reportedly told British Deputy Prime Minister David Lammy that he found the Grok image tool “unacceptable” and that generative AI was producing “hyper-pornographied slop,” but showed little sign of wanting to act on the issue.
As the Verge points out, this may mean that the role of actually regulating X could default to individual states that have already strengthened their own internet laws against illegal content, but given X’s size—the platform has about 125 million daily users—that might be difficult.
None of this reticence from U.S. authorities has stopped authorities overseas from weighing in, however, and their outrage couldn’t be clearer. Across the globe, leaders are condemning X and Musk. Dozens of nations and governing bodies, including India, France, Brazil, and the European Commission, have begun investigations into the extent of Grok’s content or threatened to remove X from their countries altogether.
“This is not ‘spicy,’” European Commission spokesperson Thomas Regnier told journalists on Monday. “This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.”
“I think X is very well aware that we are very serious about DSA enforcement,” Regnier added, referring to the EU’s Digital Services Act, which carefully regulates user experiences on websites. He also noted that the European Commission had already recently fined X for a November incident in which Grok generated comments containing Holocaust denial.
In the United Kingdom, politicians’ uproar over Grok has led to intense, sustained backlash against Musk and X and to investigations by multiple government agencies. British Prime Minister Keir Starmer has thrown his support behind Ofcom, the U.K.’s media regulator, to rein in the platform, and Technology Secretary Liz Kendall has suggested that the United Kingdom would block X for violating its laws against deepfakes. In response, Musk accused the U.K. of wanting “to suppress free speech.”
This is far from the first time that global authorities have rushed to restrict U.S. tech companies, typically over privacy issues. Silicon Valley’s biggest companies have spent years battling sanctions and paying out fines over the EU’s General Data Protection Regulation, which restricts platforms’ use of private user data.
This new front seems to be pushing regulators in some countries, such as Germany and Indonesia, to consider prosecuting offenders in criminal courts, not just in a regulatory capacity. On Sunday, Indonesia and Malaysia became the first nations to temporarily ban the Grok tool; it’s unclear whether this includes removing the feature from X itself or whether it applies only to the separate Grok app.
So far, it seems that few other nations have gone further than launching regulatory investigations into the matter, despite all the outrage. Critics have called the U.K.’s response “worryingly slow.” On Saturday, British media reported that the United Kingdom was engaging in talks with Canada and Australia to jointly sanction or ban X, but Evan Solomon, Canada’s minister of AI and digital innovation, subsequently denied that a ban was in the works.
All of this is taking place as the EU prepares to overhaul its regulations supporting internet network expansion and infrastructure, which U.S. tech companies reportedly will be allowed to treat as “best practices” guidance rather than legally binding obligations. This comes after the EU fined X $140 million in December, a move that drew backlash from the Trump administration, including an apparently official threat from the U.S. Trade Representative’s office, issued on X, to retaliate against the EU should it continue to use “discriminatory means” against U.S. tech companies.
Then there’s the fact that, as 404 Media noted, both Musk and previous Twitter owner Jack Dorsey allowed staggering amounts of CSAM to proliferate across the site; under Musk’s ownership and with the advent of generative AI, the problem has exploded.
When multiple media outlets have tried contacting Musk’s AI company, xAI, all have received the same autogenerated response: “Legacy media lies.” Will governments get the same response?