A defining theme of the second Trump administration is the regular plumbing of new depths of moral depravity. You think they can’t surprise you any longer, and then Trump—or one of his billionaire allies—digs a little further. The latest story comes courtesy of Elon Musk, and it is a doozy. His AI chatbot Grok has apparently been trained to digitally undress people (or at least, the safety limits that prevented it have been removed), and a huge population of perverts and shut-ins who hang around on Twitter/X is taking full advantage.
Though Grok won’t render people fully naked, it will put them in highly revealing clothing. As The Guardian reports, thousands of deepfakes have been produced, some of them of children. Some of the victims have been extremely prominent, including two members of the British cabinet. In a particularly disgusting example, Bellingcat’s Eliot Higgins showed one user prompting Grok to produce steadily more degrading deepfakes of Ebba Busch, Sweden’s deputy prime minister.
Ashley St. Clair, the mother of one of Musk’s many children, reported that Grok users were producing intimate deepfakes of her, some of them based on an image of her as a child. Grok can also produce such images privately—meaning it’s a safe bet that the scope of what is happening is much, much worse than what can be viewed publicly. Musk’s personal reaction to all this was his trademark cry-laughing emoji.
Last weekend, X promised that it would suspend people who prompt Grok to produce such images, but The Guardian found them still being created on Monday, and The Washington Post reported the same thing Tuesday: “Some of the images appear to portray children, The Washington Post found. Despite callouts from high-profile victims including St. Clair, X hasn’t stopped Grok from generating explicit images.”
With considerable reluctance, I created a fresh X account—whose “For You” feed was immediately stuffed with right-wing agitprop—and confirmed the availability of these images myself. xAI did not respond to an emailed request for comment.
There are some relevant laws on the books.
It escaped my notice at the time, but it turns out that a deepfake nude and revenge porn ban, called the TAKE IT DOWN Act and sponsored by none other than Sen. Ted Cruz (R-TX), was signed into law by Trump last May. (It was reportedly a pet project of Melania Trump’s.) The law appears to straightforwardly ban what the Twitter perverts are doing: anyone who publishes a “digital forgery” (meaning an intimate image of someone created by a computer) that is intended to cause harm, or does cause harm, can be punished with a large fine and up to two years in prison. Someone who does that to a minor can get up to three years.
The law also lays out a set of requirements that X does not seem to be following, though platforms have until May of this year to comply. By that deadline, platforms are supposed to implement a simple and obvious system for reporting deepfake images, and to remove any flagged image within 48 hours. Platforms that don’t comply will be subject to Federal Trade Commission sanctions.
But that’s just the start. What Grok is enabling almost certainly violates many more laws than this, and in some cases the platform itself can be held responsible.
For one thing, social media platforms are famously shielded by Section 230 of the Communications Decency Act from liability for most content their users post, but that protection does not extend to content that violates federal criminal law, including an explicit carve-out for child sexual exploitation. The relevant statutes are extremely strict, making it a federal crime to produce, distribute, receive, or possess child pornography, punishable by up to 20 years in prison. And as the Department of Justice itself explains, that includes “computer generated images indistinguishable from an actual minor, and images created, adapted, or modified, but appear to depict an identifiable, actual minor.”
Moreover, Section 230 and the TAKE IT DOWN Act apply to content posted by a platform’s users, but Grok is actually creating this material itself, in response to prompts supplied by users. It is not hard to make a case that Grok is the publisher, not just a host. As Sen. Ron Wyden (D-OR) posted on Bluesky, “I wrote Section 230 to protect user speech, not a company’s own speech. I’ve long said AI chatbot outputs are not protected by 230 and that it is not a close call.”
Similar laws are on the books in all 50 states and across most of the European Union. Basically across the board, it is illegal to produce or distribute nonconsensual intimate images, with more severe punishment when the victims are children. Producing and disseminating child porn is one of the few remaining behaviors that receive nearly universal condemnation. And Grok is doing this on an unlimited scale, with more images entering circulation with every prompt. Simply opening the X app on your phone, and passively downloading a deepfake that loads into your feed, could be a major crime in most countries.
An ordinary person who facilitated the mass creation and distribution of pornographic images without consent would already be facing approximately 500 quadrillion years in prison. Yet as usual, Western democracies have not attempted to punish Musk or Twitter/X in any way. Obviously Trump, who recently had dinner with Musk as their bromance resumes, is not even pretending to enforce the law he signed just a few months ago, but European countries are not throwing the book at Musk either. Britain’s communications regulator Ofcom has made “urgent contact” with X and is reportedly considering a regulatory response, while the European Commission is “very seriously looking” at the situation. But that’s about it.
Musk, like Trump, has been pushing the legal envelope for his entire career and has gotten away with it every time. He won’t stop until someone makes him stop—which will require more than a desultory fine. There may be enough illegal activity here to permanently ban Twitter/X from operating in a particular jurisdiction. European nations with a modicum of self-respect could at least ask the question.

