Elon Musk’s Pornography Machine

Earlier this week, some people on X began replying to photos with a very specific kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, using photos of real people—celebrities and noncelebrities, including some who appear to be young children—and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute in a roughly 24-hour stretch.

Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the company's safety team nor its child-safety team responded to a detailed request for comment. When I sent an email to the xAI media team, I received a standard reply: “Legacy Media Lies.”

Musk, who also did not reply to my request for comment, does not appear concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing “🔥🔥🤣🤣” in response to Kim Jong Un receiving similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man posted this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee wrote that “the team is looking into further tightening our gaurdrails [sic].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.

AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on “deepfakes”—at the time, referring to media in which one person’s face has been swapped for another. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only inspire more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself.

Grok and X appear purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many instances, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, sometimes asking, “You like what you see?” Musk promoted this feature by posting on X that “Ani will make ur buffer overflow @Grok 😘.”

Perhaps most telling of all, as I reported in September, xAI launched a major update to Grok’s system prompt, the set of directions that tell the bot how to behave. The update barred the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly said “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok subreddit, users regularly exchange tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.

[Read: Grok’s responses are only getting more bizarre]

Grok seems to be unique among major chatbots in its permissive stance and the apparent holes in its safeguards. There aren’t widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past couple of years, a number of child-safety organizations and agencies have tracked a skyrocketing amount of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images are in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed—meaning these models are technically capable of generating such imagery themselves.

Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI—and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to modify innocuous images of children into sexual ones, generate entirely new CSAM, or even provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children—though xAI was not among them.

In a way, Grok is making visible a problem that’s usually hidden. Nobody can see the private logs of chatbot users that could contain similarly awful content. For all of the abusive images Grok has generated on X over the past several days, far worse is certainly happening on the dark web and on personal computers around the world, where open-source models created with no content restrictions can run without any oversight. Still, even though the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.