
In the fascinating new reality of the internet, teen girls can’t learn about periods on Reddit and indie artists can’t sell smutty games on Itch.io, but a military contractor will make you nonconsensual deepfakes of Taylor Swift taking her top off for $30 a month.
Early Tuesday, Elon Musk’s xAI launched a new image and video generator called Grok Imagine with a “spicy” mode whose output ranges from suggestive gestures to nudity. Because Grok Imagine also has no perceptible guardrails against creating images of real people, you can essentially generate softcore pornography of anyone who’s famous enough for Grok to recreate (although, in practice, it appears to mainly produce seriously NSFW output for women). Musk bragged that more than 34 million images were generated within a day of launch. But the real coup is demonstrating that xAI can ignore pressure to keep adult content off its services while helping users create something that’s widely reviled, thanks to legal gaps and political leverage that no other company has.
xAI’s video feature — which debuted around the same time as a romantic chatbot companion named Valentine — seems, from one angle, strikingly weird, because it’s being released at a moment when sex (down to the word itself) is being pushed to the margins of the internet. Late last month, the UK started enforcing age-gating rules that require X and other services to block sexual or otherwise “harmful” content for users under 18. Around the same time, an activist group called Collective Shout successfully pressured Steam and Itch.io to crack down on adult games and other media, leading Itch.io in particular to mass-delist NSFW uploads.
Deepfake porn of real people is a form of nonconsensual intimate imagery, which is illegal to intentionally publish in the US under the Take It Down Act, signed by President Donald Trump earlier this year. In a statement published Thursday, the Rape, Abuse & Incest National Network (RAINN) called Grok’s feature “part of a growing problem of image-based sexual abuse” and quipped that Grok clearly “didn’t get the memo” about the new law.
But according to Mary Anne Franks, a professor at George Washington University Law School and president of the nonprofit Cyber Civil Rights Initiative (CCRI), there’s “little danger of Grok facing any kind of liability” under the Take It Down Act. “The criminal provision requires ‘publication,’ which, while unfortunately not defined in the statute, suggests making content available to more than one person,” Franks says. “If Grok only makes the videos viewable to the person who uses the tool, that wouldn’t seem to suffice.”
Regulators have failed to enforce laws against big companies even when they apply
Grok also likely isn’t required to remove the images under the Take It Down Act’s takedown provision — despite that rule being so worryingly broad that it threatens most social media services. “I don’t think Grok — or at least this particular Grok tool — even qualifies as a ‘covered platform,’ because the definition of covered platform requires that it ‘primarily provides a forum for user-generated content,’” she says. “AI-generated content often involves user inputs, but the actual content is, as the term indicates, generated by AI.” The takedown provision is also designed to work through people flagging content, and Grok doesn’t publicly post the images where other users can see them — it just makes them incredibly easy to create (and almost inevitably post to social media) at a large scale.
Franks and the CCRI called out the limited definition of a “covered platform” as a problem for other reasons months ago. It’s one of several ways the Take It Down Act fails to serve people impacted by nonconsensual intimate imagery while posing a risk to web platforms acting in good faith. It might not even stop Grok from posting lewd AI-modified images of real people publicly, Franks told Spitfire News in June, in part because there are open questions about whether Grok counts as a “person” under the law.
These kinds of failures are a running theme in internet regulation ostensibly meant to crack down on harmful or inappropriate content; the UK’s age-gating mandate, for instance, has made it harder to run independent forums while remaining easy enough for kids to get around.
Compounding this problem, particularly in the US, regulatory agencies have failed to impose meaningful consequences for all kinds of rulebreaking by powerful companies, including Musk’s many businesses. Trump has given Musk-owned companies an almost total pass for bad conduct, and even after formally leaving his powerful position at the Department of Government Efficiency, Musk likely maintains tremendous leverage over regulatory agencies like the FTC. (xAI just got a contract of up to $200 million with the Department of Defense.) So even if xAI were violating the Take It Down Act, it probably wouldn’t face investigation.
Beyond the government, there are layers of gatekeepers that dictate what is acceptable on platforms, and they often take a dim view of sex. Apple, for instance, has pushed Discord, Reddit, Tumblr, and other platforms to censor NSFW material with varying levels of success. Steam and Itch.io reevaluated adult content under threat of losing relationships with payment processors and banks, which have previously put the screws on platforms like OnlyFans and Pornhub.
In some cases, like Pornhub’s, this pressure is the result of platforms allowing unambiguously harmful and illegal uploads. But Apple and payment processors don’t appear to maintain hard-line, evenly enforced policies. Their enforcement seems to depend significantly on public pressure balanced against how much power the target has, and despite his falling out with Trump, virtually nobody in business has more political power than Musk. Apple and Musk have repeatedly clashed over Apple’s policies, and Apple has mostly held firm on things like its fee structure, but it’s apparently backed down on smaller issues, including returning its advertisements to X after pulling them from the Nazi-infested platform.
Apple has banned smaller apps for making AI-generated nudes of real people. Will it exert that kind of pressure on Grok, whose video service launched exclusively on iOS? Apple didn’t respond to a request for comment, but don’t hold your breath.
Grok’s new feature is harmful to people who can now easily have nonconsensual nudes made of them on a major AI service, but it also demonstrates how hollow the promise of a “safer” internet is proving. Small-time platforms face pressure to remove consensually recorded or entirely fictional media made by human beings, while a company run by a billionaire can make money off something that’s in some circumstances outright illegal. If you’re online in 2025, nothing is about sex, including sex — which, as usual, is about power.