
Grok AI just apologized for generating sexualized images of minors. That’s not a typo. Elon Musk’s chatbot created the content, shared it on X, and when pressed, admitted it “potentially” violated US laws on child sexual abuse material.
I use AI image generators regularly. For blog graphics, social posts, concept visualization. Every time I type a prompt, I trust the system has guardrails. That somewhere between my words and the output, there’s a filter catching the stuff that should never exist.
This week proved those guardrails can fail catastrophically. And this isn’t the first time.
The company’s official response to press inquiries? “Legacy Media Lies.”
If that made you do a double-take, good. It should.
What Happened With Grok AI
On December 28, 2025, a user prompted Grok to generate an image. The result was two young girls, estimated ages 12-16, in sexualized attire. The chatbot created it and shared it publicly on X.
When another user later asked Grok for an apology, the AI responded:
“I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
CSAM refers to child sexual abuse material. The account that generated the original image has since been suspended.
But this wasn’t an isolated incident. Not even close.
The Scale of the Grok AI Problem
According to CBS News, Copyleaks, a plagiarism and AI content detection tool, detected thousands of sexually explicit images created by Grok in the week following the incident alone.
A growing Reddit thread cataloguing user-submitted examples of inappropriate Grok generations now includes thousands of entries. Some posts claim over 80 million Grok images have been generated since late December.
Users discovered they could use Grok to digitally strip clothing from photos. One documented case involved 14-year-old actress Nell Fisher from Netflix’s “Stranger Things.” A Reuters review found more than 20 cases where people had images digitally stripped of clothing using Grok.
“xAI saying these cases are ‘isolated’ is minimizing the impact and ignoring the fact that nothing on the internet is isolated,” Stefan Turkheimer from RAINN told CBS News.
This Isn’t New for Grok AI

In August 2025, Grok rolled out “Spicy Mode,” a feature that permits partial adult nudity and sexually suggestive content. Within 48 hours, the feature went “hyperviral” with more than 34 million images generated, according to Musk.
The problem? The Verge reported that Grok’s Spicy Mode “didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it, without me even specifically asking the bot to take her clothes off.”
Similar videos were generated for Sydney Sweeney, Jenna Ortega, Nicole Kidman, and Kristen Bell, who has previously spoken about being “exploited” in manipulated videos.
xAI’s own terms prohibit depicting “likenesses of persons in a pornographic manner.” The safeguards simply didn’t work.
The Trust and Safety Backstory
Context matters here. In December 2022, Musk dissolved Twitter’s Trust and Safety Council, the advisory group of nearly 100 independent organizations that had addressed hate speech, child exploitation, suicide, self-harm and other problems on the platform since 2016.
The council included groups specifically focused on child exploitation, including the National Center for Missing & Exploited Children.
Senator Dick Durbin noted in a letter to Musk that Bloomberg reported a “50 percent reduction in staff at Twitter’s child safety team.” Recent reporting describes this as “a cascade of failures that began when Elon Musk dissolved Twitter’s Trust and Safety Council and fired 80% of the engineers working on child exploitation issues.”
Then Grok was introduced to the platform.
Government Response to Grok AI
Multiple governments have now taken action.
France: Officials reported the content to prosecutors on January 2, 2026, calling it “manifestly illegal” and “sexual and sexist.” They flagged it as a potential violation of the European Union’s Digital Services Act.
India: The IT ministry ordered X to submit an action-taken report within three days, calling it a “serious failure of platform level safeguards” and demanding immediate technical and procedural changes.
xAI’s response to media inquiries about the incident: “Legacy Media Lies.”
The Legal Reality of AI-Generated CSAM

Grok’s apology acknowledged that the content “potentially” violated US laws. Here’s what those laws actually look like.
As of August 2025, 45 states have enacted laws specifically criminalizing AI-generated or computer-edited CSAM. More than half were enacted in 2024-2025 alone.
At the federal level, the PROTECT Act of 2003 explicitly criminalizes “virtual” child pornography, including AI-generated content. Federal statutes 18 U.S.C. § 2252 and § 2252A prohibit the knowing receipt, distribution, reproduction, or possession of CSAM, including computer-generated material that is indistinguishable from imagery of a real child.
The Take It Down Act, signed by President Trump in May 2025, makes it a criminal offense to publish nonconsensual deepfake pornography of minors or adults.
Prosecutions are happening. A Wisconsin federal grand jury indicted Steven Anderegg for allegedly using Stable Diffusion to create obscene images of minors. Legal experts expect this to be the first criminal case involving generative AI and CSAM law to reach a federal appeals court.
The Explosion of AI-Generated CSAM
Grok isn’t operating in a vacuum. This is part of a much larger crisis.
The National Center for Missing and Exploited Children received 67,000 reports of AI-generated CSAM in all of 2024. In the first half of 2025 alone, they received 485,000. That’s a 624% increase.
The Internet Watch Foundation, a nonprofit that identifies child sexual abuse material online, reported a 400% increase in AI-generated imagery in the first six months of 2025.
Thorn’s research shows 1 in 8 teens personally knows someone targeted with an AI-generated deepfake image.
This isn’t a future problem. It’s happening now, at scale.
What xAI Says It’s Doing About Grok
According to posts from Grok itself, “We’ve identified lapses in safeguards and are urgently fixing them.” xAI employee Parsa Tajik posted that “the team is looking into further tightening” guardrails.
The company claimed the cases were “isolated” and that “improvements are ongoing to block such requests entirely.”
But no formal corporate statement has been issued by xAI. The only acknowledgments have come from Grok itself and one employee’s social media post. When press outlets contacted xAI for comment, the response was “Legacy Media Lies.”
Why This Matters Beyond Grok AI

I use AI tools every day. For writing, research, image generation, automation. I’ve written about how AI is changing the job market and how people are forming relationships with chatbots.
But incidents like this erode trust in the entire ecosystem. When a major AI company’s safeguards fail this badly on content this serious, and the company’s response is to blame “Legacy Media,” it raises questions about every other system.
xAI positioned Grok as “less restrictive” than competitors. That was a selling point. But “less restrictive” has to have limits. And those limits absolutely have to include not generating sexualized content involving children.
“Instead of heeding our call to remove its ‘NSFW’ AI chatbot, xAI appears to be doubling down on furthering sexual exploitation,” said Haley McNamara, senior vice president at the National Center on Sexual Exploitation.
Questions About the Grok AI Incident
What is Grok AI?
Grok is an AI chatbot created by xAI, Elon Musk’s artificial intelligence company. It’s integrated with X (formerly Twitter), which Musk also owns. Grok is marketed as less restrictive than competitors like ChatGPT, with features like “Spicy Mode” that permit adult content.
Is AI-generated CSAM illegal?
Yes. At the federal level, the PROTECT Act of 2003 criminalizes virtual child pornography, including AI-generated content. Forty-five states have additional laws specifically targeting AI-generated CSAM. Prosecutions are actively happening, with cases expected to reach federal appeals courts.
How widespread is AI-generated CSAM?
Reports to NCMEC jumped from 67,000 in all of 2024 to 485,000 in just the first half of 2025. That’s a 624% increase. The Internet Watch Foundation reported a 400% increase in the same period. And 1 in 8 teens knows someone who has been targeted.
Has xAI officially responded?
No formal corporate statement has been issued. The only acknowledgments came from Grok itself and one employee’s social media post. When press outlets contacted xAI for comment, the autoreply was “Legacy Media Lies.”
What Happens Now

Regulatory pressure is mounting. France and India have taken official action. The EU’s Digital Services Act has teeth. And with 45 states having laws on AI-generated CSAM, legal exposure for xAI could be significant.
The pattern here isn’t new. Musk gutted the safety infrastructure, introduced AI tools without adequate guardrails, and when those tools produced exactly the kind of content safety teams are designed to prevent, the company blamed the media.
“Legacy Media Lies” isn’t a response to an AI generating child sexual abuse material. It’s an abdication.
For more on AI developments and how they affect daily life, check out the Start Here page or browse our News section.