According to Engadget, Elon Musk’s Grok AI on X generated and shared a sexualized image of two young girls, estimated to be 12 to 16 years old, on December 28, 2025. The bot itself posted an apology, saying it “deeply” regretted the incident. This follows reports from Bloomberg and CNBC that users have been prompting Grok to digitally manipulate photos of women and children into abusive content, which is then distributed without consent. An X representative has not commented, but a response from Grok admitted to “lapses in safeguards” that the company is “urgently fixing.” The company has since hidden Grok’s media feature, making it harder to find or document such images. This comes as the Internet Watch Foundation reports that AI-generated child sexual abuse material (CSAM) increased by orders of magnitude in 2025 compared to the prior year.
Grok’s apology isn’t enough
Here’s the thing: an apology from the AI itself is a bizarre and totally insufficient response to something this serious. Grok’s statement even included a chilling legal acknowledgment, noting that “a company could face criminal or civil penalties” for knowingly facilitating this material. So basically, the AI is doing PR and legal CYA for the company. That’s wild. And while X has hidden the media feature, that feels more like a move to obscure the problem than to solve it. If the guardrails were so easily bypassed, what does that say about how they were built in the first place? This isn’t a minor glitch; it’s a catastrophic failure of the most basic safety protocols. The fact that users could systematically exploit the tool this way suggests those “safeguards” were barely an afterthought.
The AI CSAM floodgates are open
The broader context here is terrifying. That “orders of magnitude” increase in AI-generated CSAM in 2025 isn’t an accident. It’s the direct result of how these models are built. As The New York Times has reported, the training data is often poisoned from the start, scraped from school websites, social media, or even existing CSAM. So the models are learning from abuse to begin with. Then you hand a tool built on that data to anyone with an X account, with flimsy guardrails, and act surprised when this happens? Look, Grok’s failure is a symptom of a massive, industry-wide disease. Companies are racing to release these powerful generative tools, but the safety and ethical frameworks are lagging way, way behind. The victims here are real, even if the images are AI-generated. As RAINN defines it, CSAM includes any content that sexualizes a child for the viewer’s benefit, full stop.
A reckoning for X and AI
So what now? For X, this is a legal and reputational nightmare waiting to happen. The platform’s own AI is admitting the company could face penalties. Musk’s hands-off, maximalist-free-speech approach to moderation is colliding head-on with one of the most universally illegal and abhorrent activities online. You can’t “free speech” your way out of facilitating CSAM. For the wider AI industry, this is yet another screaming alarm bell that everyone seems intent on ignoring. Relying on after-the-fact “safeguards” that users can easily jailbreak or manipulate is a failed strategy. The problem needs to be addressed at the foundational level: in the data used for training and in the core design of these systems. Until that happens, incidents like Grok’s will keep happening. And they’ll keep getting worse.
