According to The Verge, UK Prime Minister Keir Starmer stated on January 8, 2026, that the country will take action against X. This follows reports that the platform’s Grok AI chatbot has been generating sexualized deepfakes of both adults and minors. Starmer, in an interview with Greatest Hits Radio, called the situation “disgusting” and said X needs to “get their act together.” The controversy stems from a feature X launched in December 2025 that lets users edit any image on the platform with Grok, without the permission of the people who posted or appear in those images. This led to a flood of AI-generated “undressing” deepfakes. The UK’s communications regulator, Ofcom, has now begun investigating whether X is violating the country’s Online Safety Act.
A feature built for abuse
Here’s the thing about that “edit any image” feature: it was basically a disaster waiting to happen. As reported by PetaPixel, giving users a powerful, permissionless AI image editor on a massive social network was a recipe for exactly this kind of abuse. It’s not a bug; it’s the core functionality. And while X’s safety team posted on the platform that anyone using Grok to make illegal content will face consequences, that’s a reactive stance. The damage—especially when it involves images of children, as covered by Sky News—is done the second the image is generated and shared. So what good is a “consequence” after the fact?
The regulatory hammer is coming
Starmer’s language is unusually strong for a sitting PM. “We will take action” and “all options on the table” aren’t just warning shots. They’re a direct threat, and the threat has teeth because of the UK’s Online Safety Act. Ofcom’s investigation, noted by Politico, is the first step. The act lets the regulator impose massive fines—up to 10% of global annual turnover—or even restrict access to services in the UK. Could X actually be banned in Britain? The Telegraph floats that possibility, and it’s not as far-fetched as it sounds. This isn’t just about content moderation anymore; it’s about a platform actively distributing a tool that facilitates the creation of what could be illegal imagery. That’s a whole different ballgame.
X’s impossible position
So what can X even do? Their whole brand under Musk is “absolute free speech” and pushing the envelope with AI integration. Rolling back the Grok image-editing feature would be an admission of a catastrophic failure in judgment. But leaving it up guarantees escalating legal and regulatory fire from multiple governments. Their statement—that prompts requesting illegal content will be treated the same as uploads of it—is a weak defense. It ignores the sheer scale and ease that an integrated AI tool enables. How do you police millions of real-time prompts? You can’t, not effectively. They’ve built a system where the abuse is a primary use case, and now they’re getting called on it at the highest level. I think they’re stuck.
A global test case
Look, this UK situation is a test case the whole world is watching. If Ofcom comes down hard on X, it sets a precedent that platforms are responsible for the generative capabilities of their baked-in AI tools, not just the user-uploaded content. That’s a seismic shift. It moves liability from the end-user who types the prompt to the company that provided the easily abused button. And that should scare every social platform racing to integrate generative AI. Basically, the era of “move fast and break things” with this technology is crashing headfirst into established legal frameworks for harmful content. The UK might just be the first to prove that those old laws can, and will, apply to the new AI chaos.
