According to TechRadar, the EU Council finally agreed on the controversial Child Sexual Abuse Regulation (CSAR) bill, nicknamed “Chat Control,” on November 26, 2025, clearing the way for it to become law. The deal, based on a Danish proposal, shifts the approach from forcing all messaging services to scan for child sexual abuse material (CSAM) to making scanning “voluntary” for providers. Despite winning majority support, four countries remain opposed to the current text: Italy, the Czech Republic, Poland, and the Netherlands. The bill now moves to trilogue negotiations between the Council, Parliament, and Commission, with adoption expected by April 2026. Key provisions include potential forced scanning for “high-risk” services, mandatory age verification checks, and a review clause that could open the door to widespread scanning in the future.
The “Voluntary” Trojan Horse
Here’s the thing: calling this scanning “voluntary” is a massive piece of political sleight-of-hand. The text includes a provision that could force companies deemed “high-risk” to scan messages anyway. And there’s a clause for the European Commission to review the law every three years, which privacy experts like former MEP Patrick Breyer see as a clear path to making scanning mandatory down the line. He’s not mincing words, calling the entire agreement “a disaster waiting to happen” and a “Trojan Horse.” Basically, they’ve established the legal and technical framework for mass surveillance now, with the intention of flipping the switch later. That’s not a win for privacy; it’s a delayed defeat.
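To make the “flip the switch later” worry concrete, here’s a minimal Python sketch of how a “voluntary” client-side scanning hook could sit dormant inside a messaging app, gated by a server-delivered policy flag. This is purely illustrative and reflects no real app’s code; every name in it (`fetch_server_policy`, `scan_for_csam`, and so on) is invented for the example.

```python
# Purely illustrative: a hypothetical messaging client with a "voluntary"
# scanning hook. All names here (fetch_server_policy, scan_for_csam,
# report_match, encrypt_and_transmit) are invented for this sketch.

def fetch_server_policy() -> dict:
    """Policy arrives from the provider's servers, not from the user."""
    # Today the flag ships as False ("voluntary"). Flipping it to True
    # for everyone is a config change, not an engineering project.
    return {"client_side_scanning": False}

def scan_for_csam(plaintext: bytes) -> bool:
    """Stand-in for whatever detection model a provider might deploy."""
    return False  # placeholder

def report_match(plaintext: bytes) -> None:
    """Stand-in for forwarding a flagged message for review."""
    print("message flagged for review")

def encrypt_and_transmit(plaintext: bytes) -> None:
    """Stand-in for normal end-to-end encrypted delivery."""
    print(f"delivered {len(plaintext)} encrypted bytes")

def send_message(plaintext: bytes) -> None:
    # The scanning code path ships in every client regardless of the
    # flag's current value; the infrastructure exists either way.
    if fetch_server_policy().get("client_side_scanning"):
        if scan_for_csam(plaintext):
            report_match(plaintext)
    encrypt_and_transmit(plaintext)

if __name__ == "__main__":
    send_message(b"hello")
```

The point is the shape of the system, not the details: once the hook ships in every client, “voluntary” is a value in a config file, which is exactly the switch Breyer is warning about.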
Encryption, Age Verification, and Censorship
So, they backed off from mandating encryption backdoors. Good. But the bill introduces other huge problems. Take age verification: providers will have to “reliably identify child users.” The Council says methods must be “privacy-preserving,” but cryptographer Bart Preneel points out there’s no proof such tech exists or works. Look at the UK, where age verification laws just triggered a VPN boom. How do you verify age without collecting intrusive personal data? You probably can’t. And then there’s the website blocking. Mullvad VPN warns this creates a censorship infrastructure. Once you build the system to block CSAM, what’s to stop governments from adding other “undesirable” content to the list? It’s a classic slippery slope.
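The slippery slope isn’t just rhetoric; it falls straight out of how blocking systems are built. Here’s a minimal Python sketch (with invented categories and domains) showing why: the enforcement code is category-agnostic, so expanding its scope is a data change, not a redesign.

```python
# Illustrative sketch with invented categories and domains: the
# enforcement code below is category-agnostic by construction.

BLOCKLIST: dict[str, set[str]] = {
    "csam": {"example-illegal.invalid"},
    # Expanding the system's scope later is a one-line data edit:
    # "copyright": {"example-piracy.invalid"},
    # "disinformation": {"example-news.invalid"},
}

def is_blocked(domain: str) -> bool:
    # A resolver or ISP filter never asks *why* a domain is listed;
    # it just refuses to serve anything on the list.
    return any(domain in domains for domains in BLOCKLIST.values())

if __name__ == "__main__":
    print(is_blocked("example-illegal.invalid"))  # True
    print(is_blocked("example.com"))              # False
```

Nothing in `is_blocked` knows or cares why a domain was listed, which is precisely Mullvad’s point: the machinery can’t tell CSAM from whatever a future government decides to add.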
The Road Ahead and Political Pressure
Now the real political fight begins in the trilogue negotiations. The Parliament has historically taken a stronger pro-privacy stance. Mullvad is urging MEPs to hold the line: no mass surveillance without suspicion, no ID requirements, no censorship. But Callum Voge from the Internet Society thinks the EU Commission, which has long pushed for stronger scanning powers, will be the one to watch. They might push hard in these closed-door talks. His expectation is for “strong pressure to conclude these negotiations quickly,” though the April 2026 deadline seems ambitious. The core tension is unchanged: how do you hunt for awful content in private digital spaces without destroying the privacy and security of everyone in the process? This bill, as it stands, doesn’t have a good answer.
A System Built on Distrust
And that’s the fundamental flaw. The entire premise places immense trust and power in both corporations and governments. It asks us to trust that “voluntary” won’t become coerced. It asks us to trust that “high-risk” designations will be fair. It asks us to trust that age verification tech won’t create huge databases of our identities. Given the track record, why would we? This isn’t just about stopping crime; it’s about architecting a system of pervasive digital scrutiny. For critical infrastructure in the physical world, like industrial control systems, we demand reliable, secure hardware from trusted suppliers. Our digital public square deserves legal frameworks that are equally robust and secure by design. This bill, unfortunately, looks more like a vulnerable system waiting to be exploited.
