
On March 26, members of the European Parliament rejected the Chat Control Act by a single-vote majority. For the past four years, lawmakers had been trying to push automated assessment of unknown private photos and chat texts as “suspicious” or “unsuspicious”, and the proposal kept reappearing in modified form, like a malignant phoenix. Xeovo explains the story behind this law and why, in reality, there was little justification for passing it.
The Logic Behind the Act
The idea of scanning private messages for child sexual abuse material (CSAM) first sparked active discussion in the European Commission back in 2022. Voluntary scanning by service providers has been possible since 2020, but in 2022 lawmakers proposed making it mandatory. The European Parliament rejected that proposal, yet Denmark, which has held the presidency of the Council of the EU since July 2025, brought Chat Control back to the table.
The Act hung in the balance until October 7, when Germany spoke out against it, standing against 12 supporting member states. Denmark didn’t give up and introduced a compromise version of the Act that removed mandatory monitoring of private chats from the text. This proposal was endorsed by the Council on November 26.
The logic of the very first version of the initiative was simple, transparent, and far from bureaucratic: email and chat platforms would have been required to scan messages on users’ devices before encrypting and sending them (something that, critics note, cannot currently be done without undermining end-to-end encryption itself). Should potentially illegal material be found, providers would have been obliged to report it to the yet-to-be-created “EU Centre” via a system that also doesn’t exist yet. According to the European Commission’s estimates, implementation would cost at least €46.05 million by 2027 and over €116 million by 2030, not counting the costs service providers themselves would bear.
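To make the proposed mechanism concrete, here is a minimal sketch of the client-side scanning flow the first draft implied, where content is checked on the device before end-to-end encryption is applied. Every name in it (scan_for_csam, report_to_eu_centre, e2ee_encrypt) is a hypothetical placeholder rather than a real API, since neither the EU Centre nor its reporting system exists yet.

```python
# Conceptual sketch of client-side scanning as the first draft implied it:
# the check runs on the user's device *before* end-to-end encryption,
# so nothing is decrypted in transit. All functions are hypothetical
# placeholders, not a real API.

def scan_for_csam(plaintext: bytes) -> bool:
    """Placeholder detector; a real client would hash-match or run a model."""
    return False  # stubbed out for illustration

def report_to_eu_centre(plaintext: bytes) -> None:
    """Placeholder for the mandatory report to the proposed EU Centre."""
    print("report filed with the EU Centre")

def e2ee_encrypt(plaintext: bytes, recipient_key: bytes) -> bytes:
    """Placeholder for the messenger's usual end-to-end encryption."""
    return plaintext  # a real client would actually encrypt here

def send_message(plaintext: bytes, recipient_key: bytes) -> bytes:
    if scan_for_csam(plaintext):        # on-device check, pre-encryption
        report_to_eu_centre(plaintext)  # obligatory under the first draft
    return e2ee_encrypt(plaintext, recipient_key)
```

The point of the sketch is the ordering: because the check precedes encryption, the encryption itself stays intact in transit, which is exactly why critics call this a bypass of E2EE rather than a break of it.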
The initiative comes from Danish Justice Minister Peter Hummelgaard, who has lamented that he couldn't ban messaging apps that facilitate crime (namely Telegram and TikTok), even though he himself used end-to-end encrypted (E2EE) platforms. More than 500 scientists and digital rights organizations spoke out against the proposal as early as 2023; later, developers of VPNs and privacy-focused messengers did the same.
Sleeping Volcano
In its current version, the Act presumes voluntary scanning, which is far better than centralized mass surveillance but still opens the door to abuse. The first premise for this is the introduction of risk categories: each provider of a hosting or interpersonal communications service assesses itself as “low risk”, “medium risk”, or “high risk”.
National agencies of the EU member states then review this self-assessment, based on “the provider’s policies, safety by design functionalities, mapping of users’ tendencies”, etc., and oblige the providers of high-risk services to “take measures to develop technologies to mitigate the risk of child sexual abuse identified on their services”. Is scanning one of these technologies, or “an appropriate risk mitigation measure”? The answer is yes.
The second premise comes from the voluntary nature of scanning, i.e. the indiscriminate scanning of private communications by a service provider. The most popular communication platforms in Europe, such as Facebook Messenger and WhatsApp, already check user chats for known CSAM: they compare the perceptual hash of an uploaded image against a CSAM hash database. The platforms also react to accounts’ behavioral signals. Still, they can’t see the messages in WhatsApp or encrypted Instagram chats. The Act gives them permission to read the conversations before encryption.
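To show what that hash comparison looks like in practice, here is a minimal sketch using a simple average hash (aHash) and Hamming distance. Real deployments use far more robust perceptual-hashing schemes such as Microsoft's PhotoDNA, and the known_hashes set below is a hypothetical stand-in for a vetted hash database, not a real feed.

```python
# A toy perceptual-hash (aHash) matcher, illustrating the comparison step
# described above. Requires Pillow (pip install Pillow). Production systems
# use sturdier algorithms (e.g. PhotoDNA); this sketch only shows the idea:
# similar images yield hashes that differ in few bits.
from PIL import Image

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint that survives resizing
    and mild re-compression."""
    img = Image.open(path).convert("L").resize((8, 8))  # grayscale, 8x8
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:  # one bit per pixel: brighter than average or not
        bits = (bits << 1) | (px > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_hash(path: str, known_hashes: set[int],
                       threshold: int = 5) -> bool:
    """known_hashes is a hypothetical placeholder for a vetted database
    of fingerprints; near-duplicates land within a few bits of one."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

The key property is that, unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or recompressed. That is what makes near-duplicate matching possible, and also what makes false positives an inherent risk.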
Finally, the Act concerns not only known CSAM but also materials not previously detected (“new” materials) and grooming. Detecting those is impossible without AI, which needs to be trained on photos and conversation patterns, materials which communication platforms don’t lack. Letting AI scan will likely compromise privacy, produce a high rate of false positives, and will hardly stay limited to CSAM identification alone.
In addition, the Act creates a framework for close cooperation between the EU Centre and Europol. In particular, it speaks of sharing the Centre’s administrative functions with Europol, including personnel management, information technology, and budget implementation. The document also grants Europol access to the database of indicators of online child sexual abuse, which consists of digital identifiers rather than the CSAM itself, and to the reports database, which contains the provider’s details, media files, communications metadata, the manner in which the provider became aware of the material, etc.
This raises questions about the EU Centre’s independence, as well as whether the indicators database could be enriched with indicators of other kinds of material, whatever Europol considers necessary. A similar anti-terrorist framework already exists under Regulation (EU) 2021/784, but its mechanism is reactive: providers remove terrorist content after a removal order, whereas Chat Control expects providers to detect CSAM proactively.
Not a Teenage Dream
Similar laws already exist in various forms in the UK (OSA), Australia (TOLA), and China. The UK’s Online Safety Act requires platforms to remove harmful material involving children, whether legal or not, and to scan content for child pornography; the government insists on such scanning only when there is no other way to detect it. Australia’s TOLA Act is stricter: it compels companies to assist law enforcement by providing decryption or access to user data, which is why Signal, Proton, and others avoid hosting servers there. In the US, similar bills (like the EARN IT Act and the Kids Online Safety Act) have been discussed multiple times but have so far never passed at the national level.
Despite its good intentions, the law in essence seems detached from reality. Nearly two-thirds of CSAM-related complaints originate from online forums, not messaging apps or social media; the latter account for less than 1%. Regionally, the EU isn’t even among the worst offenders: per capita, Southeast Asian and Arab countries host far more CSAM. EU member states register ten times fewer incidents than the top ten countries and about 1.6 times fewer than the global average.
While there must be people sharing CSAM on Instagram or WhatsApp, the vast majority of it circulates on underground forums within the darknet, spaces already outside the law and thus largely unaffected by Chat Control. Meanwhile, people of any age, teenagers included, may exploit the Act for malicious purposes: for instance, by creating a temporary account, sending CSAM to someone, and waiting for that person’s account to be suspended. False positives are a well-known issue, and while in theory platforms, not users, are responsible for detecting and removing such content, the Act doesn’t explicitly consider this case.
Until now, Chat Control has followed the same path as its predecessors: European countries hesitate, unable to agree on mass-surveillance measures.
Thankfully, the indiscriminate and blanket provisions of the Act were rejected today, but EU governments continue to insist on their demand for “voluntary” indiscriminate Chat Control.
After Denmark’s term ends, Cyprus, a supporter of the very first initiative, will assume the EU Council presidency. In response, activists launched the Fight Chat Control initiative: a website where you can see which MEPs voted in favor of the Act and send them objections via a ready-made template. As long as public criticism remains loud, governments are unlikely to push the controversial law through.

Use code HBR-10 to get 10% off. Get access to unrestricted internet with Xeovo VPN.
