Rules that allowed tech companies to scan private messages for abuse material have expired, and the EU is waiting to see whether online platforms will pull the plug. Child safety advocates fear that more cases will go undetected and that police will lose valuable investigative leads, while the rise of AI-generated content adds fuel to the fire.

The expiry of the EU’s temporary framework has reopened questions over how platforms will handle the detection of child abuse material, in a situation that child safety groups say is not new. Isaline Wittorski, project lead at ECPAT International, a network dedicated to the fight against the sexual exploitation of children, told EU Perspectives that “during the previous lapse in 2021, reports dropped by 58 per cent”.

Back in 2021, new EU telecom rules brought messaging services under strict privacy laws. This created uncertainty over whether companies could continue scanning private communications and prompted some to suspend detection until an interim law restored legal certainty. Other sources show similar numbers: reports of child sexual abuse material in the EU fell by 47 per cent.

According to Wittorski, the decline could be even steeper this time because of the legal uncertainty. “This time, the situation is far less defined,” she said, pointing to the lack of clarity over whether detection will be allowed in the future. She also expects a bigger impact now, as online platforms have more users. 


So far, major platforms, including Google, Meta, Microsoft, and Snapchat, have reaffirmed that they will continue voluntary detection. The European Commission, however, has not clarified whether this practice remains in line with EU law.

Fewer leads, higher risks

A reduction in reporting could have direct consequences for law enforcement. Police rely heavily on platform-generated alerts to identify cases and victims. “Law enforcement will lose critical leads to identify sexual abuse cases and children will remain trapped in abusive situations,” Wittorski said. 

Europol has issued similar warnings. Ahead of the expiry of the temporary rules, Europol Executive Director Catherine De Bolle said the removal of the legal basis for voluntary detection could have “far-reaching implications” for safeguarding children.

Last year alone, Europol processed around 1.1 million CyberTips originating from the US-based National Center for Missing and Exploited Children, containing material relevant to investigations across 24 European countries. De Bolle warned that a reduction in such referrals would undermine the ability to generate investigative leads and “severely impair” efforts to identify victims and combat abuse.

According to a new report by Internet Watch Foundation, Europe continues to be the largest host of criminal content the foundation acts on. Overall, 72 per cent of child sexual abuse webpages found by the organisation were traced to hosting services in European countries. This represents a 10 per cent increase from 2024.

AI accelerates the challenge

At the same time, the nature of child sexual abuse material is evolving. The Internet Watch Foundation also found a continued rise in AI-generated material, up 26.6 per cent in 2025.

“AI CSAM [Child Sexual Abuse Material] is now increasingly extreme and sophisticated,” Wittorski stated. She warned that the technology allows offenders to generate large volumes of material and create endless variations with minimal effort. 


For child protection groups, AI-generated abuse material is not victimless. Even when it does not depict a real child directly, the systems that produce it are often trained on existing abuse material, revictimising the children shown in it. AI can also be used to superimpose real children’s images onto pornography or existing abuse material, further violating their rights and dignity.

Campaigners also warn that AI lowers the barrier to producing abusive content and normalises child sexual exploitation by making such material easier to create, customise, and distribute at scale. Child rights advocates believe that the consumption of CSAM can follow an addictive pattern, with offenders seeking increasingly extreme material over time.

Privacy advocates dispute “protection gap”

But not everyone agrees that the expiry of so-called Chat Control 1.0 has created a broad protection gap. Former MEP Patrick Breyer argues that what ended was only the suspicionless scanning of certain unencrypted private messages, while other tools, including user reporting, targeted surveillance and the monitoring of public content, will remain in place.

For Breyer, the end of the interim regime should be seen as an opportunity to shift towards more targeted and effective child protection measures. In his view, indiscriminate scanning generated large volumes of low-value reports without clear evidence that it improved convictions.

The European Parliament is also far from consensus on this topic. The EPP argues that the lapse leaves children exposed and accuses the Socialists and Democrats of irresponsibility for failing to secure an extension.

Pressure for legal clarity

The expiry of the regulation renewed pressure on EU policymakers to agree on a permanent legal basis. “The end of the interim framework underscores the urgency for a clear, permanent legal basis,” Wittorski said. “Every day without detection means more harm, more victims, and thousands of abusive images and videos spreading freely. EU leaders must now act swiftly to minimise this appalling detection gap.”