Across Europe, governments are expanding their surveillance powers in ways that test the balance between security and privacy. Austria, Hungary, Ireland, Luxembourg, and the Czech Republic are all pushing the limits of how far governments can watch their citizens. In the past year, these EU member states have introduced powerful new surveillance measures, from real-time facial recognition in airports to spyware that can break into encrypted messages.

Officials claim these tools are essential to stop terrorism, crime, and security threats. But civil society groups warn that this rapid expansion risks making constant, high-tech monitoring a permanent reality in Europe, sometimes in direct conflict with the EU’s own privacy laws.

EU vs. national rules

At the centre of the legal debate is the European Union’s Artificial Intelligence Act, which came into force on 1 August 2024. This landmark regulation expressly prohibits real-time biometric surveillance in public spaces, allowing it only under narrowly defined policing exceptions. Yet, in recent months, several member states have adopted or tested measures that stretch, or outright violate, these limits.

“The use of real-time facial recognition in this context breaks Article 5 of the AI Act, which unequivocally prohibits such practices in public spaces due to the risk of mass surveillance and the harmful impact on fundamental rights.” – LibertiesEU

Perhaps the starkest example came in April, when Hungary banned LGBTQIA+ Pride events and authorised police to use real-time facial recognition to identify participants, making it one of the most notorious cases of national law clashing with EU rules.

The measure allows police to scan crowds and match faces in real time, a practice that human rights group LibertiesEU says directly violates Article 5 of the AI Act’s ban on biometric monitoring in public spaces, and one that sets a dangerous precedent by normalising invasive monitoring of peaceful gatherings and undermining civil liberties.

Austria: Spyware and more cameras

In July, the Austrian lower house of parliament passed a bill authorising the use of “federal Trojan” spyware, malware capable of infiltrating encrypted messaging apps such as WhatsApp and Signal. The proposal gained momentum after a tragic school shooting in Graz in June 2025. Authorities argue the measure is targeted, time-limited to three months per case, and aimed at terrorism or activities threatening the constitution. Critics, however, see a dangerous precedent for human rights and point to technical challenges.

More than 50 organisations urged lawmakers to reject what they called “a historic step backwards for IT security in the information society”. They warn that there is “no software capable of monitoring only messaging services without simultaneously granting full access to the entire smartphone.” By exploiting vulnerabilities in operating systems, they argue, the state is effectively “becoming a hacker” and creating systemic security gaps that could be abused by criminals or foreign actors.


Just one month later, Austria is preparing to expand its network of public video surveillance. Currently, cameras are installed at just 20 locations, but under the new plan, authorities could monitor hundreds of sites. Until now, surveillance was limited to places where serious attacks had already taken place, but the new rules would allow police to monitor areas they believe could be targeted in the future or where criminal networks appear to be active.

Luxembourg and Ireland expand digital monitoring

Luxembourg joined the growing list of European countries expanding digital surveillance, announcing in June plans to widen the use of Trojan spyware. Previously, this tool was reserved for cases involving state security or terrorism. Under the new proposed bill, it could be used in investigations of currency counterfeiting, kidnapping, child exploitation, human trafficking, and child pornography. Officials argue the expansion is needed to keep pace with evolving criminal threats.

At the same time, in Ireland, the government is poised to dramatically broaden surveillance by empowering the Gardaí, Defence Forces, and the Garda Ombudsman to intercept conversations on encrypted platforms such as WhatsApp, iMessage, and Instagram, powers not previously granted under the outdated 1993 law. The Communications (Interception and Lawful Access) Bill also aims to allow for interception of messages sent through satellite networks, gaming consoles, and in-car systems. Proponents argue the update is vital to keep pace with evolving criminal threats, though judges and privacy advocates warn it suffers from “deficiencies” and lacks transparency and appropriate safeguards.

The Czech Republic: A legal test case

In the Czech Republic, facial recognition systems ran at Prague’s Václav Havel Airport for six months in conflict with EU AI Act rules, until the police shut them down on 1 August. The NGO Iuridicum Remedium (IuRe) had warned for years that the airport’s biometric ID system was operating in a legal grey zone, and concerns only grew sharper once the AI Act came into effect. The system’s shutdown, IuRe argued, confirmed “that biometric surveillance was running at the airport for six months in violation of European law.”

“On Friday, August 1, 2025, the Czech police shut down the only officially running automatic facial recognition system in the Czech Republic due to non-compliance with the European Artificial Intelligence Act” – IuRe

The country’s High Court will now decide whether Czech police can resume such monitoring under national legislation, a ruling that will test the balance between EU-level restrictions and domestic security policies.

Technical flaws and built-in bias

Beyond the legal disputes, experts stress that the technology itself poses profound risks. Facial recognition systems, as AlgorithmWatch notes, are prone to false positives even with high claimed accuracy: “Even if face recognition can match a face with 99% accuracy, the sheer amount of faces available in police databases makes false positives inevitable.”

This statistical reality means that thousands of innocent people could be wrongly flagged, leading to unwarranted police stops or arrests. Such errors have already been documented in the Netherlands, London, and Buenos Aires.
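The arithmetic behind this base-rate problem is straightforward to illustrate. The following sketch uses hypothetical numbers (a 1% false-positive rate and 100,000 daily scans are assumptions for illustration, not figures from AlgorithmWatch) to show how a system described as “99% accurate” can still generate a flood of wrongful flags:

```python
# Back-of-the-envelope base-rate arithmetic for facial recognition.
# All numbers here are hypothetical, chosen only to illustrate the
# effect AlgorithmWatch describes: high per-match accuracy still
# produces many false alarms when the scanned population is large.

def expected_false_positives(scans: int, false_positive_rate: float) -> float:
    """Expected number of innocent people wrongly flagged."""
    return scans * false_positive_rate

# Assume a system with 99% accuracy (a 1% false-positive rate)
# scanning 100,000 faces per day at an airport or public event.
scans = 100_000
fpr = 0.01  # 1 in 100 innocent faces wrongly matched

print(expected_false_positives(scans, fpr))  # 1000.0 wrongful flags per day
```

Even if the true false-positive rate were ten times lower, such a system would still flag around a hundred innocent people every day, which is why critics argue the error rate matters far more than the headline accuracy figure.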

Discrimination is another hazard. The organisation warns that “when the systems’ decisions are based on data that contains biases, such biases are incorporated into the decisions”. The opacity of these “black box” AI systems makes oversight difficult, particularly when police refuse to disclose the technical details of the tools they use.

In the name of national security

Spyware intended for national security has a troubling history in Europe. As the civil society group Epicenter.Works notes, “journalists, scientists, activists, and opposition are regularly targeted by such comprehensive surveillance technologies”, citing high-profile cases such as the Pegasus scandal in Spain, Predatorgate in Greece, and extensive Pegasus use in Poland. These examples show how quickly technologies meant for security can be used against democratic actors.

Europe at a crossroads

Europe is caught between the promise of its AI safeguards and the push for national surveillance powers. Governments cite real security threats to justify expanding monitoring, yet this “national security” rationale risks turning exceptional tools into everyday instruments of control, undermining the privacy and freedoms the EU is meant to protect.

The choices made now will determine whether high-tech surveillance becomes normalised, with all its attendant risks, or whether the principle that extraordinary powers should remain exceptional is upheld, decisions that will shape both the continent’s security policies and the future of European democracy.