The European Commission’s plan to double the size of Europol has raised concerns among civil society and ignited questions from some MEPs. Critics say such an expansion would strengthen the agency’s ties with private industry and could pose a problem for privacy protection.

In 2025, the EU Agency for Law Enforcement Cooperation (Europol) counts more than 1,700 officials and has an annual budget of €241m. That figure includes internal staff, seconded national experts, and consultants. The planned expansion of the agency “would significantly strengthen the agency’s ties with private industry,” says the NGO Statewatch.

Concerns about the expansion of Europol’s role come also from some MEPs. “With a further, broader reform of Europol’s mandate expected next year, these developments are deeply concerning,” MEP Saskia Bricmont (Greens-EFA/BEL) told EU Perspectives following the vote in the LIBE Committee on strengthening Europol’s mandate.

Suspicious connections to suspicious companies

The close relationship between private industry and Europol, sometimes bordering on legislative impropriety, is nothing new. Peter Thiel, co-founder of the software company Palantir and of PayPal, has recently invested in an American company called Clearview AI. The company built its tools on a database of billions of facial images and other sensitive personal data. Some were extracted from the internet without consent, including content linked to child exploitation. In recent years, data protection authorities in Canada and Europe have fined Clearview AI and banned the use of its technology within their jurisdictions.

In this context, the European Data Protection Supervisor (EDPS), responsible for monitoring Europol’s compliance with EU data protection law, intervened after Clearview AI presented its products at Europol headquarters in 2020. The EDPS expressly recommended that Europol refrain from using Clearview AI’s services, as this would likely violate the Europol Regulation. It also advised the agency not to promote the use of Clearview AI during Europol events, nor to provide Europol data to be processed by third parties using Clearview at such events.


“In the context of artificial intelligence, data controllers have an obligation to limit the collection and further processing of personal data to what is necessary for the purposes of the processing, avoiding indiscriminate processing of personal data,” reads a report recently published by the supervisory body. The report continues: “This obligation covers the entire lifecycle of the system, including testing, acceptance, and release into production phases. Personal data should not be collected and processed indiscriminately.”

Major threat to privacy?

When the European Parliament approved the previous reform in 2022, Ms Bricmont warned that “we are facing yet another reform adopted without any assessment or impact evaluation.” And she went further, criticising the expansion of “Europol’s powers in the processing and exchange of data, including sensitive biometric information, without adequate safeguards, accountability mechanisms, or oversight.”

Despite those warnings, in 2024 Europol launched a platform dedicated to engaging the private sector, called Research and Industry Days. Companies were invited to present a range of surveillance and investigative technologies. These included open-source intelligence (OSINT) tools for digital investigations and dark web monitoring, online patrolling systems, robotic equipment, and artificial intelligence solutions capable of processing and analyzing audio, text, video, and vast datasets.

That same year, the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA) launched its Industry Roundtables, with the aim of promoting similar partnerships. In response to an initial request for information from Statewatch, Europol withheld the names of most of the companies participating in its event and refused to publish their presentations. “It later provided an agenda listing 19 entities, and participants ranged from global security giants such as the French firm Idemia to smaller but established companies from the United States, Israel, and Europe, as well as two research centers,” Statewatch reported.

Under the EU Artificial Intelligence Act, systems such as Clearview AI will be banned starting in 2025, with exemptions possible only on grounds of national security. For Ms Bricmont, the use of AI decision-making tools will affect “the fundamental rights of migrants in situations of high vulnerability, marking yet another step toward the criminalization and mass surveillance of migrants.” In her view, the privacy and data of any European citizen could also be at risk, undermining the principle of democratic security that the EU should guarantee.