To protect children’s privacy online, the EU now wants them to scan their passports. The European Commission has unveiled an age verification app it says is anonymous and ready to use across the bloc. But critics warn it is a flashy fix that lets social media platforms off the hook — and puts children’s data at even greater risk.
“No more excuses,” declared European Commission President Ursula von der Leyen on Wednesday, unveiling an app she says will finally keep children off age-restricted platforms. Seven countries are already on board. Digital rights groups say the Commission has it backwards.
The app, modelled on the Commission-developed COVID certificate, can be installed on any device and verifies age using a passport or ID card. Von der Leyen says it is fully anonymous and open source: no one can track users, and anyone can inspect the code. France, Denmark, Greece, Italy, Spain, Cyprus, and Ireland are already planning to integrate it into their national digital wallets.
Children first
“Children’s rights in the European Union come before commercial interest,” von der Leyen said, vowing to build a harmonised European approach to online child protection. The Special Panel on Child Safety Online meets on Thursday and will deliver recommendations by the summer, potentially including an EU-wide minimum age law.
On Thursday, French President Emmanuel Macron will host a video call with EU leaders to discuss social media bans for minors. Von der Leyen is expected to join.
The app sits alongside broader Commission enforcement under the Digital Services Act. This year, Brussels took action against TikTok over its addictive design. It also opened proceedings against four pornographic platforms for failing to verify users’ ages.
Shifting the blame
Not everyone is convinced. Digital rights groups acknowledge that child safety has become a top political priority, but they warn that deeper contradictions are emerging in how the EU pursues it.
According to EDRi, the Commission’s approach has “placed greater emphasis on safer digital environments, digital literacy and children’s participation. Yet, current policies are increasingly shifting away from this approach, favouring restrictive technological measures while weakening core safeguards.”
Children are especially vulnerable to data-driven harms, including profiling, behavioural targeting, and manipulative platform design. These practices rely on personal data and algorithms built to maximise attention. Data protection and platform regulation are the real safeguards, EDRi argues.
“Policymakers are turning toward flashy technological fixes presented as solutions. Yet, these measures frequently do not address the underlying causes of harm,” wrote Itxaso Domínguez de Olazábal, a policy advisor at EDRi.
Protecting whom?
“The growing focus on age-gating online services, while often framed as a simple way to shield minors from harmful content, frequently and bluntly prevents young people from exercising their rights altogether, or requires them to submit identity documents, biometric data, or other sensitive information to access systems supposedly tailored to them. This creates a paradox: in order to be ‘protected,’ children may be required to expose even more personal data,” she said.
If the rules governing digital systems are weakened, the risks do not disappear, they are simply shifted onto those least able to manage them.
— Itxaso Domínguez de Olazábal, policy advisor, EDRi
Age-gating shifts responsibility away from platforms and onto users, Domínguez de Olazábal argues. “This means that Big Tech is let off the hook for the harm it causes to young people. Instead of regulating the systems that create harm, current debates increasingly focus on controlling users themselves. If the rules governing digital systems are weakened, the risks do not disappear, they are simply shifted onto those least able to manage them.”