As the EU moves forward with plans for age verification, lawmakers are split over its role in online child safety. Supporters see it as a key protective tool; critics argue it could let platforms avoid improving unsafe design and user protections.
After Brussels announced that its European age verification app was technically ready to launch, several questions emerged behind the apparent political consensus on the need to protect minors online.
According to the Commission, one in six young people has experienced cyberbullying. EU officials have also pointed to minors’ exposure to pornography, gambling, harmful content, grooming and recommender systems designed to pull users into endless scrolling.
A hearing held by the European Parliament’s LIBE Committee on Wednesday on “Age verification, assurance and estimation techniques for the protection of minors online” debated what type of access control Europe should prioritise.
Beyond the EU app itself, Meta announced new AI age assurance measures just one day before the hearing, including expanded use of technology to place suspected teens into Teen Account protections on Instagram in the EU.
The company said its AI systems analyse contextual clues across profiles, such as birthday celebrations, school grade, posts, comments, bios and captions. It also scans photos and videos for age-related cues such as height or bone structure.
Age is an excuse to maintain platform design
For the European Commission, age verification is a missing piece in the EU’s child protection framework. Yvo Volman, head of the Directorate-General for Communications Networks, Content and Technology (DG CONNECT), said the Commission is pushing for a quick rollout.
According to him, several front-runners, including France and Denmark, are expected to move before the summer. Moreover, Brussels wants all member states to make the app available before the end of the year.
However, digital rights groups and several MEPs pushed back against the idea that age verification should become the centrepiece of online child protection. Simeon de Brouwer from European Digital Rights argued that the Digital Services Act (DSA) is already supposed to address systemic risks.
In his view, if those risks are properly mitigated, there is less justification for excluding young people from online spaces; if age verification becomes the main safeguard instead, platforms may have fewer incentives to make their design safer for everyone.
This view was echoed by MEPs who questioned whether the Commission is moving faster on age verification than on enforcing the DSA. Markéta Gregorová (Greens-EFA/CZE) asked the Commission why it was prioritising a new app rather than concluding DSA investigations.
She also pointed to concerns that age verification technologies, especially those involving facial estimation, can be faulty, discriminatory and easy to bypass through AI filters. “We know from the case of Australia that seven out of 10 minors are on social media anyway,” she added. “Why are we just using new ways of giving data to big tech?”
MEP Konstantinos Arvanitis (The Left/GRC) likewise warned against building “a fortified door” while leaving dangerous online environments unchanged. He questioned how policymakers could ensure that age verification would not create a false sense of security, allowing platforms to disinvest in the protection of young users.
Hacked, bypassed and not risk-free
Part of the hearing focused on technical vulnerabilities and privacy limitations. Hours after the app was unveiled, it was hacked. Commission officials insisted the software was only a development blueprint, not a finished product.
But Professor Kai Rannenberg from Goethe University Frankfurt pointed to other unresolved vulnerabilities, including the use of fake certificates, manipulated biometrics, or borrowed devices.
He also raised questions about the attestation requirements. The reference design assumes users receive around 30 attestations, but Rannenberg questioned how far that would stretch if each website or user journey required a separate proof. If attestations are reused, platforms could potentially link activity across sites; if they are tied to user accounts, privacy may be weakened.
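A minimal sketch of the linkability risk Rannenberg raised, using made-up token values rather than anything from the EU app’s actual design: if two platforms log the same reusable attestation token, they can join their records on it.

```python
# Hypothetical illustration of attestation linkability; token values are invented.
# Each site logs which attestation token was presented alongside user activity.
site_a_log = [("token-7f3a", "watched_video"), ("token-91bc", "posted")]
site_b_log = [("token-7f3a", "opened_account")]

# Cross-site correlation: any token seen on both sites links the activity.
shared = {token for token, _ in site_a_log} & {token for token, _ in site_b_log}
print(shared)  # {'token-7f3a'} -- the same person, trackable across sites

# With single-use attestations, each presentation would carry a fresh token,
# the intersection above would be empty, and the logs could not be joined.
```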
He also warned that if age checks fail to work, policymakers may simply demand more personal data. “If it doesn’t work, people will say: we want more data. We want better assurances. Then they collect more data, then it will be upgraded. Then people try to find other ways around it. In the end, we’re collecting more and more data because data collection is easy and affordable,” he stated.
De Brouwer described a trade-off: the more privacy-preserving an age verification system is, the easier it may be to bypass. At the same time, the more robust it becomes, the more it risks turning into surveillance.
Giulia Torchio from the 5Rights Foundation also acknowledged the enforcement challenge. “Kids are smart,” she told lawmakers, warning that they will continue trying to bypass restrictions.
Privacy-preserving, or privacy-eroding?
The strongest defence of age verification came from data protection authorities. Vincent Toubiana, head of CNIL’s Digital Innovation Lab, explained that the French authority has worked on age verification since 2021. CNIL examined existing methods, including ID checks, payment cards and facial age estimation, and concluded that no single solution offered all necessary guarantees for reliability, accessibility and data protection.
Rather than relying on one method, CNIL’s preferred model is “double blind”: the age verification provider does not know which website the user is accessing, and the website does not learn the user’s identity. Spain’s data protection authority took a similar line, with its representative arguing that platforms do not need to identify users, only to know whether they meet a minimum age.
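To make the idea concrete, here is a minimal sketch of a double-blind flow under simplifying assumptions: a shared-key MAC stands in for what would in practice be a public-key signature or anonymous credential, and all names are hypothetical. The issuer signs nothing but the age claim, and the website verifies that claim without ever seeing an identity.

```python
import hashlib
import hmac
import json
import os
import time

# Hypothetical sketch of a "double blind" age check, not the real protocol.
# The attestation provider vouches only for "over_18"; the relying website
# checks the token's integrity but never learns who the user is, and the
# provider never learns which website the token is shown to.

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key

def issue_attestation() -> dict:
    """Issuer side: attest the age claim only -- no name, no account ID."""
    claim = {"over_18": True, "issued_at": int(time.time()),
             "nonce": os.urandom(8).hex()}  # fresh nonce per presentation
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(token: dict) -> bool:
    """Website side: check the tag; the token carries no identity to learn."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"])

token = issue_attestation()       # user fetches this from the provider
print(verify_attestation(token))  # website accepts: True
```

Note the design choice the sketch glosses over: with a shared key, the verifier could forge tokens, which is why real deployments would rely on asymmetric signatures or zero-knowledge credentials to keep the two sides genuinely blind to each other.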
Meta wants AI age assurance at app-store level
Meta argues that legislation should require app stores and operating systems to verify age, rather than forcing every individual platform to build its own system. The company says this would be more consistent, centralised and privacy-preserving, and claims the approach is supported by 88 per cent of US parents.
That position is likely to appeal to lawmakers who want fewer fragmented verification systems. However, if age assurance is centralised at app-store or operating-system level, enormous power could shift to Apple, Google and other providers. In parallel, platforms may argue that they have fulfilled their responsibilities once users are placed into age-appropriate spaces.