As the EU begins debating the new digital omnibus proposals, a European Parliament study has issued a stark warning. According to it, the bloc’s digital framework is now so dense and overlapping that even the experts tasked with interpreting it admit they can barely keep up.
The Parliament’s Committee on Industry, Research and Energy (ITRE) convened MEPs and invited experts to discuss the recently published study on the interplay between the AI Act and the EU Digital Legislative Framework.
The document delivers a blunt assessment of Europe’s digital laws. From the GDPR and the Data Act to the Digital Services Act (DSA), the Digital Markets Act (DMA), the NIS2 directive on cybersecurity and the Cyber Resilience Act (CRA), it finds that each regulation functions reasonably in isolation. But together, they form what the authors call an ‘extremely difficult to navigate’ legal landscape that risks slowing innovation, creating compliance choke points, and leaving smaller companies stranded in a regulatory labyrinth. Even experts, they argue, struggle to map obligations that intersect, duplicate and sometimes contradict one another.
A regulatory ecosystem collapsing
The report goes further than diagnosing problems. It maps how the AI Act collides with the broader digital rulebook and concludes that Europe’s legislative stack has three structural flaws: it is burdensome to the point of exclusion, fragmented across national authorities with diverging priorities, and lacking a coherent logic connecting one law to the next.
The authors argue that the EU may eventually need a unifying set of digital principles to anchor future legislation. In other words, a quasi-constitutional framework for the digital single market rather than continuing to legislate sector by sector, problem by problem. Such a model, they suggest, could prevent the same tensions from deepening as new technologies emerge.
In the near term, they urge coordination across supervisory bodies, alignment of assessments, and greater clarity for companies. In the medium term, they hint that the AI Act itself may need light legislative adjustments. And in the long term, they call for a fundamental rethink of the EU’s digital architecture, one that would avoid the need for separate, siloed acts such as the AI Act or the Data Act. Instead, they recommend building a horizontal digital rulebook capable of handling AI, data governance, platform power and cybersecurity in a single frame.
The AI Act meets Europe’s regulatory gravity
When the AI Act was adopted in 2024, Brussels hailed it as a triumph: the bloc was the first to regulate the technology comprehensively. But the law didn’t emerge into a blank space. It landed in a legal ecosystem already dense with obligations: the GDPR, the DSA, the DMA, the Data Act, NIS2 and the CRA.
Presenting the study at the ITRE meeting, Hans Graux, one of the authors of the report, delivered the message as someone who has spent too long navigating regulatory crosswinds. He told MEPs that “each law works perfectly”, but the cumulative effect is another matter entirely. When companies, especially small and medium-sized enterprises (SMEs), attempt to comply with all of them together, “you realise they simply do not fit.” The issue, Mr Graux stressed, is not politics but feasibility. Even specialists “who do nothing else every day” struggle to keep the logic straight.
Every piece of legislation is a beautiful piece of a different puzzle. Each of them works perfectly. But if all of these beautiful pieces of the puzzle are thrown at you collectively, you will notice that they simply do not fit very well. – Hans Graux, study author
His colleague, Maarten Botterman, added a technical dimension. AI systems do not behave like traditional software. They learn, adapt and evolve in ways regulators cannot fully predict. Once personal data is absorbed into a machine-learning model, standard GDPR rights (access, rectification and erasure) become nearly impossible to execute in practice. Transparency logs required under the AI Act may overlap with, or even contradict, obligations in cybersecurity legislation. The outcome is that obligations meant to reduce risks may end up amplifying confusion instead.
Parliament reacts: Consensus and fault lines
For the centre-right, the findings are a pragmatic wake-up call. EPP lawmaker Eva Maydell, who requested the study, described the conclusions as “very sobering”. She said the Parliament must take them seriously, especially as it moves into negotiations on the Digital Omnibus. Her concern is competitiveness. Ambiguity in the AI Act’s definitions risks pulling ordinary software into scope, cybersecurity obligations are stacking, and overlap with the GDPR remains unresolved. The EU, she argued, has created a system “increasingly difficult to navigate” and without simplification, Europe risks falling behind.
The centre-left sees the study differently. For MEP Matthias Ecke (S&D, GER) the danger is not over-regulation but poor coordination. Europe’s strong digital rights, he said, must not be sacrificed in the name of administrative convenience. Competitiveness and privacy “are perfectly compatible”, he insisted, but the EU must learn to align its legislation without diluting protections. Complexity, in other words, is a challenge, not an excuse.
I don’t think that there’s a choice between competitiveness and data protection or competitiveness and privacy. The two are perfectly compatible. – MEP Matthias Ecke (S&D, GER)
On the far right, the tone was sharper. MEP Barbara Bonte (PfE, BEL) saw the study as confirmation that the EU’s approach to AI is fundamentally misguided. A human-rights-based AI framework, she said, may be noble in intent but “punishes innovation” by forcing small firms into duplicative assessments and contradictory requirements. “Elsewhere in the world, in China, for instance, they’re storming ahead”, she warned.
The Digital Omnibus changes the game
Hovering over the entire debate is the Digital Omnibus. The Commission aims to delay high-risk AI obligations until the EU has the necessary standards and guidance ready, a move that could push enforcement into late 2027. It also proposes removing the requirement for companies to register self-assessed high-risk systems, effectively allowing a year or more of unmonitored deployment.
Supporters say this adds predictability and reduces red tape for SMEs. However, critics see it as a dangerous gap in oversight for systems used in hiring decisions, credit scoring, law enforcement or essential services.