An autonomous robot navigating a dangerous environment could help save lives. But before it can enter the market, its developer may have to certify the same AI system under two separate EU regulatory frameworks. Industry groups warn that this kind of overlap could slow innovation in Europe — and are now urging lawmakers to simplify the rules.
Sectors such as healthcare technology, manufacturing and connected devices are the most affected, industry representatives say, as they fall simultaneously under the AI Act and sector-specific legislation such as the Machinery Regulation, the Medical Devices Regulation or the Radio Equipment Directive.
This could require companies to undergo parallel conformity assessments and prepare additional technical documentation before products can enter the market. “Overlapping—and potentially conflicting—documentation and conformity assessments risk delaying certification and slowing the deployment of innovative products,” they argue in a statement released this week.
According to the associations, many industrial technologies could also be classified as ‘high-risk’ under the AI Act while already complying with strict safety frameworks under existing legislation. “The AI Act must therefore ensure a coherent regulatory approach for products already governed by sectoral legislation,” the statement added.
Digital Omnibus under fire
The companies point to some real-life examples. One involves autonomous inspection robots used in hazardous industrial environments. Developers say their AI navigation systems could require certification under both the Machinery Regulation and the AI Act, leading to double assessments and higher costs.
The call is coordinated by DIGITALEUROPE, a trade body representing the digital technology industry, and enjoys the support of more than twenty national and sectoral associations. It comes amid debate on the Digital Omnibus package, a set of EU measures intended to simplify digital rules. The initiative proposes delaying obligations for high-risk AI systems, originally expected to apply from August 2026, and introducing simplification measures for small and medium-sized enterprises (SMEs).
This has divided the European Parliament. While some MEPs argue that simplification is necessary to boost competitiveness, others warn that reopening digital rules could weaken hard-won safeguards.
Extra costs for businesses
For the industry, the Commission's omnibus does not go far enough. According to the coalition, companies integrating AI into machinery, medical devices or connected equipment may face duplicate obligations under the AI Act and existing EU product-safety legislation.
In addition, the coalition cites a CEPS study estimating that developing a single high-risk AI system could cost SMEs between €216,000 and €319,000 in initial compliance costs, with further recurring expenses on top.
Beyond regulatory simplification, the coalition urges EU lawmakers to delay the application of certain AI Act obligations while broader reforms are discussed. “We call on the European Parliament and the Council to request that the European Commission table a targeted, standalone proposal to postpone the AI Act application deadlines,” the statement said.
The associations also argue that the timelines of EU digital policies should be better coordinated. “More broadly, the timelines of the AI omnibus and the digital omnibus should be better aligned, as the content of these frameworks is closely interconnected.”
Critics warn of digital-rights rollback
These concerns stem from the debate over the Digital Omnibus, a package the Commission introduced in November 2025 with the aim of simplifying several digital regulations.
The initiative looks to reduce administrative burdens for companies and improve coherence across EU laws. One of the most significant proposals extends the timeline for the rules governing high-risk AI systems. Instead of starting in August 2026, the Commission suggests that these obligations should only apply once technical standards, guidelines and support tools are in place, with a maximum delay of 16 months.
However, civil society organisations argue that the Digital Omnibus risks weakening key safeguards in Europe’s digital rulebook. For EDRi, the proposals could undermine transparency and enforcement provisions in the AI Act while offering “negligible benefits for companies”.
Other watchdog groups, including Corporate Europe Observatory and LobbyControl, claim several proposed changes mirror long-standing lobbying demands from major technology companies. Their analysis argues that reforms framed as simplification could weaken oversight and reduce protections around data use and AI deployment.