EU policymakers have signalled a possible delay in the rollout of key provisions of the bloc’s Artificial Intelligence Act, as concerns mount over missing compliance tools and increasing pressure from major tech companies.
The AI Act, Europe’s landmark attempt to regulate artificial intelligence technologies, formally entered into force on 1 August 2024, but many of its obligations roll out gradually. Major provisions covering general-purpose AI (GPAI) models are due to take effect on 2 August this year. With just over a month to go, enforcement timelines now hang in the balance.
“If we see that the standards and guidelines (…) are not ready in time, we should not rule out postponing some parts of the AI Act,” European Commission Executive Vice-President Henna Virkkunen said during a meeting of EU digital ministers in Luxembourg, according to POLITICO.
At the centre of the delay worries is the long-overdue Code of Practice guiding compliance for developers of GPAI models like ChatGPT. The final version of the Code, required under Article 56(9) of the AI Act and originally due on 2 May, is still awaiting publication. Without it, governments and companies alike say they lack the tools needed to enforce or meet the law’s expectations.
August rollout in jeopardy
Poland, which currently holds the rotating presidency of the Council of the EU, has emerged as a key voice in the debate. “First, we need to have a plan: what we want to do within those additional months,” said Poland’s junior digital minister Dariusz Standerski, who chaired the Luxembourg meeting. “Only then would Poland be open to the idea [of a delay],” he told POLITICO. “To postpone the enforcement for 12 months and do nothing in the meantime … would be in vain.”
According to MLex, Mr Standerski emphasised that a potential delay must be conditional: “It’s not about freezing the enforcement. It’s about the enforcement under some conditions.” Those conditions, he said, would include the availability of compliance tools such as the Code of Practice and technical standards for high-risk AI systems.
Behind the scenes, the drafting of the Code has become a lightning rod for lobbying. According to meeting minutes reviewed by MLex, tech giants including Microsoft, Google, Amazon, IBM, and OpenAI have repeatedly pushed for changes, arguing that the Code should stay within the AI Act’s legal boundaries, minimise administrative burden, and allow ‘sufficient time’ for implementation.
Civil society sounds the alarm
Each successive draft of the Code has introduced significant changes, often weakening initial proposals for rights protections. Industry pressure reportedly peaked earlier this spring, bolstered by support from the Trump White House, according to PYMNTS.
Rights groups warn that any delay would come at the cost of public protections and regulatory credibility. The Centre for Democracy & Technology (CDT Europe), a Brussels-based non-profit advocating for digital civil rights (with a sister organisation in Washington), told EU Perspectives that the AI Act’s protective power depends on its timely and robust implementation.
“Postponing the entry into applicability of sections of the AI Act risks undermining the Act’s potential to protect people and build trust in AI, at a time when the European Commission has placed significant reliance on AI to drive European competitiveness forward.” – Centre for Democracy & Technology Europe
CDT Europe actively participated in the negotiations and post-adoption processes of the AI Act. During the negotiations, the organisation and other civil society actors pushed for the inclusion of fundamental rights impact assessments and stronger protections for high-risk systems. The organisation said its input shaped some final elements of the AI Act, including the risk-based approach and prohibitions on certain AI practices.
Deregulation through the back door
However, the organisation criticised the drafting process of the Code of Practice. While the latest, still-unpublished version is said to incorporate fundamental rights risks in the mandatory systemic risk taxonomy, a shift from previous drafts, CDT said earlier versions had diluted those protections.
“From a process perspective, the Code of Practice suffered from significant delays and unpredictability,” the group said. “This had real impacts felt particularly by civil society organisations and overall less well-resourced actors… leading to inconsistencies in the quality of the input that participants were able to provide.”
CDT Europe warns that delaying implementation could inadvertently serve as a vehicle for deregulation. “This also shows a concerning alignment with industry calls for simplification,” the group said. It argues that the EU should focus instead on building its governance ecosystem, including the formal setup of the AI Advisory Forum.
Uncertain timelines
The final version of the Code is expected in early July, but that will not settle things. The Code must still pass an “adequacy assessment” by the Commission and EU countries before it earns recognition as a valid compliance instrument. Should member states request changes, it may miss the 2 August deadline, potentially derailing the rollout of GPAI obligations.
According to MLex, internal Commission talks around delays have thus far focused only on the high-risk AI obligations. But the debate appears to be widening, with Germany reportedly “very open” to extending the timeline and Czechia offering strong support for the idea.
For now, the clock is still ticking. But the longer the Commission fails to deliver the necessary enforcement tools, the closer it drifts toward what some stakeholders have already demanded: stopping the rulebook in its tracks.