The European Union begins enforcing a key chapter of its Artificial Intelligence Act, as obligations for providers of general-purpose AI (GPAI) models become applicable. While the Act officially entered into force last year, today marks the first time concrete compliance requirements, covering transparency, copyright, and safety, begin to apply. It’s a critical milestone in the EU’s broader effort to regulate AI systems before the law applies in full in August 2026.
From today, providers of GPAI models, including the large language models behind Google’s Gemini and OpenAI’s ChatGPT, must comply with new obligations under the AI Act. These include transparency about how the models are trained, adherence to copyright protections, and disclosure of training data sources.
GPAI models are defined as those trained using at least 10²³ FLOP (floating point operations) and capable of generating language. Developers of even more powerful systems, those exceeding 10²⁵ FLOP, face additional requirements, including notifying the Commission and demonstrating their model’s safety and security.
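To give a rough sense of what those thresholds mean in practice, the short sketch below is illustrative only: it is not part of the Act or the Commission’s guidance, and it relies on the widely used approximation that training compute is roughly six times the number of parameters multiplied by the number of training tokens. The model sizes and token counts are hypothetical.

```python
# Illustrative sketch: where a model might fall relative to the AI Act's
# GPAI compute thresholds. The ~6 * parameters * tokens rule of thumb is a
# common heuristic for dense transformer training, not a definition from
# the Act; all figures below are hypothetical.

GPAI_THRESHOLD = 1e23           # indicative threshold for GPAI models (FLOP)
SYSTEMIC_RISK_THRESHOLD = 1e25  # threshold triggering additional obligations (FLOP)

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the ~6 * N * D heuristic."""
    return 6 * parameters * training_tokens

def classify(parameters: float, training_tokens: float) -> str:
    flop = estimated_training_flop(parameters, training_tokens)
    if flop >= SYSTEMIC_RISK_THRESHOLD:
        return f"{flop:.1e} FLOP: GPAI model subject to the additional obligations"
    if flop >= GPAI_THRESHOLD:
        return f"{flop:.1e} FLOP: in scope as a GPAI model"
    return f"{flop:.1e} FLOP: below the indicative GPAI threshold"

# Hypothetical examples: a 7-billion-parameter model trained on 2 trillion
# tokens, and a 500-billion-parameter model trained on 15 trillion tokens.
print(classify(7e9, 2e12))     # ~8.4e22 FLOP, just under 1e23
print(classify(500e9, 15e12))  # ~4.5e25 FLOP, above 1e25
```

On this heuristic, the first hypothetical model would fall just short of the GPAI threshold, while the second would land in the category facing the extra notification and safety requirements.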
Safe and transparent
To support implementation, the European Commission has issued official guidelines clarifying how the GPAI-related provisions should be interpreted. According to tech commissioner Henna Virkkunen, the aim is to ensure a smooth rollout of the rules by giving providers legal certainty on the scope of their obligations. “We are helping AI actors, from start-ups to major developers, to innovate with confidence,” she said, while also ensuring models remain “safe, transparent, and aligned with European values.”
The Commission and member states have also endorsed the voluntary GPAI Code of Practice as a tool for providers to demonstrate compliance. Signing the code offers providers a reduced regulatory burden and increased legal certainty.
“Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU’s AI Act.” – Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy
At the same time, all EU Member States must designate national competent authorities responsible for enforcing the AI Act. These authorities must be notified to the European Commission, and Member States must establish rules on penalties, including administrative fines for non-compliance. Penalties vary based on the severity of the breach (a rough illustration follows the list):
- Up to €7.5 million or 1% of global turnover for supplying incorrect information
- Up to €35 million or 7% of global turnover for severe violations such as prohibited AI practices
- Up to €15 million or 3% of global turnover for non-compliance with other obligations, including high-risk AI system requirements and those applying to general-purpose AI providers
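As a rough illustration of how these ceilings interact with company size, the sketch below assumes, as the Act’s penalty provisions generally provide, that the applicable maximum is the higher of the fixed amount and the turnover-based figure; the turnover used is hypothetical.

```python
# Illustrative sketch of the AI Act's fine ceilings, assuming the higher of
# the fixed amount and the turnover-based amount applies (not legal advice;
# figures are hypothetical).

PENALTY_TIERS = {
    "prohibited_ai_practices": (35_000_000, 0.07),  # €35M or 7% of global turnover
    "other_non_compliance":    (15_000_000, 0.03),  # €15M or 3%
    "incorrect_information":   (7_500_000, 0.01),   # €7.5M or 1%
}

def max_fine(breach: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a breach type, in euros."""
    fixed, share = PENALTY_TIERS[breach]
    return max(fixed, share * global_turnover_eur)

# Hypothetical provider with €10 billion in global annual turnover:
print(f"€{max_fine('prohibited_ai_practices', 10e9):,.0f}")  # €700,000,000
print(f"€{max_fine('incorrect_information', 10e9):,.0f}")    # €100,000,000
```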
Implementation challenges
The activation of the AI Act’s enforcement provisions marks a major milestone, but much of the infrastructure required to make those rules operational is still missing.
“Delays in the appointment of national-level regulators are likely,” warns Laura Lazaro Cabrera, Counsel and Director of the Equity and Data Programme at CDT Europe, “but that doesn’t make them any less costly: without national regulators, there is no one to enforce key AI Act rules, including the prohibitions on AI systems with unacceptable risk.”
“Without national regulators, there is no one to enforce key AI Act rules” – Laura Lazaro Cabrera, Counsel and Director of the Equity and Data Programme at CDT Europe
The absence of designated authorities undermines the Act’s immediate enforceability and could stall the application of its most critical safeguards. Lazaro Cabrera emphasises that these appointments must not only happen quickly, but be strategic: “It’s imperative that national authorities are appointed as soon as possible and that they are competent and properly resourced to oversee the broad range of risks posed by AI systems, including those to fundamental rights.”
She adds that effective enforcement will require interdisciplinary oversight: “Appointments must reflect the interdisciplinary scrutiny needed for AI systems, and leverage the experience and tech expertise of existing regulators in the digital field. For example, data protection authorities have long reflected on fundamental rights risks posed by technology and have a key role to play as enforcers of the AI Act.”
Industry pressure and civil society pushback
The rollout has not been without controversy. Since its drafting, the AI Act has sparked fierce debate and criticism. Central to recent tensions was the long-awaited Code of Practice for general-purpose AI (GPAI) models, which was due in May but was only published on July 10, 2025, just weeks before key provisions of the Act came into force. The delay fuelled industry concerns and even triggered calls from some stakeholders to “stop the clock” on implementation until technical standards and guidance were fully in place.
On the other hand, civil society organisations pushed back strongly. In a joint statement, a coalition of 52 NGOs and academics, including CDT Europe, BEUC, and EDRi, warned the Commission that any delay would “undermine rights, public trust, and Europe’s digital leadership”.
The GPAI Code of Practice is a voluntary tool to help providers comply with transparency and copyright obligations under the AI Act. It is accompanied by official Commission guidelines clarifying key GPAI concepts. Both the European Commission and the AI Board have confirmed that adherence to the Code is a valid way for companies to demonstrate compliance.
The European Tech Alliance, which represents 33 European tech companies, told EU Perspectives they welcomed the Code as an important milestone but noted that businesses are watching closely: “The GPAI Code and the guidelines from the European Commission are key to ensuring the uptake of responsible AI in Europe. European tech companies need clear, consistent and predictable rules… to foster innovation and tech leadership.”
Copyright and transparency
In a joint statement, a broad coalition of European and international authors, artists, publishers, and producers has strongly criticised the recently published European AI Code of Practice and its accompanying guidelines. They argue that the current implementation of Article 53 of the AI Act fails to adequately protect their intellectual property rights against unauthorised use by generative AI models.
The coalition warns that this shortfall threatens one of Europe’s most important cultural and economic sectors, accounting for nearly 7% of the EU’s GDP and employing millions, and risks enabling structural copyright infringements. They call on the European Commission, Parliament, and Member States to intervene with stronger enforcement measures to truly defend creators’ rights.
US tech and international impact
The AI Act’s influence is extending well beyond EU borders. Several major US tech firms have reportedly signed the GPAI Code of Practice even as US officials express concerns about the bloc’s regulatory approach.
Most US companies, despite voicing reservations, have shown support for the code. Meta stands out as the only major US player publicly refusing to sign, criticising the framework as legally ambiguous and exceeding the AI Act’s intended scope. In contrast, Microsoft, Google, Anthropic, OpenAI, and France’s Mistral AI have endorsed the voluntary Code.
In a more nuanced position, xAI has announced it will sign only the safety and security chapter of the GPAI Code of Practice, which is divided into three chapters: transparency, copyright, and safety and security. While supporting the safety provisions, xAI criticises other sections, particularly the copyright chapter, for impeding innovation and overstepping the Act’s scope.
CDT Europe also raises concerns about how divergence in industry engagement with the GPAI Code of Practice could complicate compliance oversight. While several companies are signing on, some, including Meta, are not.
“Refusal to sign the Code of Practice is a compliance roll-of-the-dice,” Lazaro Cabrera notes. “The obligations in the AI Act apply irrespective of what the Code requires, and providers choosing not to sign… will have to convince the Commission that they comply with the AI Act despite doing things differently.” She points out that since the Code sets baseline requirements, “any departure or discrepancy will legitimately raise questions as to whether proposed alternative measures are sufficient to meet the Act’s requirements.”
Next chapters
The full AI Act will apply from August 2, 2026, covering high-risk AI systems, transparency disclosures, and user protections. Between now and then, Member States must build up enforcement capacity, industry must adjust development pipelines, and regulators must iron out technical ambiguities.
Whether the AI Act ultimately becomes a global benchmark or a cautionary tale will depend on how these next 12 months are handled, and whether Europe’s model of “trustworthy AI” proves workable in practice.