The European Union’s attempt to regulate artificial intelligence has received its first positive signals from leaders in the field. Whether a couple of swallows make the Union’s summer, however, remains an open question.

The European Commission published the much-anticipated General-Purpose AI Code of Practice (GPAI Code) on July 10, 2025, providing a voluntary framework to help providers of AI models like ChatGPT and Gemini comply with the EU AI Act. The release comes after several months of delay and just weeks before new rules for general-purpose AI models take effect on August 2.

The General-Purpose AI Code of Practice is divided into three chapters, each addressing a key compliance area aligned with the AI Act’s requirements.

Transparency, copyright, safety & security

The first two chapters, Transparency and Copyright, apply to all providers of general-purpose AI models and help them meet the obligations set out in Article 53 of the EU AI Act. The Transparency chapter includes a user-friendly Model Documentation Form that assists providers in recording key information about their AI models, ensuring they meet transparency requirements. The Copyright chapter offers practical guidance for providers to establish policies that comply with EU copyright law.

The Code is designed to help industry comply with the AI Act’s rules on general-purpose AI. — Thomas Regnier, Commission Spokesperson for Tech Sovereignty

The third chapter, Safety and Security, is relevant only to a smaller group of providers: those offering the most advanced AI models that carry systemic risks, addressed in Article 55 of the AI Act, meaning models with large-scale impact, high-risk capabilities, and the potential to harm public safety, fundamental rights, or democratic institutions across society. This chapter outlines state-of-the-art practices for managing those risks, helping providers ensure their models operate safely and securely.

OpenAI and Mistral on board

Thomas Regnier, Commission Spokesperson for Tech Sovereignty, said: “The Code is to help industry comply with the AI Act’s rules on general-purpose AI, which start applying on August 2, 2025. It offers companies less administrative burden and more legal certainty.”

Soon after it was made public, OpenAI and Mistral announced their intention to sign the code, indicating early support from prominent providers. However, the total number of companies expected to sign remains uncertain, and it is not clear if providers will adopt the entire code or only select elements.


Asked about the number of companies likely to sign the code, the Commission spokesperson said: “I’m confident we’ll have a good number of signatories. Mistral and OpenAI, for example, have already announced their intention to sign the code.”

I’m confident we’ll have a good number of signatories. — Thomas Regnier, Commission Spokesperson for Tech Sovereignty

Progress and limitations

Laura Lazaro Cabrera, Counsel and Programme Director for Equity and Data at CDT Europe, welcomed the Code’s inclusion of fundamental rights risks but noted important shortcomings. “The Code of Practice is a first-of-its-kind, ambitious step forward in the governance of GPAI models, but ultimately is only the beginning of the regulatory conversation, with European standards to follow. The Code’s potential will only work fully in practice through ongoing multi-stakeholder dialogue, knowledge-sharing and exchange of best practices to ensure meaningful risk assessments and effective mitigations for these complex and evolving societal impacts,” Ms Lazaro Cabrera said.

The CoP is only the beginning of the regulatory conversation, with European standards to follow. — Laura Lazaro Cabrera, CDT Europe

Following its publication, the Code will be assessed by EU member states and the Commission to confirm its adequacy. While voluntary, signing the Code offers AI providers a clearer, less burdensome path to compliance with the new EU AI Act rules, which aim to ensure AI systems are safe, transparent, and respectful of copyright protections.