Key rules governing high-risk artificial intelligence systems will not take effect until late 2027 at the earliest, after the European Parliament voted to delay core parts of the EU’s AI Act. The same vote backed a ban on so-called “nudifier” apps, which generate explicit images of real people without their consent. MEPs approved the position by 569 votes to 45, clearing the way for negotiations with EU governments.

The vote follows last week’s committee agreement, which already supported postponing rules for high-risk AI systems—a move welcomed by industry but criticised by digital rights groups. In a joint statement, AI safety and governance organisations warned the changes could weaken the AI Act’s broader framework if too many systems fell outside its scope.

In its original proposal, the European Commission did not set a fixed start date. Instead, it suggested the rules should apply only once technical standards and guidance were ready, with a delay of up to 16 months.

Clear dates for AI rules

Parliament set fixed dates instead. Under its position, rules for stand-alone high-risk AI systems would apply from 2 December 2027, rather than August 2026. These include systems used in biometrics, critical infrastructure, education, employment, law enforcement, and border management. For AI systems already covered by EU product safety laws, such as medical devices and radio equipment, the deadline would extend to 2 August 2028.

MEPs also set 2 November 2026 as the deadline for complying with transparency rules, such as labelling AI-generated content.

Ban on ‘nudifier’ apps

Alongside the delay, lawmakers backed a ban on AI systems used to create or manipulate sexual images of real people without their consent. The measure targets so-called ‘nudifier’ apps, which can generate explicit images from ordinary photos. The ban would not apply to systems that include safeguards preventing such misuse.

The issue has become one of the most high-profile aspects of the legislation, amid growing concern over deepfake abuse online. Dutch Greens MEP Kim van Sparrentak described the ban as “a huge win for women’s rights and child protection”, adding that women across the EU were being targeted by tools that stripped them of their dignity and made them vulnerable to blackmail and abuse.

More flexibility for companies

The Parliament position also eases compliance for businesses. MEPs agreed that companies may process personal data to detect and correct bias in AI systems, but only when strictly necessary and with safeguards in place. They also supported extending certain support measures to small mid-cap companies, helping them as they grow beyond small and medium-sized enterprise (SME) status.

Another key change concerns products already regulated under EU safety laws. Parliament argues AI Act requirements can be lighter in such cases, to avoid overlap. This reflects industry concerns about double regulation. Business groups have warned that companies could otherwise face parallel requirements under both the AI Act and sector-specific rules, increasing costs and delaying market entry. Ahead of the vote, industry representatives argued the AI omnibus was a chance to address these issues, particularly in sectors such as healthcare, manufacturing, and connected devices.

Council position aligns with delays

EU governments have taken a similar approach. The Council also backed fixed dates for delayed high-risk AI rules, and member states support banning AI systems used to generate intimate images without consent.

With both Parliament and Council having agreed their positions, interinstitutional negotiations can now begin. The talks will determine the final shape of the rules—including exact deadlines and the scope of the nudifier ban. The outcome will set the tone for how the EU balances AI innovation with the protection of fundamental rights.