Companies currently have no clear way to prove compliance with the EU’s AI Act. “The issue is the absence of harmonised standards,” explains Member of the European Parliament Michael McNamara. The politician outlines why key rules may be delayed and where the toughest battles in upcoming negotiations will be fought.

Trilogue negotiations are now underway, following the European Parliament's vote to delay key obligations and to ban nudifier apps. In an interview with EU Perspectives, Michael McNamara (Renew/IRL), rapporteur on the AI file under the European Commission's AI Omnibus package, speaks about the expected friction points, the risk of blurring the line between simplification and deregulation, and why energy, not regulation, could become Europe's biggest AI problem.

The European Parliament recently backed a ban on so-called nudifier apps. How quickly will it take effect, and what must platforms do to comply?

The proposal is to ban the non-consensual use of these systems, classifying it as a prohibited practice under the AI Act. It would take effect very quickly after adoption, within weeks of publication in the Official Journal, so likely around mid-2026.
The ban applies specifically to non-consensual use. Using such tools with consent would still be allowed. But providers offering these services must establish and verify that consent. That would be their responsibility, even if it is technically difficult.

You supported delaying high-risk AI obligations from 2026 to 2027/2028. What is missing today?

The key issue is the absence of harmonised standards. Without them, there is no clear way for companies to demonstrate compliance with the high-risk requirements. We cannot afford to reach the new deadlines and still not have those standards in place. Once they are in place, companies will be able to say: these are the requirements, these are the benchmarks set by independent bodies, and this is how we meet them.

What does this delay mean in practice? What should companies be doing in the meantime?

At the moment, there is no specific regulatory framework applying to these high-risk systems. This is similar to the situation that pertains in jurisdictions like the United States, Canada, or the UK. Industry often argues that putting rules in place in Europe can be detrimental to competitiveness when other regions do not have equivalent requirements.

But regulation is there to ensure AI systems are reliable, safe and secure, particularly in high-risk areas. We are already seeing problems emerging in different sectors globally, and that is something Europe is trying to avoid. The European approach focuses regulation on high-risk uses. The majority of AI systems will not be affected, only those with the most significant potential impact.

The AI Omnibus is entering trilogue negotiations with the Council. Where do you expect the biggest friction points?

There is considerable convergence between the Parliament and the Council. The differences are mainly matters of wording and emphasis.

One of the more difficult points will be the proposals on how certain systems are treated under the framework, particularly the idea of moving some of them into another regulatory category, where they would rely on product rules. Some present this as simplification, but others, including the Commission, have reservations.

This proposal initially had limited support at the Council, and alternative approaches have also been discussed. But I believe that a compromise acceptable to both co-legislators can be found.

Are some member states pushing for further simplification of the AI Act?

There are clearly different positions among member states. Some governments support changes to avoid overlap between the AI Act and existing legislation, such as the rules on machinery, radio equipment, or toys. That has also been a strong demand from industry.
But others urge caution and point out that there is a difference between simplification and deregulation. Delaying the implementation of harmonised standards could be seen as deregulatory, even if it is presented as simplification.

The AI Act was proposed in 2022, before the recent surge in AI development. Will it still be fit for purpose by the time key rules apply?

I don’t think we will need entirely new legislation in the short term. The framework allows for changes through delegated legislation. That means definitions, such as what is considered high-risk or how general-purpose AI is treated, can be updated over time.

The bigger challenge may not be regulation itself, but resources. AI development requires a lot of energy, and that is something Europe will need to address going forward, especially given the situation in the Gulf and the dependency of many EU member states on hydrocarbons.