Artificial intelligence can “undress” anyone in seconds—without their knowledge or consent. While the EU’s AI law requires so-called deepfakes to be properly labelled, it does not explicitly prohibit AI systems that generate non-consensual sexualised content. Now some lawmakers are trying to change that.
In recent months, nude pictures have flooded the internet. Only they weren’t real photos, but AI-generated versions of original images with the clothes removed—created without the knowledge of the people depicted.
Now, one EU lawmaker, MEP Sergey Lagodinsky (Greens/GER), has decided to change that.
As a rapporteur for opinion in the European Parliament’s Legal Affairs Committee on the AI Omnibus, legislation meant to simplify the bloc’s rulebook for artificial intelligence, he suggested not only removing rules, but also adding one. Specifically, to add AI systems capable of generating or manipulating sexualised audio, images or videos of individuals without consent to a list of prohibited practices named under Article 5 of the AI Act. In simple terms: banning nudifier apps.
From labelling to banning
Under the Artificial Intelligence Act, deepfakes already need to follow transparency rules ensuring that synthetic content is labelled.
These provisions aim to inform users that what they see is not real. However, they do not distinguish between artistic, satirical, political or other uses of deepfakes.
For MEP Lagodinsky, that framework is not enough when it comes to so-called “nudifier” tools or non-consensual sexualisation technologies.
“We need to go beyond transparency for deepfakes that go against human dignity and are partly criminal”, he told EU Perspectives.
The framework already identifies certain AI uses as incompatible with fundamental rights and therefore prohibited. In his view, nudification systems, which digitally remove clothing or generate sexualised images of real individuals without consent, should fall into that category.
His proposed amendment would add a ban on: “The placing on the market, the putting into service or the use of an AI system that can generate or manipulate sexualised audio, images and videos of individuals, thereby facilitating non-consensual sharing of intimate or manipulated material”.
EU regulatory gaps
The proposal comes amid fresh controversy as it emerged that X’s chatbot Grok had been generating sexualised deepfakes.
Yet this is not an isolated case. Research by CEE Digital Democracy Watch indicates that up to 96 per cent of deepfake videos online are pornographic, and nearly all target women.
Increasingly, this affects minors as well, particularly in school settings, as nudification tools become cheaper, faster and easier to access.
So why, in the middle of an expanding web of European regulation, does the problem remain so pervasive?
For Lagodinsky, the reason lies in the structure of the existing framework. The Digital Services Act focuses on platform responsibilities and systemic risks, intervening once harmful content spreads.
By contrast, the AI Act regulates the placing on the market and use of AI systems, which makes it the appropriate instrument for addressing technologies whose primary function may undermine fundamental rights. “This case is about technology,” he said. “The right place to regulate is the AI Act.”
Rather than targeting individual platforms, his proposal would address the upstream capability. “We are banning a specific technology,” he said. “Any AI that enables this particular option should be taken offline.”
A missed opportunity or a second chance?
The AI Act requires the Commission to periodically assess whether to update the list of prohibited practices.
For MEP Lagodinsky, the absence of an explicit reference to non-consensual nudification tools reflects a missed opportunity. “They missed this opportunity this year”, he said, referring to earlier discussions on expanding the list of banned practices.
The debate now unfolds in the context of the Parliament’s negotiations on the Digital Omnibus on AI. Much of the political discussion, however, has centred on deregulation, focusing on easing compliance burdens and delaying some obligations for high-risk systems.
“I don’t see why the Omnibus should be just for deleting”, Lagodinsky said. “It should also be about creating more legal certainty. For example, having a clear ban under the law.”
And support for the ban goes beyond the Greens. “Within the Parliament, some other groups are supportive of this ban”, Lagodinsky noted. “Also, in the Council, there are voices who would go in a similar direction.”
But at the same time, the MEP criticised what he sees as a gap between public condemnation and legislative action.
“Groups should put their mouth where their word is”, he said. “I expect that all those who are now condemning this technology will support the ban.”