It sounds like an early Christmas present for anyone uneasy about the idea of their face being pasted onto someone else’s naked body. The EU has agreed to ban so-called ‘nudifier’ apps — AI tools capable of generating sexualised images and videos of real people without their consent. The new rules will come into force on 2 December.
The restriction will cover both the use of such systems within the EU market and their distribution or release without proper safeguards. In plain terms, developers and platforms will no longer be allowed to offer AI tools designed primarily to create these non-consensual intimate images.
“We are banning nudification apps and, of course, the creation of child sexual abuse material using AI systems. This way, we have the tools to act if providers do not address AI systems that compromise fundamental rights or human dignity,” co-rapporteur Michael McNamara (Renew/IRL) stated.
The agreement between the European Parliament and member states is part of a broader overhaul of the EU’s Artificial Intelligence Act. The package, known as Digital Omnibus, aims to streamline AI regulation while tightening rules in areas seen as particularly problematic.
Focus on child protection
Nudifier apps—which can strip clothing from photographs and generate explicit imagery—became a focal point of the negotiations, alongside child sexual abuse material (CSAM). Lawmakers emphasised that the new rules target specific harmful applications rather than AI technology itself. “We ensured in the deal that we are not affecting the technology and the capabilities of AI providers,” said co-rapporteur Arba Kokalari (EPP/SWE).
One of the vocal supporters of the ban was MEP Markéta Gregorová (Greens-EFA/CZE). She argued during negotiations that these tools are far from harmless entertainment and amount to a form of digital abuse. “These tools have no place in our society. The victims are primarily women and children,” she said.
The issue has gained further traction recently following a scandal involving the social platform X and its AI chatbot Grok, which users exploited to generate sexualised images of real people without consent. The case triggered widespread criticism and regulatory scrutiny, including action from the UK communications regulator Ofcom.
Pushing back deadlines
The newly reached agreement is not only about prohibitions. The EU is also moving deadlines for other parts of its AI framework. Requirements to watermark AI-generated audiovisual content will now apply from December, rather than the earlier planned date. Other key obligations under the AI Act are being delayed as well: rules for high-risk systems are now expected in 2027, while sector-specific measures have been pushed to 2028.
The AI Act changes should reduce overlapping requirements by ensuring that AI in machinery mainly follows sector-specific safety rules instead of multiple regulatory frameworks. This should simplify compliance while keeping safety protections in place. “It will make significant impact to cut red tape for European startups and European industries,” Ms Kokalari added.
Small and mid-sized companies will get relief from certain obligations, and oversight of some general-purpose AI systems will be handled more centrally by the EU AI Office.
The package still needs formal approval from the Parliament and member states, but the political agreement is already in place. And that signals a clear shift: within the EU, the era of non-consensual ‘digital undressing’ is coming to an end.