While AI can strip clothes from a photo in seconds, the law moves far more slowly. The latest scandal involving X’s chatbot Grok has exposed how easily non-consensual sexualised images circulate online, while EU safeguards struggle to keep up. “In some countries, prosecutors simply say their hands are tied,” says synthetic media expert Mateusz Łabuz.
The case follows months of warnings from civil society groups that X has become a key hub for so-called NSTs (non-consensual sexualisation tools), like “nudify apps”. Despite existing obligations under the Digital Services Act, researchers say the platform has failed to curb the spread of such content.
While X has framed the Grok incident as a problem of user misuse, watchdogs argue the issue runs much deeper. According to AlgorithmWatch, accounts openly promoting nudification tools continue to operate in plain sight on X.
A systemic problem
“It is significantly easier – very easy, in fact – to locate accounts spreading NSTs on X than on other platforms,” said Oliver Marsh, Head of Tech Research at AlgorithmWatch. “And that has been the case for months.”
Research by CEE Digital Democracy Watch suggests the trend is accelerating. Studies indicate that up to 96 per cent of deepfake videos online are pornographic, and nearly all target women. Increasingly, minors are also affected, particularly in school settings, as the tools become cheaper, faster, and easier to use.
“There is an ongoing normalisation of this content,” said Mateusz Łabuz, an expert on synthetic media at the organisation. “People still treat it as ‘fun’, but this is a serious violation of psychological, physical and sexual integrity.”
Low barriers for NSTs
Mr Łabuz warns that today’s NST ecosystem is no longer limited to technically skilled users. “Nudify apps”, which use AI to digitally remove clothing from images of real people, are increasingly accessible. Many are free, mobile-friendly, and capable of working with photos uploaded by users.
The expert noted that many of these systems are trained primarily on images of women. This affects how they perform and who is most often targeted, a dynamic that has already led to cases involving minors in school settings.
In South Korea, for example, a surge in deepfake-related sex crimes led to public outcry and legal reform. In 2024, lawmakers criminalised the creation, possession and distribution of sexually explicit deepfake material. The decision followed hundreds of police investigations involving minors.
However, in much of the EU, victims still face legal uncertainty, fragmented laws across member states, and a high risk of secondary victimisation when reporting cases. “Many victims don’t go to the police,” Mr Łabuz said. “They fear humiliation, being blamed, or they are told there is no clear legal path.”
EU response under fire
Following the Grok revelations, the European Commission called the case “illegal, unacceptable, disgusting”. But critics argue that the strong language has not been matched by action. “If they are so disgusted, they should at least be raising, clearly and publicly, stronger and more appropriate measures,” said Mr Marsh. To him, that could mean “another fine” or “demanding Grok use be paused while they investigate”.
In a similar vein, Mr Łabuz argued that responsibility must extend beyond individual users to platforms and AI providers themselves. “If companies are fined properly, they will find ways to stop this,” he said.
After the scandal, X restricted Grok’s image-generation feature to paying users. However, experts say the move fails to address the core problem. “That approach doesn’t reduce harm,” Mr Łabuz stated. “It simply monetises it.”
Regulation still catching up
At the EU level, the regulatory framework remains fragmented. The AI Act imposes transparency obligations on certain synthetic content, including deepfakes, but does not explicitly prohibit the creation or distribution of non-consensual sexualised images. Meanwhile, a separate Directive on combating violence against women and domestic violence requires member states to criminalise cyberviolence, including non-consensual sharing of intimate images, by June 2027. Still, implementation across the bloc is uneven.
Mr Łabuz added that enforcement gaps between member states create legal uncertainty for victims and investigators alike. “In some countries, prosecutors simply say their hands are tied,” he said, pointing to the difficulty of applying laws written for the analogue world to AI-generated abuse.
The specialist referred to a recent case in Bydgoszcz, Poland. Two high school students allegedly used AI tools to create a synthetic nude image of a female classmate by placing her face onto a naked body. The case has drawn public attention because authorities struggled to determine which criminal provisions applied.
Across the EU, comparable cases are handled inconsistently. The outcomes are largely determined by national criminal law, civil remedies, or interventions by data protection authorities rather than an EU-wide framework.
Beyond Grok
As technology companies experiment with the rollout of more permissive generative modes, such as the “spicy” features announced for ChatGPT, experts warn that risks could escalate unless safeguards are enforced before deployment.
For victims, the consequences are already severe. Psychological trauma, fear of social exposure and professional harm are common, while justice systems struggle to cope with the sheer volume of synthetic content now circulating online. “The volume is exploding,” Mr Łabuz said. “Investigators are overwhelmed, and that means real victims risk being missed.”