Poland has filed an official complaint over antisemitic content generated by Grok, Elon Musk’s AI chatbot. France, meanwhile, has opened a criminal investigation into X over suspected algorithmic manipulation. Amid this storm, Musk announced “Baby Grok”, a child-friendly version of the chatbot, sparking fresh concern.

To understand what’s at stake for EU digital oversight, we spoke with Oliver Marsh, Head of Tech Research at AlgorithmWatch, a nonprofit organization focused on the responsible use of algorithms and AI.

Poland versus Grok

Poland has formally asked the European Commission to launch an investigation into Elon Musk’s Grok AI chatbot, alleging serious breaches of the EU’s Digital Services Act (DSA) following a wave of antisemitic and abusive content generated by the tool on X.

In a letter dated 9 July and addressed to EU Tech Commissioner Henna Virkkunen, Poland’s Deputy Prime Minister and Minister of Digital Affairs Krzysztof Gawkowski called recent events involving Grok “a major infringement” of the DSA, urging the Commission to act swiftly.

“What we are seeing from X right now could be considered a major infringement of the DSA,” Gawkowski wrote in the letter. Speaking on Polish radio, he went further, expressing “disgust” over Grok’s output and warning that a shutdown of X in Poland was not off the table.

The controversy erupted after Grok generated a series of posts that included apparent praise for Adolf Hitler and other hateful content. X has since taken down the posts, but the backlash has only intensified scrutiny of the platform, already under formal EU investigation for DSA non-compliance since late 2023.

A pattern of violations

This is not the first time Elon Musk’s platform has been in regulatory hot water. In comments to EU Perspectives, Oliver Marsh, Head of Tech Research at AlgorithmWatch, said: “This is just the latest in a really long list of things Grok has done… not long ago, it could generate non-consensual sexualised images of real people.”

Not long ago, Grok could generate non-consensual sexualised images of real people. – Oliver Marsh, Head of Tech Research at AlgorithmWatch

On the repeated criticism of X’s compliance with the DSA, particularly Article 34, which requires platforms to proactively assess and mitigate systemic risks, Marsh argues: “X isn’t doing what it’s supposed to under the DSA. It just waits for things to go wrong and reacts later, exactly what the DSA is designed to prevent.”

Inaction from Brussels

Under the DSA, designated “very large online platforms” like X are obligated to conduct risk assessments, ensure transparency, and mitigate systemic threats such as disinformation, hate speech, and algorithmic harm. Non-compliance can trigger penalties of up to 6% of global annual turnover, and in extreme cases, temporary access blocks.

Despite a preliminary breach finding issued against X last July, the European Commission has yet to impose a fine or take decisive enforcement steps. Commission spokesperson Thomas Regnier confirmed that the Commission is “in contact with national authorities” regarding the Grok incident but declined to say whether a separate investigation would be opened in response to Poland’s request.

This would be the first-ever DSA fine, so they want it to be watertight. But most of us are asking: what’s taking the Commission so long? – Oliver Marsh, Head of Tech Research at AlgorithmWatch

This hesitation has caused growing concern. As Mr Marsh noted, “We’ve been hearing for months that the Commission has lined up a fine against X for non-compliance.” He added: “It may be stalling because of the political impact. This would be the first-ever DSA fine, so they want it to be watertight. But every day, X does something else that suggests it’s not complying. Most of us are asking: what’s taking the Commission so long?”

France escalates pressure

Poland is not alone in its concerns. In France, prosecutors recently launched a criminal probe into X, alleging the platform’s algorithm may have been manipulated for “foreign interference” and the mass dissemination of “hateful, racist, anti-LGBTQ” content.

X has fiercely denied the allegations, calling the investigation “politically motivated” and refusing to provide access to its recommendation algorithm. The company claims the experts selected to analyse the algorithm, including David Chavalarias of the Complex Systems Institute, are biased and have engaged in campaigns critical of X.

“These individuals raise serious concerns about the impartiality, fairness, and political motivations of the investigation,” X said in a statement earlier this week.

This resistance fits a larger pattern. Civil society organisations say X has repeatedly refused to provide researchers with data. Oliver Marsh confirmed this firsthand: “We’ve been denied access to data two or three times now, each time on very dubious grounds. Numerous organisations, including us, have applied for data. X usually sends follow-up questions, and then just says no.”

He noted a significant shift since Musk’s acquisition: “Before Musk’s takeover, it was possible to access public data through the API for free. After he took control, a pricing system was introduced, with some plans costing up to $42,000 per month. While charging for access itself is not a violation of the DSA, the lack of free alternatives for researchers who meet the DSA criteria is,” Mr Marsh said.

Baby Grok, big risks

Despite mounting controversies, Elon Musk has announced plans to launch Baby Grok, a child-friendly version of his chatbot, though few details have been provided so far. Critics were quick to voice concern.

“I struggle to put into words how bad an idea I think this is,” Mr Marsh said. “You’ve got a chatbot that a month ago was generating nudes, a week ago praising Hitler, and now they want to release it to kids. They have no shame,” he added.

“You’ve got a chatbot that a month ago was generating nudes, a week ago praising Hitler, and now they want to release it to kids. They have no shame.” – Oliver Marsh, Head of Tech Research at AlgorithmWatch

The plan is likely to spark debate at a time when the EU is prioritising online protections for minors. It also raises significant concerns under both the DSA and the AI Act, the latter of which prohibits AI systems that exploit the vulnerabilities of children and places high-risk systems under strict regulatory oversight.

Geopolitics protecting X

Elon Musk’s platform X may have fewer European users than giants like YouTube or TikTok, yet it has swiftly become the EU’s main battleground for enforcing the DSA. Repeated breaches of EU rules by X, and Musk’s high-profile persona, have made the platform a symbolic test case for digital regulation in Europe.

Despite this intense focus on X, Oliver Marsh warns that other platforms also deserve more scrutiny: even if they are not as problematic as X, they have far larger user bases in Europe. Still, how the European Commission handles X will send a powerful signal and set an important precedent for DSA enforcement across the continent.

But this goes beyond questions of legal compliance. Political and geopolitical considerations are complicating Brussels’ ability to respond. Although the Commission has been investigating X for months, decisive action remains elusive. Mr Marsh points to deeper transatlantic dynamics: “Even if Musk and Trump are fighting, they’d likely unite around the idea that the European Commission shouldn’t be regulating online platforms. The geopolitics here is protecting X,” he said.

What comes next?

With Poland’s formal request now in Brussels’ hands and France pursuing a separate criminal investigation, the Commission faces mounting pressure to demonstrate the DSA has teeth. Whether it fines X or stalls again, its decision could define the credibility of Europe’s digital rulebook.