As the EU enforces its digital rules on online platforms, its relationship with major tech companies is entering a more confrontational phase. A recent €120m fine against X under the Digital Services Act has drawn strong reactions from US politicians and platform owners, putting the spotlight on the debate over how far Brussels can go in applying its digital laws.
EU Perspectives spoke with MEP Sandro Gozi (Renew/FR), a member of the DSA working group, about the decision and what it means for future enforcement of EU digital legislation. He also discussed upcoming plans for the Digital Fairness Act, EU rules for generative AI, privacy, and Parliament’s digital priorities for the coming year.
Renew welcomed the fine against X, but the backlash from the US has been sharp. How confident are you that the EU can follow through on DSA enforcement as political pressure ramps up?
We welcomed the fine because it confirms a basic principle: no platform is above the law, including X. The decision is based on concrete transparency breaches – deceptive blue checkmarks, an opaque ad library, lack of access for researchers – not on political views.
What matters now is follow-through. The Commission should continue investigations into information manipulation and recommender systems, and impose further sanctions if violations are confirmed. The legal basis is solid: the DSA provides clear obligations and strong enforcement tools, from periodic penalties to larger fines and, ultimately, service restrictions.
This is not about geopolitics or “punishing American companies”. It is about regulating access to the EU single market and protecting European consumers. If the EU backed down under pressure, it would undermine the entire digital framework. That is simply not an option.
Given Musk’s attacks on the EU, does it make sense for EU representatives to continue using X?
Musk’s attacks are irresponsible, but millions of Europeans are still on X. Abandoning the platform entirely would mean leaving the public square to extremists and disinformation.
That said, X should never be treated as critical infrastructure. EU institutions should diversify their communication channels, keep a clear “home base” on official websites and alternative platforms, and reduce dependency on any single private actor.
In short: stay present for now, enforce the rules strictly, diversify channels, and prepare an exit strategy in parallel.
Looking at the upcoming Digital Fairness Act, what gaps should it address – and how should it complement existing rules?
The Digital Fairness Act should target what currently falls between the cracks. The DSA is the lex generalis; the DFA should be narrow, precise and complementary.
Key gaps include dark patterns and addictive design. These are only indirectly covered today through consumer law, GDPR or the AI Act, with fragmented enforcement. The DFA should clearly ban manipulative interfaces – deceptive buttons, endless consent nags, deliberately confusing cancellations – and design exploiting children or vulnerable users.
It should also address exploitative influencer and behavioural marketing, where transparency rules remain patchy. The goal is not overlap, but clarity, harmonisation and faster enforcement.
ChatGPT may soon also fall under DSA obligations, alongside the AI Act. How far should the DSA go in regulating Gen-AI?
The two frameworks are complementary. The AI Act should regulate the technology itself: risk management, safety, training data governance, and oversight for high-risk uses.
The DSA should focus on Gen-AI as a digital service: systemic risk assessments, mitigation of disinformation and manipulation, transparency around algorithms, notice-and-action mechanisms, cooperation with trusted flaggers, and access for vetted researchers.
Put simply: the AI Act governs how AI is built; the DSA governs how it behaves in the online ecosystem. Together, they protect users and democracy while preserving innovation.
On the Digital Omnibus proposal, Renew warns against weakening privacy. What are your main concerns?
Simplification must not become a Trojan horse for weaker rights. Redefining “personal data” too narrowly risks removing large volumes of data from GDPR protection.
We are also concerned about watering down consent and increasing discrimination risks. If safeguards are lowered, big platforms benefit most. SMEs and citizens don’t need weaker GDPR – they need clearer guidance, interoperable tools and effective enforcement.
Our message is simple: yes to genuine simplification and less bureaucracy; no to dismantling GDPR and ePrivacy through the back door.
Looking ahead to 2026, what should Parliament’s main digital priorities be?
First: enforcement. We must ensure real compliance with the rules we have adopted.
Second: make the Digital Fairness Act a true consumer shield, tackling dark patterns, addictive design, manipulative marketing and stronger protection for minors.
Third: defend the core of GDPR and ePrivacy in the Digital Omnibus, clarifying overlaps without weakening rights.
Finally: strengthen cybersecurity and digital resilience, including for EU institutions and critical infrastructure, while reducing unnecessary burdens for SMEs.
Overall, the priority is clear: defend Europe’s digital sovereignty by enforcing our rulebook, closing real gaps, and resisting any “simplification” that actually means fewer rights for Europeans.