Generative artificial intelligence (Gen AI) is no longer an emerging threat to democratic politics but a present reality. That was the stark message of a public hearing at the European Parliament on Thursday, 18 July, at which lawmakers, analysts and civil society groups warned that AI has become a central tool for spreading disinformation and shaping public opinion.

The event, hosted by the Parliament’s Special Committee on the European Democracy Shield, focused on the growing role of generative AI in democracy. Over the course of the morning, experts laid out how AI tools distort information, undermine trust in democratic institutions, and shift public opinion at an unprecedented scale and speed.

Information warfare

Rami Ben Efraim, a retired general in the Israel Defense Forces, founded the startup Planet 9 in 2023 to handle cybersecurity within his main strategic consulting business, the Singapore-based BNF Group. He outlined how AI has accelerated geopolitical disinformation. “While defence budgets rise and military forces tighten borders and streets, the digital space is left undefended,” he said. AI tools such as deepfakes and synthetic personas have become weapons of statecraft. “(Chinese President) Xi Jinping recently declared that information warfare is the main battlefield,” Mr Ben Efraim noted. “China and Russia have the skills and scale.”

He emphasised the need for greater visibility and transparency: “We need full, real-time access to what happens on platforms, Western and foreign. If we can’t see the threat, we can’t counter it.” Mr Ben Efraim also called for scrutiny of platform algorithms and AI systems, citing evidence of systemic bias in large models. “Freedom of speech was given to people, not to bots. Not to AI agents.”

Xi Jinping recently declared that information warfare is the main battlefield. – Rami Ben Efraim, head of the BNF Group

Warning about AI’s future impact, he said a teenager in 2035 might interact with the world mostly through biased AI assistants, and that young generations will see the world through the lens of large language models (LLMs). Reflecting on the recent US presidential elections, he said, “We were asked to examine whether a major LLM was subtly manipulating political content. We came back with this disturbing answer: yes, the LLM is biased and systematically puts Republican figures at a disadvantage – not just Trump, but all Republicans.” He concluded that LLMs can discriminate and, in the hands of malicious actors, shape minds without users even knowing.

Old game, new weapons

From fake news to “fake people”, the tools of interference have evolved. MEMO 98 is a Slovakia-based non-profit with almost a quarter of a century of experience in analysing the pre-election behaviour of both traditional and social media. Its Executive Director Rasťo Kužel recounted a personal story from 1999. As a media analyst observing elections, he once spotted an exact replica of a trusted local newspaper, identical in layout, but with one key difference: the content had been altered to praise the sitting president. Half a million fake copies circulated among the public. “That was my first encounter with disinformation,” Mr Kužel said. “Today, generative AI allows the same tactic, but faster, cheaper, and with much greater precision.”

Generative AI is now formally part of the election manipulation toolkit. – Rasťo Kužel, Executive Director of MEMO 98

Mr Kužel warned that synthetic voice cloning, real-time impersonation, and AI-generated propaganda are now commonplace, citing analysis of over 800,000 social media posts from Slovakia and Czechia. The findings showed widespread AI-generated disinformation targeting journalism and (mostly left-wing) activism on issues like the increasing presence of women in politics.

His conclusion was stark: “In 2024, over 80 per cent of countries with competitive elections experienced AI-related manipulation. Generative AI is now formally part of the election manipulation toolkit.”


The disinformation business

The hearing illustrated how platforms’ financial and algorithmic incentives fuelled the use of AI. Victoire Rio, Executive Director of What to Fix (a tech policy non-profit focused on internet integrity), argued that EU laws like the Digital Services Act (DSA) and the Digital Markets Act (DMA) are out of sync with how platforms have evolved. “A lot has changed in (the past) five years,” she said, noting that platforms now monetise content without creator consent and distribute royalties without oversight.

Ms Rio highlighted the key changes driving this shift: the rise of social media monetisation schemes, widespread inauthentic automation, and a redefinition of advertising models. Over the past three years, every major social media company has started redistributing revenue, with platforms now sharing more than €20bn, a figure likely to be even higher in 2025. Regarding inauthentic automation, she revealed, “It is now possible to order a complete phone farm on TikTok, delivered to your door anywhere in Europe for just a little over 100 euros.”

A phone farm is a bank of mobile phones, usually housed in one physical location, that performs automated tasks for fraudulent or manipulative purposes: generating fake clicks on advertisements, creating artificial engagement on social media, or manipulating app store ratings and reviews. Meanwhile, platforms are exploring ways to replace human creators with AI-generated content in order to eliminate royalties, a move that risks further squeezing out authentic voices.
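Coordinated activity of this kind tends to leave statistical fingerprints. As a minimal illustration (not a tool discussed at the hearing; all names and thresholds here are hypothetical), the Python sketch below flags clusters of accounts posting near-identical text within a short time window, one classic signature of a phone farm:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    text: str
    timestamp: float  # seconds since epoch

def normalise(text: str) -> str:
    """Crude normalisation: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def flag_coordinated_posts(posts, window_s=300.0, min_accounts=20):
    """Group posts by identical normalised text within a time bucket,
    then flag groups posted by suspiciously many distinct accounts."""
    buckets = defaultdict(list)
    for post in posts:
        key = (normalise(post.text), int(post.timestamp // window_s))
        buckets[key].append(post)
    return [
        group for group in buckets.values()
        if len({p.account_id for p in group}) >= min_accounts
    ]
```

Real integrity teams combine many more signals (device fingerprints, IP ranges, posting cadence), but the underlying logic is the same: authentic audiences rarely say exactly the same thing at exactly the same time.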

Oversight of monetisation

To address these issues, Ms Rio called for stronger regulation. “Possibly through the upcoming Digital Fairness Act, we need to go after automation farms by making the sale of inauthentic automation products and services illegal. At the same time, there must be solid oversight of monetisation services. We need it to mitigate abuse and guarantee the viability of authentic content production,” she said.

It is now possible to order a complete phone farm on TikTok, delivered to your door anywhere in Europe for just a little over 100 euros. – Victoire Rio, Executive Director of What to Fix

Ms Rio stressed the need to treat monetisation as a core platform service. She advised closing the loopholes that let social media companies separate their advertising and content businesses to avoid DMA enforcement. Some of this can be achieved through the DSA, especially if commissioned content is treated like commercial communication.

Algorithmic ideology

Polish consulting analyst Grzegorz Lewicki warned of the deeper cultural and psychological effects of generative AI. “Teenagers today are forming emotional bonds with chatbots. They treat chatbots not only as a window to the internet but as partners, someone you can trust,” he said. He also described AI as an “identity-shaping power.”

These systems are not neutral, he argued: their ideological calibration can shape users’ mindsets, and their owners, driven by political goals or business interests, can leverage user profiles for influence. He also described the “dead internet” phenomenon, in which bot farms create fake dialogues that mimic consensus. “When humans see others agreeing, they tend to agree too,” he warned, adding that this dynamic fuels opinion bubbles and distorts perception.
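A toy model (our illustration, not anything presented at the hearing) makes the mechanism concrete: if each human adopts the majority opinion of a random sample of posts they see, injecting a modest number of bots that all voice the same view can tip the entire population toward it.

```python
import random

def simulate_consensus(n_humans=1000, n_bots=150, sample_size=10,
                       rounds=50, seed=42):
    """Toy 'manufactured consensus' model. Humans start with a random
    opinion (0 or 1); bots always voice opinion 1. Each round, every
    human samples some visible opinions and adopts the majority view.
    Returns the final share of humans agreeing with the bots."""
    rng = random.Random(seed)
    humans = [rng.randint(0, 1) for _ in range(n_humans)]
    for _ in range(rounds):
        visible = humans + [1] * n_bots  # bots flood the feed with one view
        humans = [
            1 if sum(rng.sample(visible, sample_size)) * 2 > sample_size else 0
            for _ in range(n_humans)
        ]
    return sum(humans) / n_humans

if __name__ == "__main__":
    # With bots making up roughly 13% of visible posts, the human
    # population typically converges to near-unanimous agreement.
    print(f"Share of humans agreeing with the bots: {simulate_consensus():.0%}")
```

The parameters are arbitrary, but the qualitative result is robust: a small, perfectly coordinated minority reliably drags an initially split majority to its side.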

“Teenagers today are forming emotional bonds with chatbots” – Grzegorz Lewicki, consulting analyst

Mr Lewicki cautioned that the convergence of misinformation, monetisation, and machine-driven interaction is already eroding public trust. “Like in the Middle Ages, the distinction between fake and real is becoming less important. People just seek experiences that confirm their values. We are entering a kind of digital feudalism,” he said. He urged lawmakers to scrutinise who defines the “Overton window” of AI, the shifting boundary of acceptable beliefs.

To counter these risks, Mr Lewicki proposed regulating the ideological calibration of generative AI, banning AI personas for users under 15, and developing offline services and digital literacy tools.

Enforcement, not just regulation

Panellists emphasised that legal frameworks like the DSA and the AI Act are only effective if robustly enforced. MEMO 98’s Mr Kužel outlined five urgent measures: mandatory watermarking of AI-generated content, real-time data access, protections for vulnerable groups, investment in civic infrastructure, and coordinated cross-border enforcement.
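Watermarking of the kind Mr Kužel proposed is technically feasible: published schemes such as the “green list” watermark of Kirchenbauer et al. (2023) bias a model toward a keyed pseudorandom subset of tokens, so a detector holding the key can test for that bias statistically. Below is a minimal sketch of the detection side only, assuming naive whitespace tokenisation and a shared secret key (real schemes operate on the model’s own token IDs):

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # green-token rate expected in unwatermarked text

def is_green(prev_token, token, key="secret"):
    """Pseudorandomly assign `token` to the green list, keyed on the
    secret and the preceding token, mirroring the generator's bias."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens, key="secret"):
    """z-score of the observed green-token count against what plain
    human text would show. Large positive values suggest the text was
    generated with the matching watermark key."""
    n = len(tokens) - 1  # number of (previous, current) token pairs
    if n < 1:
        return 0.0
    greens = sum(is_green(tokens[i], tokens[i + 1], key) for i in range(n))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# Example: score a suspect passage; human text should score near zero.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))
```

The policy difficulty lies less in the statistics than in mandating that every model provider embed such a signal and share detection access with regulators.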

What to Fix’s Ms Rio advised tackling monetisation systems directly. “The platforms’ financial incentives are driving harm,” she said, calling for a ban on the sale of inauthentic automation tools and tighter oversight of revenue-sharing schemes. She stressed that monetisation itself should be regulated as a core platform function.

In the same vein, Planet 9’s Mr Ben Efraim pushed for a “digital Iron Dome” for Europe, a defence system for democracy. “This isn’t censorship,” he said. MEPs echoed the urgency, pointing to real-world harms, from opinion manipulation to AI chatbots encouraging self-harm.

Can democracy survive the algorithm?

The hearing closed on a sobering note. Generative AI has become both a tool of manipulation and, for younger generations, a source of identity and truth. This raises hard questions: is it still possible to protect democratic processes in a world where fact and fiction are indistinguishable?

MEPs debated bans on chatbots such as X’s Grok, whether AI could itself be used to detect illegal content, and how to prevent AI from shaping minds without accountability. Mr Lewicki summed it up: “We are entering a kind of digital feudalism. People no longer ask what is true, they ask what feels coherent with their values.”