As Europe heads into a packed election year, a new report reveals how synthetic video tools could upend efforts to protect democratic debate. A NewsGuard investigation found that Sora 2, OpenAI’s latest text-to-video generator, produced realistic videos advancing false claims in 80 per cent of cases. With just a few words, users can create clips that mimic news reports, eyewitness footage or official statements, all based on fabricated content.
For its analysis, NewsGuard tested Sora with 20 false claims taken from its False Claims Fingerprints database, the company's continuously updated record of viral misinformation. All prompts used in the test were deliberately based on falsehoods; nevertheless, the model produced realistic videos for 16 of them, including fabricated news reports of Moldovan election officials destroying ballots, a toddler detained by U.S. border agents, and a fake corporate announcement from Coca-Cola.
False claims with details added
The researchers observed that Sora not only recreated false claims but sometimes elaborated on them. As Sofia Rubinson, a NewsGuard researcher, told EU Perspectives, all prompts were drawn from NewsGuard’s database, but the model “did sometimes add details that we did not include in the prompt”. For example, when asked to generate a video based on the false claim that Moldovan election workers were destroying ballots for pro-Russian parties, the model produced a video naming a specific town where the ballots were allegedly destroyed.
This creative extrapolation, familiar from text-based AI ‘hallucinations’, gives visual misinformation an extra layer of plausibility. OpenAI, for its part, presented Sora’s release as a responsible one: the model adds visible moving watermarks and C2PA metadata meant to identify AI-generated content. In practice, however, those protections are weak. NewsGuard found that the watermark could be removed “in approximately four minutes” using free online tools.
How to detect deepfakes?
OpenAI’s model remains in limited release, available only with an access code. Yet the implications for European democracies are immediate. The NewsGuard researcher warned that “the free model can produce videos in under five minutes, and appears to have limited guardrails to curb the use of its model to generate videos advancing provably false claims”.
The researcher also noted that “…there are many tools that can obscure a Sora video’s AI-origin to an unsuspecting viewer”. Viewers can still spot ‘tell-tale signs’, such as garbled on-screen text or blurred edges where the watermark sits, but such minor artefacts are unlikely to deter mass audiences on social media.
One real-world example came during the recent Irish presidential election campaign, when a deepfake video of the candidate Catherine Connolly falsely showed her announcing her ‘withdrawal’ from the race. The clip spread widely on social media and highlighted how realistic synthetic videos can distort public understanding in real time.
The free model can produce videos in under five minutes, and appears to have limited guardrails to curb the use of its model to generate videos advancing provably false claims. – Sofia Rubinson, researcher at NewsGuard
Such incidents raise questions about the capacity of the EU’s digital rulebook to tackle these democratic challenges. Eva Simon, Head of Tech & Rights at Liberties, said that the EU has two key legislative instruments, the Digital Services Act (DSA) and the AI Act, which “could address the challenges posed by the ‘Sora situation’”. But she warned that their impact depends on actual enforcement.
Under the DSA, Ms Simon explained, “Very Large Online Platforms (VLOPs) are obligated to mitigate systemic risks, including the manipulation of democratic processes and the dissemination of illegal disinformation. Election periods are especially vulnerable. The Irish example underscores a critical gap in enforcement and cooperation between technology companies and regulatory authorities”.
Ms Simon also pointed to transparency as one of the main missing safeguards. “Transparency for researchers is also a vital safeguard”, she said. “The DSA mandates genuine, timely, and sufficient data access for vetted researchers to examine the spread and impact of AI-generated political disinformation. However, civil society organizations including ours continue to face significant barriers to accessing this data”.
Those barriers are already visible. The Commission recently found Meta and TikTok in breach of the DSA for restricting independent researchers’ access to platform data.
The 2026 election year as a test of resilience
Going further, the AI Act is meant to anticipate these kinds of threats. It introduces transparency requirements for general-purpose AI systems, including models capable of generating realistic video, audio, or text. But for now, those safeguards remain on paper. As Ms Simon put it, “the AI Act, along with its delegated acts, also emphasizes transparency. However, relevant requirements are not in effect yet. The Act will require performing a Fundamental Rights Impact Assessment and implementing measures such as watermarking and clear identification of synthetic content”.
In practice, that means AI-generated videos are already circulating without clear accountability. Ms Simon warned that “the immediate priority should be to ensure effective enforcement, particularly concerning VLOPs and general-purpose AI models”. She added that “the European Commission must urgently finalize and enforce technical standards for content markings (e.g., metadata, digital watermarks) to ensure synthetic content is easily verifiable by platforms and users.”
Deepfakes during election periods should be classified as a ‘high-risk context’, triggering robust risk management and transparency obligations for all stakeholders involved. – Eva Simon, Head of Tech & Rights at Liberties
For Ms Simon and others in civil society, such incidents are early signs of a new information battlefield that demands accountability. “Deepfakes during election periods should be classified as a ‘high-risk context’, triggering robust risk management and transparency obligations for all stakeholders involved”, she said.
That call comes as a long list of EU countries prepare for national elections in 2026. Disinformation has long been part of digital campaigning, but AI-generated video is changing both its speed and its psychology. A clip that looks like a candidate, sounds like a candidate, and spreads quickly can change the course of an election.
