Artificial intelligence companies OpenAI, xAI, and Mistral have received a stark warning from the Dutch data protection authority. The Autoriteit Persoonsgegevens is concerned their chatbots are potential sources of biased political advice ahead of Wednesday’s parliamentary elections in the Netherlands.
The testing programme
The news follows the publication of a special report examining the risks of using chatbots as voting aids. The AP tested four prominent chatbots: ChatGPT (OpenAI), Grok (xAI), Le Chat (Mistral), and Gemini (Google). The findings revealed that these chatbots provided recommendations that heavily favoured certain political parties. “The advice is strongly biased toward a small number of parties,” stated the AP, raising concerns about the influence such systems could exert on democratic processes. With elections on the horizon, the report’s timing underscores the urgency of compliance with European Union regulations.
Significantly, all four model providers are signatories to the EU’s code of practice for general-purpose AI models, which commits them to address systemic risks such as harmful manipulation and to ensure their systems do not undermine democratic processes. Three of the models studied (Grok 4 Fast, Mistral 3.1 Medium, and GPT-5) were released after 2 August, the date from which the AI Act’s obligations for general-purpose AI models apply, placing them under increased scrutiny. The legislation also mandates stringent requirements for AI systems identified as high-risk.
Distorted recommendations
As the AP’s report indicates, the behaviour of these chatbots raises substantial concerns. “We found evidence of a ‘vacuum cleaner effect’. Profiles in the left-progressive corner largely went to GroenLinks-PvdA. Those in the right-conservative corner to the PVV,” the report stated. With parties in the political centre rarely recommended, users seeking impartial electoral guidance face a real risk of receiving advice that does not align with their actual political beliefs.
Furthermore, the study revealed that most parties each featured as the first-choice suggestion in fewer than five per cent of cases. GroenLinks-PvdA and the PVV dominated the chatbots’ top-three recommendations, together accounting for 55 per cent of the total. Some parties, such as the SGP and Denk, barely featured at all. Such bias can distort political representation and complicate the already challenging landscape of Dutch parliamentary elections.
The report serves as an early warning not only for these chatbot producers but for AI firms more broadly. The lack of transparency in how chatbots generate recommendations compounds the problem: language models can carry subtle biases absorbed from the vast datasets on which they are trained. The AP emphasised that biased voting advice could undermine trust in democratic processes. “The distortion and lack of transparency confirm that AI-chatbots are unsuitable for voting advice,” the authority concluded.
High‑risk under the EU AI Act
Under the EU AI Act, chatbots classified as high-risk are subject to strict regulations. Although the Act’s enforcement powers do not take effect until next year, the AP’s findings could open avenues for private litigation.
Any failure to comply with the new standards exposes these companies to potential liability. It also remains unclear whether the models meet the computational threshold at which the Act classifies a general-purpose model as posing systemic risk, as none of the providers has disclosed such figures publicly.
In response to the findings, the AP reiterated that AI-chatbot producers must implement measures to prevent their systems from dispensing voting advice. The authority also cautioned users to be aware of the inaccuracies they may encounter. “Voters often receive incorrect advice without understanding why,” the report notes. It urges the public to rely on verified sources such as established voting aids, reputable news outlets, and official party platforms.
The report further underscores that AI models must conform to the code of practice established by the EU, which treats the technology as capable of causing harm if not properly regulated. As AI technology evolves, companies are expected to keep refining their products and processes to maintain alignment with new regulations.
The implications of these findings extend beyond the Netherlands and may set a precedent for other jurisdictions examining the ethical use of AI in political contexts. As AI systems become increasingly pervasive, the focus on ensuring their responsible use will likely intensify.
The research aligns with broader scrutiny of AI technologies. Governments and regulatory bodies worldwide are grappling with how to approach the rapidly evolving landscape of artificial intelligence. With the rise of chatbots as a source of information, their role in disseminating electoral advice will continue to face intense examination. AI developers must navigate this landscape carefully to ensure compliance with emerging legislation.
Transparency and liability
OpenAI, xAI, and Mistral now find themselves at a crossroads. With a compliance deadline ahead, they must act swiftly to address the concerns raised by the AP. “The upcoming elections serve as a crucial litmus test for these technologies and their ability to operate ethically within political parameters,” an AP spokesperson noted.
The robustness of these compliance processes will determine the future of AI chatbots in political contexts, either cementing their role as trustworthy tools or undermining their credibility entirely. As AI technologies increasingly interact with democratic processes, the requirement for transparency and accountability remains paramount. Failure to meet these standards could lead to significant reputational and financial repercussions in the long term.
In conclusion, the findings from the Dutch data protection authority highlight the pressing need for AI technologies, particularly chatbots, to operate responsibly in political contexts. As the 2025 elections approach, attention will remain on how these systems evolve and what they mean for democratic engagement. The coming years will likely determine the trajectory of AI technologies in voter assistance and broader electoral processes across Europe and beyond.
