As AI chatbots enter the daily lives of millions of Europeans, who turn to them for everything from dating advice and voting recommendations to mental-health support, the risks of manipulation and dependency are growing. Debates in the EU are intensifying accordingly, especially over the dangers such interactions pose to children and teenagers.
As lawmakers seek clearer accountability and ethical boundaries, MEP Sergey Lagodinsky, Vice-Chair of the Greens/EFA Group, spoke with EU Perspectives about this issue and other digital questions, such as the AI Act, online fairness, and where Europe should draw its red lines.
You’ve joined a call for international “red lines” on harmful AI. What should those red lines look like?
There is already a global AI treaty, the Council of Europe’s 2024 Framework Convention on AI, which calls, among other things, for ensuring that activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights. The USA, Canada, and Japan are among the many signatories. We would like to see this convention strengthened, and the EU’s AI Act provides the global gold standard for doing so. In particular, the AI Act’s Article 5 prohibitions offer a model for defining these red lines.
The AI Act provides a global blueprint for strengthening the 2024 Convention. The European Commission now needs to reaffirm its leadership by ensuring the Act is properly implemented. Unfortunately, this is not happening. Standardisation for high-risk AI systems is moving too slowly; the GPAI Code of Practice is being misapplied (I have even lodged a complaint with the Ombudsman); and member states have failed to designate competent authorities. The Commission should already have initiated infringement procedures.
Responsibility
You recently hosted the event “Fake Friend – Who Is Responsible When Chatbots Turn Against Users”. With chatbots influencing vulnerable people, in some cases even contributing to suicides, what tools does the EU still lack to prevent harm?
We currently lack agreed rules for evaluating AI’s responsibility in mimicking pseudo-human relationships, and clarity about who bears that responsibility in the human world. Europe needs what I call an algorithmic duty of care for conversational systems: a binding obligation for providers to anticipate, prevent, and take responsibility for foreseeable harms. We need to reintroduce the AI Liability Directive, at least for chatbots, and define and codify conversational liability.
That means setting out the extent of a chatbot’s duty of care based on the relational risks posed by these half-synthetic interactions; defining obligations that take into account the vulnerability of the user, especially minors or people in crisis; establishing attribution principles that assign responsibility to developers, deployers, and other relevant actors; and ensuring shared liability so that while victim responsibility is recognised, chatbot providers cannot escape their co-responsibility.
We need an honest discussion about these half-synthetic relationships and their risks – MEP Sergey Lagodinsky
Also, we need an honest discussion about these half-synthetic relationships and their risks. Defining and regulating conversational liability, and AI liability more broadly, is essential, even if it goes against today’s deregulatory trend. It is necessary to prevent further catastrophic harms, particularly among teenagers and other vulnerable groups.
ChatGPT: Very Large Online Search Engine?
The Commission is assessing whether to classify ChatGPT as a Very Large Online Search Engine (VLOSE) under the DSA. Do you see this step as essential for more effective chatbot regulation in Europe?
Yes, this step is crucial. Generative AI models should be classified as VLOSEs under the DSA to trigger audit and risk-management obligations. The AI Act mainly imposes transparency requirements for downstream uses, while the Code of Practice remains voluntary until the Commission issues a standardisation request, which has not yet been done.
Given that OpenAI recently revealed that 1.2 million people express suicidal intent in ChatGPT conversations every week, and that “in rare cases, the model may not behave as intended,” risk assessments going beyond the AI Act’s rules on GPAI model providers are vital in this domain. The DSA framework is therefore indispensable, and I urge the Commission to clarify that OpenAI is a VLOSE under the DSA.
California has become the first US state to adopt laws regulating AI chatbots, requiring them to verify users’ ages and display suicide warnings. Should Europe follow?
Not really. I believe we must go further and address manipulative design features directly.
Age-verification measures are, at this stage, ineffective, privacy-intrusive, and easily circumvented — and the same applies to age-estimation techniques. Warning messages are a positive idea, though I doubt they will make a major difference. I still support them, along with suicide-prevention protocols. But the real game-changer could be the upcoming Digital Fairness Act, which should tackle addictive and manipulative chatbot design.
The real game-changer could be the upcoming Digital Fairness Act, which should tackle addictive and manipulative chatbot design – MEP Sergey Lagodinsky
Chatbots today are not just sources of information; they have become sources of relationships. They simulate emotional and conversational bonds that feel authentic. This illusion, when mishandled, exploited, or neglected, can cause real-world harm.
I call this a half-synthetic relationship: human on one side, synthetic on the other. Market pressure, science-fiction fantasies, and the human longing for connection often blur that line, making the artificial seem real.
As Luiza Jarovsky has shown, chatbots create this illusion through persistent memory that stores personal details, anthropomorphic empathy that mirrors user emotions, agreeability that avoids moral friction, and engagement loops that encourage long interactions. We need to address these manipulative design features directly.
Democratic discourse under threat
You’ve recently warned the Commission about algorithmic manipulation, from TikTok’s “MAGA algorithm” to Grokipedia’s alleged bias. How do such models threaten democratic discourse in Europe?
These AI models can be deliberately tweaked to promote one version of truth inside societies. So what happens if Europe becomes dependent on foreign providers whose systems shape our social and informational infrastructure? If their outputs are skewed by external manipulation, we risk systemic, post-truth distortion.
As AI grows more powerful, it won’t just process information. It will produce reality. When every conversation, image, or “fact” can be synthetic, manipulation becomes systemic. The result is collective disorientation and an information collapse, where truth must compete with convincingly generated lies.
As AI grows more powerful, it won’t just process information. It will produce reality – MEP Sergey Lagodinsky
AI-driven disinformation is a growing threat. Generative systems can foster dependency, echo chambers, and covert censorship disguised as free speech. We already see examples: China’s DeepSeek shows authoritarian exploitation of AI, while U.S. providers face political pressure to “stop woke AI.”
As these systems become embedded in daily life, they create structural vulnerabilities. Malicious actors — as we’ve seen in Russian disinformation campaigns — will inevitably exploit them. If we rely on such models for decision-making, information, or governance, their bias and manipulation become national-security risks. This also raises questions about the future of journalism, as AI increasingly competes with traditional media.
The LIBE Committee votes this week on expanding Europol’s role against migrant smuggling. Would using AI tools like facial recognition or predictive analytics cross your red lines?
Yes, absolutely — this is one of the red lines. As I mentioned earlier, the AI Act’s Article 5 prohibitions are clear. Biometric categorisation that infers sensitive attributes such as race, political opinions, or religion should not be used except in lawful cases. Profiling-based assessments of criminal risk are only acceptable when they support human assessments grounded in objective facts. Compiling facial-recognition databases from untargeted image scraping must be banned.
And when it comes to real-time remote biometric identification in public spaces, the exceptions must remain narrow — limited to searches for missing or trafficked persons, the prevention of imminent life-threatening or terrorist acts, or the identification of suspects in serious crimes such as murder, rape, or organised crime.
We also need to ensure that the AI Act’s national-security exception is not abused. We’ve already seen worrying precedents, such as the Pegasus spyware scandal, where national security was invoked to bypass EU oversight and human-rights obligations. The Act’s exemption applies only to systems developed purely for national-security purposes — not to dual-use technologies operating outside that context. Strict supervision by both the Commission and national authorities is essential to guarantee uniform enforcement and the protection of fundamental rights.