Artificial intelligence (AI) is no longer a distant issue. It is shaping democracies, influencing elections, and challenging the rule of law. The EU is responding by acceding to a new international treaty that aims to set global rules for how the technology is developed and used. “AI can pose a real threat not only to individuals, but to democracy itself,” Paulo Cunha, the European Parliament’s rapporteur, says in an interview.

The Council of Europe Framework Convention on AI—the first legally binding agreement of its kind—introduces standards on transparency, risk management and oversight to protect fundamental rights, democracy and the rule of law. It was negotiated under the Council of Europe from 2022 onwards, with partners including the United States, Japan and Canada.

EU Perspectives spoke with MEP Paulo Cunha (EPP/PRT), co-rapporteur on the file, about how the convention complements the AI Act, its potential to shape global standards, and how it addresses risks such as electoral manipulation and disinformation.

A global layer on top of the AI Act

The EU already has the AI Act. Why does it need another regulation – and what does this convention actually add?

It brings several advantages that we consider highly relevant. The first is that it gives a global dimension to what has so far been essentially a European instrument. The AI Act is, by nature, limited to the European Union’s regulatory space. But artificial intelligence is not confined to Europe—it has a global impact, and therefore requires a regulatory response of a different magnitude.

The second difference lies in the nature of the instruments. The AI Act establishes a concrete set of rules, with specific obligations and standards. The convention, by contrast, has a broader and more structural role. I often describe it as a kind of ‘constitution for artificial intelligence’. It does not go into the same level of detail, nor does it define rights and duties in a strict sense. Instead, it creates a general normative framework—an umbrella under which different regulatory approaches can develop, whether they are territorial or thematic.


The third major difference is its scope. The convention addresses artificial intelligence across three fundamental domains: fundamental rights, democracy, and the rule of law. This is particularly innovative. Existing legislation, including the AI Act, focuses primarily on the protection of rights, which is essential. But democracy and the rule of law have not traditionally been at the centre of AI regulation. This convention places them on the same level.

That is crucial, because while we must protect individual rights from potential AI-related violations, we also need to safeguard the functioning of our democratic systems and legal order as a whole.

The recent case in Romania, with alleged external interference in an electoral process that led to the annulment of an election, is a clear warning. It shows that AI can pose a real threat not only to individuals, but to democracy itself.

Democracy, disinformation and real-world risks

When it comes to electoral manipulation, what does this convention actually change in practice? And what ensures it is not just a political declaration?

This is not merely a political declaration. It is a binding instrument. That said, it is important to understand what ‘binding’ means in this context.

It does not automatically imply sanctions such as fines or prison sentences. We are dealing with an agreement between states, which operates differently from national legal systems.

The convention establishes a normative framework that states are required to follow. That, in itself, is significant. Law often acts as a last resort—it comes into play when social norms, ethics and established practices are not sufficient to address a problem.

In relations between states, enforcement works differently. Sanctions are not applied in the same way as within a national legal system. That does not make the instrument weaker—it simply reflects the nature of international law.

AI is already being used in conflicts and disinformation campaigns. What can this convention realistically do in that context?

It is true that artificial intelligence is already being used as a tool of aggression, including in disinformation campaigns. But if we start immediately from a purely sanction-based approach, we risk discouraging participation altogether.

We are dealing with a highly complex and still evolving field. Even distinguishing between false and true information is not always straightforward. In many cases, the answer is not simply black or white—there are many grey areas.


The only way to address this effectively is through cooperation, including with those who develop the technology. Engineers, developers and companies must be part of the solution.

Ideally, we should aim for systems that incorporate safeguards by design—mechanisms that prevent harmful outcomes. If we can achieve that, it would be a major step forward. The real problem arises when systems are developed solely for scale and engagement, without considering who is affected—whether it is a child or an adult.

Could anything still block this at EU level? Do you expect resistance from member states in the Council?

I do not, honestly. The negotiations between the European Commission, Parliament and Council have been well aligned.

If we look at the level of political support, it becomes clear that this is not an ideological issue. It is not a matter of left or right. For that reason, I find it unlikely that any government would choose to block this process.

Other countries on board

This is a European initiative—so why should countries outside the EU pay attention?

Unlike the AI Act, this convention has an explicitly international scope. It is not by chance that countries such as the United States and Japan are already involved, and that others are in the process of joining.

The EU does not aim only to regulate within its own borders. There is always a broader ambition: not just to act, but to encourage others to act in the same direction. This reflects a long-standing idea of European leadership—from the industrial revolution to the development of democratic systems and human rights standards.

In that sense, the EU is not simply joining the convention for its own sake. It is using it as a tool to persuade other countries to follow. That is the real strategic objective. And that process is already underway. With this step, it is likely to gain further momentum.

The United States is on board for now. But do you really expect it to sign up in the end?

I believe so, because what is at stake here is not about limiting innovation or targeting any particular country. It is about moderating and balancing a powerful technology with the need to protect fundamental principles.

The United States also has an interest in protecting its democracy, its rule of law and its citizens. It is difficult to imagine any administration being willing to accept—through action or inaction—that fundamental rights such as data protection could be undermined in this context.

This is not a geopolitical issue, nor is it a question of East versus West, or left versus right. The level of support in the European Parliament clearly shows that this is a cross-party matter. There may be different interpretations at the margins, but at its core, this is a shared concern. For that reason, I am confident that the United States will eventually take this step.

How will this convention actually work? And who will make sure countries follow it?

There are several stages in this process. We have just completed the approval of the EU’s accession. The next phase is dissemination—promoting best practices and encouraging other countries to join. After that comes implementation.

But it is important to be realistic: this convention is not a panacea. It does not provide immediate answers to all the challenges we face today, nor to those we will identify in the future. Artificial intelligence is evolving rapidly, and there are many risks that we still do not fully understand.

The idea is to create what I previously described as an ‘umbrella effect’. In other words, a framework that raises awareness and builds a shared understanding of the challenges. We cannot solve a problem if we do not first recognise that it exists.


There is still a significant gap in terms of awareness, both among citizens and institutions. If you go out into the street or into a company today, the level of understanding of AI risks is often far from what we are discussing here. There is a long way to go in terms of perception, maturity and preparedness.

The European Union wants to be at the forefront, but it cannot solve this issue alone. This is a global phenomenon. Citizens may be protected within the EU, but that protection can disappear as soon as they cross a border. That is precisely why we need a broader, ideally global, regulatory approach.

At the moment, there is no truly effective global governance structure. The United Nations comes closest, but recent events have shown its limitations. So this will require sustained efforts in international cooperation, negotiation and awareness-building. What we have done so far is only the beginning.

AI Act delay

How does this convention relate to the delay of the AI Act’s rules for high-risk systems? Is there a risk of inconsistency that leaves citizens more exposed?

There is no direct contradiction, because we are dealing with instruments of a different nature.

The AI Act is a concrete regulatory framework, with specific deadlines, technical requirements and operational obligations. The convention, on the other hand, is broader and more structural. It establishes guiding principles that are valid independently of the AI Act’s implementation timeline.

The adjustment of deadlines in the AI Act reflects the need to ensure that the necessary technical and regulatory conditions are in place. If rules are too demanding without the capacity to implement them properly, they risk becoming ineffective in practice. So this is not about weakening regulation, but about making it applicable and credible.

At the same time, the convention plays a different role. It creates a global framework of principles that can guide different legal systems. These two approaches are complementary rather than contradictory.

As for citizens’ protection, it is important to stress that the delay does not mean a regulatory vacuum. There are already other legal instruments in force, such as the General Data Protection Regulation (GDPR) and existing EU legislation, which continue to provide safeguards.

The objective is to ensure that when the AI Act rules fully enter into force, they do so under conditions that guarantee their effectiveness.