Europe’s health systems are embracing artificial intelligence at unprecedented speed. But the basic legal and ethical infrastructure needed to protect patients, clinicians and public trust is still missing. That is the central warning of new WHO/Europe reports, which offer the most comprehensive picture to date of how 50 countries are deploying and governing AI in health care.

The findings, drawn from the 2024–25 Survey on AI for Health, land at a moment when hospitals, ministries and public insurers across the European Region are rolling out tools for diagnostics, triage, population surveillance and patient communication. Yet as AI reaches the clinical front line, the governance needed to match its risks remains fragmented and, in many cases, absent.

“AI is already a reality for millions of health workers and patients”, said Dr Hans Henri P. Kluge, WHO Regional Director for Europe. “But without clear strategies, data privacy, legal guardrails and investment in AI literacy, we risk deepening inequities rather than reducing them”.

Health-specific AI strategies are scarce

The reports reveal a striking mismatch between adoption and preparedness. Only four countries, just eight per cent, have issued a national health-specific AI strategy. Another seven are still developing one. Meanwhile, almost two-thirds of countries are already using AI-assisted diagnostics, and half have deployed AI chatbots for patient engagement. This uneven landscape shows that while countries like Estonia, Finland and Spain are building unified data platforms, training programmes and AI-ready governance systems, others lack institutional capacity.

“We stand at a fork in the road”, said Dr Natasha Azzopardi-Muscat, Director of Health Systems at WHO/Europe. “Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted health workers and bring down health-care costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours.”

A legal vacuum is the biggest barrier

Across the Region, the most significant obstacle is the absence of clear legal frameworks. Eighty-six per cent of countries say that legal uncertainty is the primary barrier to adopting AI in health care. Despite this, only a small minority, eight per cent, have introduced liability standards to determine who is responsible when an AI system causes harm.

The lack of regulation extends across the entire lifecycle of AI in health. Fewer than half of the countries have undertaken any effort to assess gaps in existing laws. A little over half have designated regulatory agencies able to assess and approve AI systems. Far fewer have mechanisms to monitor how those systems behave after they are deployed. And only three countries have introduced legal requirements that specifically address generative AI in health settings.

Health professionals face blurred lines of responsibility. When an algorithm makes mistakes, liability is unclear, and both clinicians and patients are exposed to risk. “Without clear legal standards, clinicians may be reluctant to rely on AI tools, and patients may have no path for recourse if something goes wrong”, warned Dr David Novillo Ortiz, WHO’s Regional Advisor on Data, AI and Digital Health.

Investment lags behind promises

While governments across the Region are racing to adopt AI systems, their policy and budgetary frameworks lag behind. Nearly two-thirds of countries already use AI for diagnostics, half use AI chatbots, and more than half have identified priority domains for health-care AI. But only a quarter of countries have actually allocated funding to implement those priorities.

Almost all countries say their primary reason for adopting AI is to improve patient care, followed closely by relieving workforce pressure and boosting efficiency. Yet with insufficient funding and incomplete governance structures, many initiatives risk stalling at the pilot stage.

“Patients must remain at the centre of every decision”

For the general public, WHO identifies three core risks: patient safety, fairness of care and digital privacy. AI systems are only as strong as the data they are trained on. Biased or incomplete datasets can lead directly to unequal outcomes, from missed diagnoses to inconsistent treatment recommendations. “AI is on the verge of revolutionising health care, but its promise will only be realised if people and patients remain at the centre of every decision”, Dr Kluge concluded.

These challenges are emerging just as the EU reconsiders some of the key safeguards meant to address them. Under the omnibus proposal now in debate, the rollout of the EU AI Act’s high-risk provisions could be delayed. These are the rules designed to apply to sensitive uses such as biometric identification, policing and critical infrastructure. Originally due to take effect in August 2026, the obligations may now be pushed back to December 2027. The Commission has also floated removing the requirement for companies to register self-assessed high-risk systems.