Artificial intelligence is already part of European healthcare, used to analyse scans, flag diseases, and support diagnoses. Most of these systems are treated as medical devices under the EU’s Medical Devices Regulation or In Vitro Diagnostic Regulation. Because of the influence such tools can have on patients’ lives, they can be classified as ‘high risk’ under the European Artificial Intelligence Act.

The risks are not theoretical. A Civio investigation in Spain found that Quantus Skin, a melanoma-detection tool, failed to identify one in three cancers. Trained mainly on images of white patients, it performed even worse on darker skin. The case showed how AI tools in medicine can produce discriminatory outcomes that cut against the very rights the EU’s rules are meant to protect.

What does ‘high risk’ mean for health devices?

The AI Act follows a risk-based approach: the higher the potential harm to society, the stricter the rules. At the top of this ladder sit ‘high-risk’ systems, which must comply with tougher requirements on data quality, transparency, human oversight and risk management before they can be used.

Healthcare is one of the fields where those rules bite hardest. The Act defines AI broadly, as “a machine-based system” that “infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

In practice, that definition captures many forms of software used for medical purposes. According to guidance jointly issued by the AIB (Artificial Intelligence Board) and the Medical Device Coordination Group (MDCG), medical device software is any programme “intended, either alone or in combination, to fulfil a medical purpose”.

Nevertheless, not all medical software is automatically treated as high-risk by the AI Act. “If it qualifies as a medical device under the Medical Devices Regulation, and this regulation requires a notified body to be involved in the certification process for its commercialisation (CE marking), then we are also dealing with a high-risk AI system according to the AI Act”, said Guillermo Lazcoz, a postdoctoral researcher at the University of the Basque Country.

The AIB–MDCG guidance makes the same point. “The Medical Devices Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) requirements address risks related to medical device software, however, they do not explicitly address risks specific to AI systems. The AIA [Artificial Intelligence Act] complements the MDR/IVDR by introducing requirements to address hazards and risks for health, safety and fundamental rights specific to AI systems”.
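
To make that rule concrete, it can be sketched in a few lines of code. The example below is purely illustrative: the class and field names are assumptions made for this article, not terms defined in the MDR, the IVDR or the AI Act.

```python
# Illustrative sketch only: encodes the classification rule described above,
# i.e. a medical AI system is treated as high-risk under the AI Act when the
# MDR/IVDR already requires a notified body for its CE marking.
# All names here are assumptions for this example, not regulatory terms.

from dataclasses import dataclass


@dataclass
class MedicalAISoftware:
    qualifies_as_medical_device: bool  # falls under the MDR or IVDR
    requires_notified_body: bool       # notified body involved in CE marking


def is_high_risk_under_ai_act(software: MedicalAISoftware) -> bool:
    """Apply the rule described by Lazcoz and the AIB-MDCG guidance."""
    return software.qualifies_as_medical_device and software.requires_notified_body


# Example: a diagnostic tool whose CE marking involves a notified body
tool = MedicalAISoftware(qualifies_as_medical_device=True, requires_notified_body=True)
print(is_high_risk_under_ai_act(tool))  # True
```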

Data, bias, and fundamental rights

If a medical tool is classified as high-risk, stricter data obligations follow. Training datasets for high-risk systems must be free from biases that could endanger health or undermine fundamental rights. Lazcoz is clear: “Article 10 includes data governance as one of the mandatory requirements for high-risk systems. This requirement includes, among other things, the adoption of measures to detect, prevent and mitigate potential biases”.

This requirement includes, among other things, the adoption of measures to detect, prevent and mitigate potential biases – Guillermo Lazcoz, postdoctoral researcher

A Commission official echoed this point. “Part of these requirements is a risk management system which includes the identification, estimation, evaluation and management of risks to health, safety and fundamental rights”, they said. “Furthermore, the EU AI Act specifically requires that the datasets used for high-risk AI systems detect, prevent and mitigate ‘possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law’”.
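
What “detecting bias” means in practice can be illustrated with a simple audit: measuring how often a model catches real cancers in each patient group and comparing the results. The sketch below is hypothetical and not drawn from Quantus Skin; the skin-tone groups, the records and the 80 per cent alert threshold are all assumptions made for illustration.

```python
# Hypothetical bias audit: compare a diagnostic model's sensitivity
# (true-positive rate on real cancers) across patient subgroups.
# Groups, records and the 0.8 threshold are illustrative assumptions,
# not data from Quantus Skin or any real device.

from collections import defaultdict


def sensitivity_by_group(records):
    """Return per-group sensitivity for the positive ('melanoma') cases."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0})
    for group, truth, prediction in records:
        if truth == "melanoma":  # only real cancers count towards sensitivity
            key = "tp" if prediction == "melanoma" else "fn"
            counts[group][key] += 1
    return {
        group: c["tp"] / (c["tp"] + c["fn"])
        for group, c in counts.items()
        if c["tp"] + c["fn"] > 0
    }


# (skin-tone group, ground truth, model output) -- invented evaluation records
records = [
    ("lighter skin", "melanoma", "melanoma"),
    ("lighter skin", "melanoma", "melanoma"),
    ("lighter skin", "melanoma", "melanoma"),
    ("darker skin", "melanoma", "benign"),
    ("darker skin", "melanoma", "melanoma"),
    ("darker skin", "melanoma", "benign"),
]

for group, tpr in sensitivity_by_group(records).items():
    flag = "  <- review for possible bias" if tpr < 0.8 else ""
    print(f"{group}: sensitivity {tpr:.0%}{flag}")
```

A real audit would run over a full validation set with clinically agreed thresholds, but the principle is the same: performance has to be broken down by group before a bias of this kind becomes visible.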

A recent Civio investigation showed why this layered regulation matters. It reported that a diagnostic algorithm used in the Basque Country and designed to detect melanomas failed to correctly identify one in three cases of skin cancer. More troubling still, the tool in question, Quantus Skin, had been trained mainly on images of white patients, effectively “erasing” those with darker skin tones from its database.

An algorithm that misses cancers already poses a safety risk. One that performs worse on darker skin adds a second layer of discrimination, cutting against the fundamental rights the AI Act explicitly aims to protect. And Quantus Skin is not an experimental tool. According to the manufacturer’s website, it is a CE-marked medical device.

Who checks and who is accountable?

Still, legislation from Brussels will only be as effective as its enforcement across the Union, and here member states retain a crucial role. “National and regional legislators have the power, especially in the field of public healthcare, to increase controls on AI systems used in different hospitals”, Lazcoz noted. He pointed out that Spanish authorities, for example, could use their own patient databases to verify that algorithms deployed in public hospitals do not produce discriminatory outcomes.

National and regional legislators have the power, especially in the field of public healthcare, to increase controls on AI systems used in different hospitals. – Guillermo Lazcoz, postdoctoral researcher

Such powers will become even more relevant as the Act’s next provisions phase in. The obligations for high-risk systems apply from August 2026 in some cases, and from August 2027 in others. For now, there is a grey zone. Patients already face risks, while regulators cannot yet enforce the Act’s stricter safeguards.

Still, enforcement of the new provisions under the AI Act remains politically contested. Mario Draghi has called for a pause in rolling out the next stage of the legislation, arguing: “But the next stage—covering high-risk AI systems in areas like critical infrastructure and health—must be proportionate and support innovation and development. In my view, implementation of this stage should be paused until we better understand the drawbacks”.

Innovation burdens: smaller firms suffer

For developers, especially smaller firms, the regulatory landscape is starting to feel like a minefield. Already navigating the demands of the MDR and the IVDR, they now face the added layer of the AI Act.

MedTech Europe has warned that “over 70 per cent of IVD and MD manufacturers had to allocate more resources to regulatory compliance efforts”. It stresses that “layering requirements on top of other requirements” has a cumulative effect that can be devastating, especially for SMEs.

Along the same lines, a market research report on AI for diagnosis shows how difficult it can be for smaller players to keep pace. The study notes that “smaller healthcare facilities often find it difficult to allocate resources… leading to unequal adoption across regions”. The same report highlights regulation as an obstacle: the “regulatory environment in France presents one of the most significant challenges to the adoption of AI diagnostics”. Unlike traditional diagnostic technologies, AI systems must undergo extensive evaluations to ensure reliability, transparency, and patient safety.

Regulators insist they are trying to ease the load. Guidance from the AIB–MDCG suggests AI manufacturers “may include the elements of the quality management system provided by the AIA as part of the existing quality management system provided by the MDR and IVDR”. The aim is to “avoid unnecessary administrative burden”.

The human factor

Patient groups and watchdogs take the opposite view. For them, regulation is not a brake but a safeguard. The European Patients’ Forum has warned that “biased and unrepresentative data” in medical AI could “further perpetuate healthcare disparities, discrimination, unequal treatment, and unequal access to healthcare”. The WHO has issued a similar warning: “the data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness”.

The consequences are visible. Civio revealed that these failures risk sending patients with cancer home falsely reassured by a negative result. Such false negatives pose a real danger to health: they can delay treatment, with potentially fatal outcomes for those whose cancers go undetected.

Other recent tests of AI in clinical settings point to the same tension. In Shanghai, a public competition pitted radiologists against an AI system in analysing chest X-rays. The algorithm delivered results faster than the doctors, but it overlooked diagnoses that the radiologists caught, and its reports lacked the empathy of the human assessments.