The European Union’s AI Act places stringent demands on businesses worldwide. It requires conformity assessments, documented risk management, bias detection, and human oversight. Harsh fines await laggards. For small companies, this might just be a blessing in disguise, the Centre for European Policy claims.
Small firms the world over have begun to fret about a Brussels rulebook they did not write. The AI Act, in force since August 2024 and rolling out in phases, places stiff demands on SMEs. Potential fines of up to €15m, or three per cent of global turnover, more than double for breaches of outright prohibitions. That scale, write Anselm Küsters and Ben Waber of the Centre for European Policy, means that “what were once simple efficiency tools have become potential liabilities”.
The report, based on the authors’ work published in Harvard Business Review, warns that small and medium-sized enterprises face the sharpest squeeze. Budgets are tight; compliance officers scarce. Yet many smaller firms rely on off-the-shelf AI for chores such as hiring, customer service and credit scoring.
Under the new law even a resume-screening model counts as “high-risk” and must obey the full rigmarole from August 2026. If SMEs dodge the effort by outsourcing to costly intermediaries, they risk losing speed and autonomy. Delay is no easier: thanks to the “Brussels effect” buyers outside Europe now ask for “AI-Act-ready” assurances.
Three essentials
The authors advise preparation on three fronts. First, understand the rules as they evolve. The act already bans systems deemed “unacceptable”; it places strict transparency duties on providers of general-purpose models whose training runs pass the computational threshold of 10²⁴ FLOPs, and even harsher duties on those above 10²⁵ FLOPs. “The exact computation thresholds are still the subject of intense debate and are likely to change repeatedly,” they note; nonetheless, managers should plan as if the 2026 deadline stays.
Second, audit exposure and budget for cost. Setting up an AI quality-management system may cost €193k–330k, with annual upkeep of about €71,400. SMEs must inventory all models, classify each against the act’s risk tiers, and register high-risk systems in a new EU database. Providers that fine-tune an existing model beyond one-third of the original compute suddenly become GPAI providers themselves, with fresh duties to publish data summaries and, where risk is “systemic”, to run adversarial tests and report serious incidents.
The exact computation thresholds are still the subject of intense debate and are likely to change repeatedly. — Anselm Küsters, Ben Waber, Centre for European Policy
Legal uncertainty makes the task harder. The Commission has yet to publish guidelines on classifying high-risk systems or on overlaps with product-safety rules such as the Machinery Regulation. “Industry is unsure what constitutes ‘manipulation’ and ‘significant harm’,” the authors report. Appeals to “intended purpose” are flimsy when a chatbot can answer a hundred different queries. Mario Draghi’s recent competitiveness review lamented that more than half of European SMEs cite red tape as their biggest headache; AI rules deepen that ache.
Sandboxes are working
Third, turn compliance into advantage. Smaller firms, argue Messrs Küsters and Waber, “are not powerless”. They can move faster than incumbents and exploit measures baked into the act itself. Member states must give SMEs priority access to regulatory sandboxes—testing environments where participants gain immunity from fines while they experiment under a supervisor’s gaze. Evidence from Britain’s fintech sandbox suggests such schemes raise the odds of funding by 50 per cent and boost capital raised by 15 per cent. If AI sandboxes deliver even a fraction of that benefit, early entrants will gain.
Collective muscle helps, too. SMEs can pool costs by forming consortia to run joint bias tests and craft shared technical files. Helsinki’s Saidot and Silo AI show the appeal: both started as boutique governance shops, offered compliance tools to peers and attracted heavyweight clients—Deloitte in one case and AMD in the other. In Germany a health-tech outfit, AIcendence, used a regional digital-innovation hub to navigate rules and land public funding for diagnostic software.
If everyone must comply, how can an SME stand out? — Anselm Küsters, Ben Waber
Building compliance features from the outset saves money. The authors estimate that proactive data-governance measures cut breach costs by just over $3m. Map use-cases to the act’s risk tiers, gather diverse training data and log every prompt, weight and revision.
No mistakes forgiven
Human-in-the-loop controls stop a model spitting out biased shortlistings; thorough documentation doubles as marketing collateral. “If everyone must comply, how can an SME stand out?” they ask. One answer is to publish model cards—plain-English sheets describing scope, training data and limitations—that reassure buyers and regulators alike.
Ethics can woo customers. Consumer trust in AI has slipped from 61 per cent to 53 per cent in five years; fairness and transparency lift it. The authors quote Deloitte research showing that faulty decisions “most often impact multiple stakeholder groups”. For a small manufacturer dependent on repeat orders, one scandalous algorithmic snub could poison decades of goodwill. Proactive disclosure, by contrast, speeds procurement and may unlock premium pricing once the current hype ebbs.
Europe, for its part, should polish its tools. Messrs Küsters and Waber call on regulators to keep SME templates up to date, streamline overlaps between the AI Act and data-protection law, and subsidise open-source compliance software. They single out Small Business Standards, an EU-funded association that could bankroll attendance at standards meetings. Regulators, they argue, should “stop the clock” only if harmonised standards fall unacceptably late; otherwise clarity beats delay.
A healthier market?
The biggest danger is policy drift. Lobbyists have urged the Commission to postpone the toughest requirements, citing fears that American regulators will go light on their own tech champions. The AI Office in Brussels must show staying power. Without it, the burden will fall unevenly: thorough firms will shoulder costs while rivals cut corners until the next scandal forces action.
The idea that only Big Tech can survive is exaggerated. — Anselm Küsters, Ben Waber
The office must hold its nerve. Should SMEs seize the tools on offer, the act could yet breed a healthier market. Early movers will master conformity assessments, curate bias-free datasets and log every decision. That will not only help them dodge fines but also advertise reliability. “The idea that only Big Tech can survive is exaggerated,” the authors conclude.
That verdict may prove optimistic. But in regulatory Europe, optimism often brings its own reward: the right to keep trading while slower competitors scramble. For thousands of small firms, that might just be opportunity enough.