Europe May Have Just Regulated Itself Into Irrelevance
Image generated with ChatGPT, 2025
By Suzanna Stepanyan
04/04/2025
While OpenAI stirs talk of GPT-5 and China deploys AI-driven content farms at industrial scale, the European Union has chosen a different path: regulate now, innovate... eventually.
Earlier this year, Brussels passed the AI Act, a first-of-its-kind law meant to set global standards for artificial intelligence. On paper, it's a bold move: a continent staking its claim as the moral steward of a technology that could transform everything from war to weather forecasts. In reality, however, the startup ecosystem is anxious, and nowhere is that anxiety clearer than at Mistral AI, the French company once hailed as Europe's answer to OpenAI.
Despite raising over $600 million, building powerful open-source language models, and receiving glowing coverage from global media, Mistral is struggling. Not because of a lack of talent or ambition, but because the EU is making it almost impossible to build world-class AI within its borders.
Image generated with Canva, 2025
Ethics on Paper, Innovation in Practice
The AI Act sorts AI systems into four categories based on their potential to cause harm: unacceptable, high-risk, limited, and minimal risk. Systems that score or rank citizens socially are banned outright. Biometric surveillance, while heavily restricted, is still permitted under vague "exceptional circumstances" such as public security. High-risk systems, such as AI used in education, healthcare, or law enforcement, must undergo rigorous compliance testing, documentation, and human oversight.
At first glance, this seems like a measured approach to frontier technology. In practice, it's a labyrinth, one that threatens to paralyze innovation in the very places the EU claims it wants to protect.
Europe's most promising startups, including Mistral AI in France and Aleph Alpha in Germany, have already begun sounding the alarm. Mistral, which recently released one of the most powerful open-source large language models in the world, has warned that the current regulatory environment may force it to relocate research abroad. Aleph Alpha has likewise expressed concern that the cost of compliance under the AI Act falls disproportionately on small and mid-sized companies, potentially driving AI development to the United States or Asia.
The irony is brutal: a law designed to protect European sovereignty could end up outsourcing it.
Mistral AI: Europe's Knight in Coded Armor
Founded in 2023, Mistral AI was a source of national pride. French President Emmanuel Macron championed it as a pillar of "technological sovereignty." Its open-source models were fast, multilingual, and competitive with early GPT variants. Unlike U.S. companies guarded by proprietary walls, Mistral stood for transparency.
Now, however, it finds itself trapped between its ideals and its geography.
CEO Arthur Mensch has been blunt: the AI Act is "far from ideal." His biggest concern? That the law overregulates open-source models, imposing documentation and testing requirements that small firms cannot realistically meet, especially when their models can be repurposed downstream by others.
While Mistral fills out compliance paperwork, OpenAI launches plug-ins, fine-tunes enterprise tools, and leads the race to AGI. In China, DeepSeek and iFlyTek are mass-deploying large language models across sectors, often with direct state backing. Europe, meanwhile, is busy debating footnotes.
Talk the Talk
The EU loves to invoke the "Brussels Effect": its ability to set global norms through regulation. The General Data Protection Regulation (GDPR) was the textbook case; privacy rules passed in Brussels were adopted around the world.
Skeptics may counter that AI isn't like data privacy: you can't enforce ethical norms on systems you don't build, and you can't set technical standards if all the technical breakthroughs are happening in San Francisco or Shenzhen.
But that's exactly the deeper problem here.
The EU is attempting to regulate its way into leadership in a domain where it holds no dominant players, no homegrown giants, and no sovereign chips. What it does have is ambition — and an increasingly burdensome rulebook.
The contradictions don't end there.
Even more troubling is the Act's hypocrisy. While private companies face massive compliance costs and legal liabilities, EU member states have carved out exceptions for themselves. Although the Act restricts real-time facial recognition in public spaces, for example, it allows national governments to deploy it in "strictly necessary" scenarios, a phrase vague enough to drive a surveillance van through.
France has already capitalized on this loophole. Ahead of the 2024 Paris Olympics, French authorities greenlit an AI-powered video surveillance system to monitor large crowds. The move sparked backlash from civil rights groups, but under the AI Act's flexible state exceptions, it was perfectly legal.
So the message is clear: private firms get entangled in red tape, while states sidestep scrutiny under the banner of national security.
If Brussels tries to set the rules of the AI age without building the tools, it risks becoming a regulatory island, ultimately irrelevant to the very companies and systems it seeks to influence.
The Infrastructure Problem
Even if Mistral survives the regulatory constraints, another problem persists.
Training competitive large language models requires massive GPU clusters and optimized data centers. In the U.S., firms like Anthropic and OpenAI benefit from partnerships with Microsoft, Amazon, and Nvidia. In China, the state is building exascale compute clusters and subsidizing AI infrastructure.
Mistral? It’s planning to build its own data center — an expensive, years-long effort that underscores the absence of a coordinated European AI strategy. The EU wants to lead in ethical AI, but it hasn’t built the scaffolding to lead in AI at all.
Regulating AI is not a fringe concern: the technology is powerful and can be misused to discriminate, surveil, and deceive. In recent months, it has shown it can wreak havoc. In the lead-up to the Slovak elections, AI-generated deepfake audio of a candidate plotting election fraud went viral. Fact-checkers scrambled to respond, but the damage was done. The clip played directly into populist narratives and likely influenced voter sentiment.
Meanwhile, Russian disinformation campaigns increasingly use generative AI to mass-produce false news stories and spread them through fake social media accounts in the Baltics, Poland, and the Balkans. In the U.S., a deepfake robocall mimicking Joe Biden's voice was deployed in New Hampshire to suppress voter turnout. The line between real and fake is crumbling faster than regulators can react.
In this context, Europe’s instinct to establish guardrails is not misplaced. But the AI Act’s method of enforcement — a rules-first, innovation-later approach — is out of sync with the pace and nature of the technology itself. Regulation must be proportionate, flexible, and grounded in technical literacy. The AI Act, as it stands, is often none of those things. It burdens innovators while failing to fund them. It risks turning Europe into a place where AI is talked about more than it is built.
You Can’t Regulate What You Can’t Build
Europe’s ambition to lead in ethical AI is admirable. But ethics don’t materialize in a vacuum. They are embedded in the systems we build, the datasets we curate, the applications we prioritize. If the only cutting-edge AI available to European citizens comes from American or Chinese platforms, then ethical regulation becomes meaningless. It's like trying to teach someone else’s robot how to behave.
The AI Act may be the first of its kind, but unless it's matched by strategic investment, regulatory humility, and a willingness to evolve, it risks becoming a cautionary tale. In the AI age, the future won't be written by those who write the most laws; it will be written by those who write the code.
SOURCES:
“AI Act Enters into Force.” European Commission, 1 Aug. 2024, commission.europa.eu/news/ai-act-enters-force-2024-08-01_en.
“AI Act.” Shaping Europe’s Digital Future, European Commission, 18 Feb. 2025, digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
Bradford, Anu. “The Brussels Effect: How the European Union Rules the World.” Scholarship Archive, scholarship.law.columbia.edu/books/232/. Accessed 3 Apr. 2025.
Cadena SER, Agencia EFE. “Macron Anuncia Que Francia Invertirá 109.000 Millones de Euros En Inteligencia Artificial Durante Los Próximos Años.” Cadena SER, 9 Feb. 2025, cadenaser.com/nacional/2025/02/09/macron-anuncia-que-francia-invertira-109000-millones-de-euros-en-inteligencia-artificial-durante-los-proximos-anos-cadena-ser/.
“Disinfo Bulletin – Issue n. 2.” Election Disinformation in Slovakia, European Commission, 2024, ec.europa.eu/newsroom/edmo/newsletter-archives/52231.
“The EU Artificial Intelligence Act.” Future of Life Institute, artificialintelligenceact.eu/. Accessed 3 Apr. 2025.
Loeve, Florence. “French Startup Mistral Rolls out App in Escalating AI Race.” Reuters, 6 Feb. 2025, www.reuters.com/technology/artificial-intelligence/french-startup-mistral-rolls-out-app-escalating-ai-race-2025-02-06/.
Moens, Barbara, and Melissa Heikkilä. “EU Lawmakers Warn against ‘Dangerous’ Moves to Water down AI Rules.” Financial Times, 25 Mar. 2025, www.ft.com/content/9051af42-ce3f-4de1-9e68-4e0c1d1de5b5.
Sterling, Toby. “EU Commission Looks to Cut Overlap in Tech Directives – Virkkunen.” Reuters, 27 Mar. 2025, www.reuters.com/technology/eu-commission-looks-cut-overlap-tech-directives-virkkunen-2025-03-27/.
“Mistral AI to Invest Billions Building Data Centre in France.” BeBeez International, 10 Feb. 2025, bebeez.eu/2025/02/10/mistral-ai-to-invest-billions-building-data-centre-in-france/.