Is Europe Censoring Too Much? Or Just Enough?

Canva Generated Image, 2025

DISCLAIMER: all opinions in this article reflect solely the views of the author, not the position of Stand Up For Europe.

By Suzanna Stepanyan   

28/02/2025 

 

Imagine scrolling through your favorite social media feed, about to comment on a hot political debate, only to find out your post has been removed. Not because it was hateful or illegal… but because it didn’t fit the platform’s “community guidelines.”

Now, picture this happening at a national level. Imagine that this is not just the policy of a private social media company, but a government-enforced mandate that determines what can and cannot be said in the digital public space.

 

This concern was highlighted by U.S. Vice President J.D. Vance at the Munich Security Conference, where he warned that European nations were actively suppressing free speech under the claim of combating misinformation. His claim was striking: he argued that European censorship poses a greater threat to democracy than external adversaries such as Russia or China. While some dismissed his remarks as an exaggeration, they ignited a broader debate about the balance between regulating harmful content and preserving the fundamental right to free expression. In an era where misinformation spreads at unprecedented speed and AI-generated content can deceive millions, the tension between regulation and freedom has never been more relevant.

 

Is Europe truly overstepping by limiting speech, or are these measures a necessary defense against digital chaos? 

 

Unlike the United States, where the First Amendment provides nearly absolute protection for free speech, many European nations have long believed that some forms of expression pose legitimate dangers to public order and democratic stability. As a result, their laws have been more proactive in restricting certain types of speech, particularly hate speech, threats of violence, and misinformation that could undermine democratic institutions. This approach has gained even more traction in recent years with the rise of social media platforms that have the power to amplify false narratives within seconds.

 

One of the most ambitious legislative efforts at the EU level to combat digital misinformation is the Digital Services Act (DSA), which came into full effect in February 2024. As explicitly stated by the Commission, the DSA aims to regulate online intermediaries and platforms such as social networks, content-sharing platforms, marketplaces, and app stores. The primary objective is to prevent the illegal and harmful use of these platforms as well as the spread of disinformation. 

 

Despite these restrictions, the Commission claims that the DSA “fosters innovation, growth and competitiveness” by rebalancing the digital ecosystem “according to European values”. Under the DSA, major platforms such as Facebook, X, Instagram, and TikTok are required to identify and remove hate speech and misinformation, hold themselves accountable for the content they allow, and implement safeguards to create a more controlled digital environment.

 

The Case for Digital Regulation 

Supporters of the DSA argue that these regulations are not just necessary but long overdue. The past decade has shown how unchecked misinformation can interfere with elections, fuel social unrest, and distort public discourse. Especially now, when AI-generated content can fabricate realistic videos and images, even cautious citizens may struggle to distinguish fact from fiction. Proponents of strict digital regulation argue that governments must ensure that democracy is not undermined by the weaponization of false information.

 

According to an impact assessment prepared by the European Commission services, the DSA plays a crucial role in creating a public oversight mechanism for online platforms, especially for Very Large Online Platforms (VLOPs), those reaching more than 10% of the EU's population (roughly 45 million users). In addition, through the use of “trusted flaggers”, the DSA requires online platforms to put in place measures against illegal services, content, and harmful activities, making the reporting process for illegal content much easier.

 

Moreover, the DSA includes protections that go beyond misinformation. It bans advertising targeted at minors, as well as profiling users based on sensitive categories of personal data such as sexual orientation, ethnicity, ideology, or race. These provisions contribute to a safer digital environment overall.

 

However, critics of the DSA and similar policies warn that such measures, while well-intentioned, can easily overreach. If regulations are not carefully implemented, they could end up censoring not only misinformation and harmful content but also legitimate speech that challenges mainstream narratives. To create a safer online space, are European governments risking the suppression of lawful opinions?

 

When Censorship Becomes Overreach

Critics argue that Europe’s crackdown on online content doesn’t just target fake news… it sweeps up legal speech too.

 

While the intention behind media regulations like the DSA is to prevent the spread of dangerous content, studies suggest that these laws often result in the suppression of lawful speech. A 2024 study by The Future of Free Speech at Vanderbilt University analyzed removed comments from major Facebook pages and YouTube channels in Germany, France, and Sweden. It found that between 87.5% and 99.7% of deleted comments were legally permissible. Moreover, approximately 56% of the removed comments contained general expressions of personal opinion rather than hate speech, illegal content, or spam.

The Future of Free Speech, 2024

Senior research fellow Ioanna Alkiviadou, who led the study, noted: “Not only does legal speech make up most of the deleted comments, but a majority of those comments contain opinions on important issues. The internet cannot remain a bastion of open discourse when only the most innocuous speech passes through moderators.” This raises concerns that overzealous content moderation may suppress meaningful discussions that challenge dominant viewpoints.

Moreover, one of the main reasons European governments have pushed for stricter regulation is the growing influence of artificial intelligence in creating and spreading disinformation. AI-generated content, from deepfake videos to entirely fabricated news articles, has made it increasingly difficult to distinguish what is real from what is machine-made. Unlike traditional misinformation, which often relies on human manipulation of facts, AI can produce false information with alarming realism.

Ironically, though AI is seen as a threat, platforms often rely on AI-based moderation tools rather than human moderators, who suffer from poor working conditions and prolonged exposure to traumatic content. These tools may be efficient, but they are deeply flawed: they struggle with context, especially when identifying humor or satire. The result is content being taken down unnecessarily, particularly documentation of genocides or war crimes, which AI tends to flag as harmful or explicit, according to a 2022 US Federal Trade Commission report to Congress cited in a December 2024 Parliamentary Assembly report on regulating content moderation.

AI-Generated Image, 2025 (ChatGPT)

 

Another issue is the vague language used in the DSA. While the legislation seeks to protect against “harmful” content, it lacks a clear definition of what qualifies as such. This ambiguity leaves VLOPs with significant discretion in deciding what counts as “harmful” or “illegal”, while pressuring them to over-censor content to avoid penalties: fines that can reach up to 6% of global annual revenue, according to reports by Alliance Defending Freedom International and The Institute of International and European Affairs.

Though it may seem harmless, or even helpful, on the surface, the DSA essentially hands power to those who are already influential, namely the operators of very large online platforms (VLOPs) and those in charge of media outlets, to shape the content seen by EU citizens into what benefits them, while declaring anything else ‘dangerous’ or ‘harmful’.

To quote the Commission, one of the key goals of the DSA is to rebalance the digital ecosystem “according to European values”. This is concerning, as it seeks to impose a unified ideology that suits the interests of those in power, masked as a desire for ‘democratic stability’. To define a set of ‘European values’ is to favor the beliefs and opinions that coincide with the majority view, which not only decreases competitiveness but also diminishes minority viewpoints and those not entirely favored by the majority.

Beyond the favoring of viewpoints, this also means that large platforms and companies now hold the power to decide what fits under the ‘European values’ umbrella and what does not, ultimately helping to shape the very definition and structure of those “European values”.

I do believe that hateful content, such as bullying, or content posted spitefully to fuel harmful behavior, should be monitored and censored to a certain degree… but I also believe in personal autonomy. It is important to have access to all viewpoints and opinions, not just those that fit “European values” and are deemed politically acceptable by VLOPs or those in power. Some views may be radical, but it should be up to users to hear all sides of the story and decide for themselves where they stand.

Vice President Vance has been one of the most vocal critics of European media regulation, warning that excessive censorship creates an environment where only politically approved viewpoints can be expressed. He has argued that Europe is on the path to creating what he calls a “dictatorship of liberals who are not liberal at all”, where speech is restricted not on the basis of legality but of political acceptability. This, he claims, is a greater danger to democracy than external threats, as it undermines the ability of citizens to engage in open and fruitful debate.

Free speech advocates share similar concerns: that the line between “moderation” and “censorship” is increasingly blurry. They argue that while removing genuinely harmful content is justifiable, regulations that silence unpopular opinions or inconvenient discoveries set a dangerous precedent. Governments and tech platforms should not be the ultimate arbiters of what is considered true or acceptable. In a democratic society, the public should have the right to engage in debate, even if that debate includes controversial or offensive viewpoints.

Historically, radical ideas—such as same-sex relationships or the abolition of slavery—were once deemed too controversial to be discussed openly. Yet, progress was made precisely because individuals were able to challenge prevailing norms. Today, if governments or corporations dictate which viewpoints are permissible, society risks repeating past mistakes by suppressing voices that may one day be recognized as crucial to progress.

Overall, though the intentions behind the DSA were genuine, I believe it overreaches and opens the door to more extensive censorship in the future, at the expense of citizens’ freedoms. To accept a strategically curated stream of information is to give up our ability to tell right from wrong and to voice the minority experience. The means of achieving democratic stability should not, in my opinion, sacrifice the ability to engage in discourse or, more importantly, the fundamental right to free expression.






Sources

Allen, Seamus. “The Digital Services Act: Censorship Risks for Europe.” The Institute of International and European Affairs, Dec. 2024, www.iiea.com/images/uploads/resources/The_Digital_Services_Act.pdf.

Bose, Nandita, and Donia Chiacu. “In Munich, Vance Accuses European Politicians of Censoring Free Speech.” Reuters, 14 Feb. 2025, www.reuters.com/world/europe/vance-uses-munich-speech-criticize-europe-censoring-free-speech-2025-02-14/.

Grippo, Valentina, editor. Regulating Content Moderation on Social Media to Safeguard ..., Council of Europe, 4 Dec. 2024, rm.coe.int/as-cult-regulating-content-moderation-on-social-media-to-safeguard-fre/1680b2b162.

“EU Doubles down on Social Media Censorship That ‘Will Not Be Confined to Europe’ Following Concerns about Musk’s Free Speech Policy on X.” ADF International, 24 Jan. 2025, adfinternational.org/news/eu-social-media-censorship.

“The EU’s Digital Services Act.” European Commission, commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en. Accessed 28 Feb. 2025. 

Hayes, Justin. “Report: ‘Staggering Percentage’ of Legal Content Removed from Social Platforms in France, Germany and Sweden.” The Future of Free Speech, 28 May 2024, futurefreespeech.org/report-staggering-percentage-of-legal-content-removed-from-social-platforms-in-france-germany-and-sweden/. 

“The Impact of the Digital Services Act on Digital Platforms.” Shaping Europe’s Digital Future, European Commission, 12 Feb. 2025, digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms.

Portaru, Adina. “How the EU Digital Services Act (DSA) Affects Online Free Speech in 2025.” ADF International, 28 Jan. 2025, adfinternational.org/commentary/eu-digital-services-act-one-year.