With roughly one moderator for tens of thousands of users, social media platforms rely heavily on automated systems to remove millions of posts each year. Yet most moderation decisions go unchallenged—appeals account for well below one per cent across most platforms. Newly released DSA transparency reports, covering the second half of 2025, lay bare both the scale of enforcement and its limits.

The data, published by Meta, TikTok, X, LinkedIn, and Snapchat, covers the second half of 2025 and falls under the Digital Services Act. The law requires platforms with more than 45 million monthly EU users to disclose detailed moderation data twice a year.

The reports also point to broader patterns in how Europe’s moderation ecosystem works. Enforcement is increasingly driven by algorithms, and users rarely push back against decisions made about their content.

Different platforms, different problems

The type of harmful content varies significantly by platform, reflecting how each service is used. On Facebook and Instagram, many complaints arise from disputes between users—reports frequently concern defamation or harmful speech linked to personal conflicts or public discussions.

On X, speech-related issues are even more dominant. Reports of illegal or harmful speech, particularly hate speech, make up the largest share of complaints, reflecting its role as a forum for political debate and commentary.

On LinkedIn, the most common reports concern intellectual property violations, especially copyright disputes. This reflects the professional nature of the platform.

TikTok, by contrast, receives a broader mix of complaints related to public security risks, privacy violations, and the protection of minors, reflecting its younger audience. On Snapchat, the most frequently reported category is cyber violence, followed by scams and fraud.

The rise of automated moderation

Content moderation is increasingly shifting from manual review toward automated filtering. Across the industry, automation has become essential for managing the volume of content uploaded every day. Platforms rely on machine-learning systems to detect patterns associated with harmful speech, spam behaviour, prohibited imagery, or coordinated manipulation—often before human moderators ever see the content.

TikTok offers the starkest illustration. The platform reported removing around 112 million pieces of policy-violating content between July and December 2025—93.8 per cent of them automatically, according to its transparency report. It also reported a precision rate of 97.6 per cent, meaning nearly all automated decisions were confirmed as correct.
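To give a sense of the absolute numbers behind those percentages, a rough back-of-the-envelope calculation is sketched below. It uses only the figures reported above and assumes the stated precision rate applies uniformly to all automated removals; it is illustrative, not an official breakdown.

```python
# Rough arithmetic from TikTok's reported figures for July-December 2025 (illustrative only).
total_removals = 112_000_000   # pieces of policy-violating content removed
automated_share = 0.938        # 93.8 per cent of removals were automated
reported_precision = 0.976     # 97.6 per cent of automated decisions confirmed correct

automated_removals = total_removals * automated_share
# Assumption: the precision rate applies uniformly to all automated removals.
incorrect_automated = automated_removals * (1 - reported_precision)

print(f"Automated removals:         {automated_removals:,.0f}")   # ~105 million
print(f"Implied incorrect removals: {incorrect_automated:,.0f}")   # ~2.5 million
```

Even at a precision rate above 97 per cent, the sheer volume involved would leave millions of removals that were not confirmed as correct, which is part of why researchers want to probe these figures further.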

They may raise more questions than they provide answers.
—Asha Allen, Secretary General, CDT Europe

But these figures do not necessarily settle the question of how well such systems perform in practice. As Asha Allen, Secretary General at CDT Europe, a digital rights organisation, puts it, “these findings are useful and allow researchers to dig deeper, as the DSA intends. However, they may raise more questions than they provide answers”.

The reliance on automation reflects the scale of moderation, she notes, but “the effectiveness of these systems should still be probed further. They often perform poorly when accounting for context. Significant resource gaps also remain for minority languages.”

The human side of moderation

Despite the growing role of automation, human moderators remain central to enforcement. The size of a platform’s moderation workforce does not necessarily correspond to the volume of complaints it receives.

LinkedIn reported more moderators than X, around 1,460 compared with 1,059, despite receiving far fewer user notices of illegal content. Meta reported the largest moderation infrastructure, with 7,704 moderators covering EU languages across Facebook and Instagram combined. TikTok followed with 3,674, although the vast majority are external contractors.

Uneven language coverage

One of the concerns behind the DSA transparency requirements is whether platforms moderate content equally across Europe’s many languages. Regulators have long argued that uneven language coverage could create systemic risks, particularly where harmful content spreads in less widely spoken languages.

All very large platforms report having moderators with linguistic expertise covering EU languages. Yet the data suggests that resources remain concentrated in major languages such as English, French, German, and Spanish.

In smaller linguistic markets, moderators frequently handle multiple languages simultaneously. Because automated systems typically train first on datasets dominated by major languages, smaller linguistic communities may face slower or less accurate moderation.

A low appeals rate

Despite the large volume of moderation actions, appeals account for well below one per cent of decisions across most platforms. Under the DSA, very large platforms must provide internal complaint mechanisms allowing users to appeal once content is removed. When appeals do occur, platforms may uphold, modify, or reverse the original enforcement action.

A substantial share of appealed decisions are overturned. On LinkedIn, nearly 69 per cent of appeals resulted in a reversal, compared with around 46 per cent on TikTok, 40 per cent on Instagram, and 30 per cent on Facebook. While these figures apply only to appealed cases, they suggest that users who challenge moderation decisions often succeed.

Yet appealed cases account for only a tiny share of total moderation actions, meaning most decisions are never contested. As Ms Allen notes, “given the percentage of overturned decisions for those who do appeal, it leads to the question of how many of the overall removals may in fact be potentially erroneous or down to inadequate automated decision making”.
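A simple illustration shows why the low appeals rate limits what the reversal figures can tell us. The sketch below pairs a reported reversal rate with a purely hypothetical appeal rate, since the reports indicate only that appeals are well below one per cent of decisions; the totals are illustrative and not drawn from any platform's actual data.

```python
# Illustrative sketch only. The reversal rate is taken from the reports,
# but the appeal rate is a hypothetical stand-in: the reports show only
# that appeals are well below one per cent of decisions.
decisions = 1_000_000     # hypothetical number of moderation decisions
appeal_rate = 0.005       # hypothetical: 0.5 per cent of decisions appealed
reversal_rate = 0.46      # e.g. TikTok's reported share of appeals reversed

appealed = decisions * appeal_rate
reversed_on_appeal = appealed * reversal_rate

print(f"Appealed decisions: {appealed:,.0f}")            # 5,000
print(f"Reversed on appeal: {reversed_on_appeal:,.0f}")  # 2,300
# Even with nearly half of appeals succeeding, reversals cover only about
# 0.2 per cent of all decisions; the error rate among the uncontested
# majority stays unknown.
```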

The picture is further complicated on X, where around 71 per cent of appeals did not lead to a decision. Among the cases that were resolved, roughly one in five resulted in a reversal.

Government orders and data requests

Governments increasingly use social media platforms not just to remove illegal content, but to identify individuals involved in potential offences. National authorities can issue orders requiring platforms either to take down content or to hand over user information.

Across the platforms analysed, requests for user information significantly outnumber orders to remove content. Snapchat received the largest number of information orders, 18,400, followed by TikTok and X.

Civil society groups have also raised concerns about the scale of these requests. Ms Allen describes the finding that requests for user information outnumber content removal orders as “very concerning”. She warns of “the potential for government overreach in the enforcement of the DSA” and calls for greater transparency and oversight.