28 September 2021

Facebook suspends accounts of German Covid-19 deniers

Can “Coordinated Social Harm” be a justification for limiting freedom of expression?

On September 16, 2021, Facebook suspended more than 150 German “Pages and Groups operated by individuals associated with the Querdenken movement in Germany” because of “coordinated social harm” (the whole announcement can be found here). These accounts were, undoubtedly, spreading misinformation about the Covid-19 pandemic, denying the existence of the virus, and encouraging other users to resist the government. However, this type of removal has no legal basis other than Facebook’s Community Standards. Hence, I argue that it is a prime example of how we (still) apply double standards when discussing content moderation. Moreover, it shows that, from a legal perspective, we need to think beyond traditional categories and expand the horizontal effect doctrine, though not solely to the advantage of the users affected by the removal.

“Taking Action Against a Querdenken-Linked Network in Germany”

Facebook describes the targeted users as follows: “The people behind this activity used authentic and duplicate accounts to post and amplify violating content, primarily focused on promoting the conspiracy that the German government’s COVID-19 restrictions are part of a larger plan to strip citizens of their freedoms and basic rights.” Indeed, the Querdenken movement is notorious for denying the pandemic, and its calls for action have led to violence against the press and against those who think differently. The movement was placed under state surveillance last December because of its growing radicalization and closeness to the far right. Querdenken members mostly use social media platforms and private messaging apps to communicate and propagate their ideology. Most recently, a man working at a gas station was shot dead by a customer whom he had repeatedly asked to wear a mask inside the shop. This tragic event illustrates, once again, that spreading conspiracies online can lead to offline violence.

The world’s largest social media platform decided to act against the Querdenken movement collectively because it constitutes “a tightly organized group, working together to amplify their members’ harmful behavior and repeatedly violate our content policies”. Facebook defines coordinated social harm campaigns as “networks of primarily authentic users who organize to systematically violate our policies to cause harm on or off our platform”. Until now, Facebook had acted only against coordinated inauthentic behavior. This change in policy brings us to the heart of the question: what are legitimate grounds to remove content that might be protected by freedom of expression and is deemed lawful under German law? Can the horizontal effect of fundamental rights limit the principle of contractual freedom?

Lawful, but unwanted speech – a recurrent question

Unlike the First Amendment in the US, Article 5 paragraph 2 of the German Basic Law allows speech-restricting laws against categories such as defamation, incitement to violence, and more. The prominent NetzDG (Network Enforcement Act) of 2017 requires social media platforms to implement a complaint tool allowing their users to flag content that is unlawful under a number of relevant penal law provisions (the questionability of this law has been much discussed elsewhere, e.g., in this paper). What is noteworthy in this context is that the spreading of false information and the proclamation of conspiracy ideologies are not forbidden by law and are therefore not covered by the NetzDG. It is thus at the platforms’ discretion whether or not to remove such content. They are not bound by the NetzDG – neither with regard to its procedures for prompt removal nor to its legal definitions.

Regulatory responses to mis- and disinformation are much discussed these days, but experts conclude that the state should not interfere by means of statutory law in matters of truth, as long as the statement in question does not infringe legally protected rights. Content-based regulation of speech is generally rejected because it presupposes an ex ante verdict, for instance about the veracity of a statement of fact. When the latter is mixed with the expression of viewpoints, it is almost impossible to distinguish between protected and unprotected statements. Moreover, countering conspiracy ideologies by criminal law is not only a violation of freedom of expression but most probably also highly ineffective, because it does not help solve the actual problem. When German lawmakers planned to amend the NetzDG last year, they also looked into the possibility of re-introducing section 88a of the German Penal Code to counteract the spread of aggressive opinions and calls for violence. This provision sanctioned the ‘anti-constitutional endorsement of crime’. As I have explained here, this type of law leads nowhere and constitutes a slippery slope towards an unconstitutional “Denkverbot” (prohibition of free opinion formation). This is due to the coercive nature of state action, which does nothing to reduce the threat emerging from such content. When assessing the necessity of a speech-restricting law, we always have to carefully gauge its potential for abuse of power and its long-term consequences for the deliberative character of the public sphere. Because the latter must be protected by the state, the risks for freedom of expression generally outweigh other considerations.

Now, again, we are torn between the prohibition of harmful content and the protection of freedom of expression. This question isn’t new, of course, but when it comes to content moderation on social media, traditional doctrines and categories reach their limits. As I have argued previously, the goal should not be to uphold the ‘free marketplace of ideas’ no matter what, but to protect the societal goals enshrined in freedom of speech. And these goals might include the safety of others as well as the integrity of deliberative spaces.

Applying principles designed for state action

So, according to which standards should we review Facebook’s action against the Querdenken network? Legality, necessity, adequacy, and proportionality are the principles we refer to when reviewing violations of fundamental rights by the state. Social media platforms are corporations, not state actors. Nevertheless, courts might require private actors to follow such principles before sanctioning a customer, due to the so-called indirect horizontal effect of fundamental rights, which the German Federal Constitutional Court (BVerfG) introduced in its 2018 Stadionverbot decision. This ruling was also mentioned in a recent decision by the German Federal Court of Justice (BGH) on whether Facebook was allowed to remove user-generated content and suspend accounts. The indirect horizontal effect doctrine (“mittelbare Drittwirkung”) unfolds within the general clauses of civil law: where fundamental rights are to be taken into account, the civil courts must balance the conflicting fundamental rights of the parties.

Applied to the present case, one could argue that the prohibition of coordinated social harm did not exist as a category, neither in law nor in Facebook’s terms of service, until Facebook acted against the Querdenken network. The lack of a legal basis could thus constitute an infringement of the principle of legality (if, again, there is a horizontal effect of constitutional rights). However, to be fair, the movement had already violated other grounds for removal in Facebook’s terms of service, such as incitement to violence, bullying and harassment, or harmful health misinformation; “Coordinated Social Harm” merely merges these policies. Similarly, the suspension of a limited number of accounts suggests a rather proportionate measure. Facebook observed the targeted accounts for a while and reached a differentiated decision (“While we aren’t banning all Querdenken content, […]”). With respect to collective sanctions, however, we enter a questionable area that highlights another issue: to what degree do we expect platforms to moderate content individually rather than banning users merely because they are part of a network? According to the standards commonly applied to state actors, sanctioning a critical viewpoint expressed within a group would require an individual act of support sufficient to justify the sanction. Such considerations would require case-by-case decisions for the category of “Coordinated Social Harm”, instead of the suspension of whole groups and connected accounts.

Regarding the principle of necessity, there is probably no ideal moment for such a measure, but as the pandemic has slowed down and the government is easing restrictions, the Querdenken movement has actually become less popular. Coming just a few days before the German federal election, the move raises the question whether Facebook wanted to make a good impression or whether there was a real need for action. Journalist Lisa Hegemann argues that Facebook’s reaction comes, as always, too late and raises the question we have been pondering for years: who may legitimately make the rules for speech online?

A gradual approach to “mittelbare Drittwirkung”

This brings us back to the indirect horizontal effect doctrine and how the BVerfG’s jurisprudence to date could be interpreted. The Court has delivered several rulings since 2011 that can serve as grounds for an expansion of the doctrine (namely Fraport, Bierdosen-Flashmob, Stadionverbot, and Der III. Weg). In Fraport, the BVerfG acknowledged that “private persons [could be] burdened similarly or to exactly the same degree through the indirect application of the fundamental rights, irrespective of their own fundamental rights, in particular if they come to acquire in practice comparable positions as duty holders or guarantors as the state”. While some understand this as an invitation to apply the right to freedom of expression directly between platforms and their users, I am convinced that the Court is actually referring to the degree of protection: courts must hold platforms to a higher standard when balancing the users’ freedom of expression against the platforms’ own rights. In other words, platforms need to be careful when moderating lawful content, observe procedural standards (see supra), and pay particular attention to the very high value of freedom of expression. However, this does not exclude a right to moderate harmful content per se. In Der III. Weg, the BVerfG listed several criteria to consider, including the platform’s own profile (“Ausrichtung”). Further criteria enumerated in this ruling are the degree of market dominance, the degree of the users’ dependency on the platform, and the interests of the platform operators and other third parties concerned. With regard to the profile, the BGH found that Facebook had a right to moderate content that would deter people from using its platform. This leads to the conclusion that the right of platforms to set a framework for permissible speech is protected – as long as it matches the platform’s profile. If Facebook claims to “bring the world closer together” and to be accessible to everyone, it will take more to justify the removal of lawful content than if there were a narrower predefined profile, such as professional communication on LinkedIn.

All in all, such a gradual approach to measuring the need to apply fundamental rights indirectly between social media platforms and their users offers an alternative to the categorical approaches discussed so far. It requires platforms to observe higher standards when sanctioning lawful content, in line with their responsibility as the infrastructure of the digital public sphere.

