08 February 2022
The moderation of extremist content is prone to error, causing real-world harm
Policies intended to limit the ability of terrorist groups to organize, recruit, and incite, and of individuals to praise such groups, have been expanded in recent years via online content moderation efforts, and often result in the erasure not only of extremist expression, but also of human rights documentation, counterspeech, and art. Continue reading >>
28 September 2021
Facebook suspends accounts of German Covid-19 deniers
On 16 September 2021, Facebook suspended more than 150 “Pages and Groups operated by individuals associated with the Querdenken movement in Germany” because of “coordinated social harm”. These accounts were, undoubtedly, spreading misinformation about the Covid-19 pandemic, denying the existence of the virus and encouraging other users to resist the government. However, this type of removal has no legal basis other than Facebook’s Community Standards. Hence, this case is a prime example of how we (still) apply double standards in content moderation and of why, from a legal perspective, we need to think beyond traditional categories and expand the horizontal effect doctrine, though not solely to the advantage of the users affected by the removal. Continue reading >>
07 June 2021
Fighting Platforms and the People, not the Pandemic
To control social media-driven criticism against its handling of the COVID-19 crisis, the Indian government, led by Prime Minister Narendra Modi, can now take advantage of new powers via the Information Technology Rules 2021. These Rules empower the Modi government to counter disinformation, whose definition seems to have been stretched to include content that portrays the government negatively. How Big Tech platforms react will have a domino effect on users’ freedom of expression and right to privacy across the world. Continue reading >>
01 June 2021
India’s New Intermediary Guidelines
On 26 May, the Indian Intermediaries Guidelines and Digital Media Ethics Code 2021 (IT Rules) took effect. These new rules vest greater power over online speech in the executive, which may misuse these powers to quell dissent. Continue reading >>
18 May 2021
The UK’s Online Safety Bill: Safe, Harmful, Unworkable?
On 12 May 2021, the UK Government published the long-awaited Online Safety Bill. While the UK Government claims “global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world”, this claim is more than doubtful. Continue reading >>
11 May 2021
Trump’s Indefinite Ban
After months of waiting, the Facebook Oversight Board has upheld Facebook’s ban of former President Donald Trump. Beyond the merits, the decision underlines a trend in how the FOB applies free speech protections. The FOB’s increasing reliance on the principles of proportionality and transparency is a paradigmatic example of its ever-growing distance from the First Amendment dogma characterising US constitutionalism and its proximity to the European (digital) constitutional approach. Continue reading >>
24 February 2021
Twitter’s Modi Operandi
India is not only the world’s largest democracy; it also accounts for the largest number of internet shutdowns and takedown requests to social media companies globally. The recent stand-off between Twitter and the Government of India (GoI) over suspending more than a thousand accounts supportive of farmers’ protests ended with Twitter falling in line with the GoI’s demands. This may set a dangerous precedent for digital platforms, enabling other democratic governments to stifle online dissent. Continue reading >>
16 February 2021
The Facebook Oversight Board and ‘Context’
The standout conclusion of the Facebook Oversight Board's two hate speech decisions is that the Board's assessment of content removal relies heavily on context. This is only reasonable, as any speech issue is context-dependent. But the FOB’s context assessment is incomplete, and its decisions further highlight the flaws of Facebook’s content moderation, which likewise fails to consider context. Continue reading >>
05 February 2021
Shedding Light on the Darkness of Content Moderation
With the Facebook Oversight Board, we face a new age of private adjudication of online content, one which promises an alternative system to enforce human rights on a global scale while marginalising and hybridising constitutional values and democratic safeguards. Digital constitutionalism offers a framework for examining this new form of private adjudication of online content and its challenges. The FOB’s first cases are an opportunity to peek behind the scenes of content moderation, as well as a laboratory for studying the transnational challenges that the information society poses to global (digital) constitutionalism. Continue reading >>