07 September 2021

Eyes Wide Open

Adapting the Digital Services Act to the Realities of Intermediary Service Provision

Intermediary service providers act like spiders in the web of the internet: They build the infrastructure of the internet as we know it, they bridge the divide between content provider and user – and yes, they feast financially on anything that gets caught in-between. In 2000, the EU enacted the eCommerce Directive with the aim of protecting intermediary service providers from liability for third-party content and thereby bolstering the budding industry. Viewed through this lens, the project was a success: Within the last two decades, new services developed rapidly, and some intermediary service providers have gained an influence on the economy, public debate and our lives in ways that seemed unfathomable only twenty years ago.

Nowadays, the question is how to harness that power: Should powerful intermediaries be bound by fundamental rights in a similar way as state actors? How can responsibility for third-party content be ascribed without bolstering the power of very large intermediaries? Anyone trying to regulate intermediary services must not only answer those questions. They must also confront a Gordian knot of fundamental rights (freedom of speech, human dignity and safety, economic rights etc.) and public interests (fair elections, public health, protection of minors etc.) with respect to various actors: users, content providers, intermediary service providers, states and other affected parties. In light of the technological developments and the legal uncertainty surrounding some of the current rules, the European Commission’s Proposal for a Digital Services Act (DSA-proposal) is a welcome initiative. While the political debate about the regulation’s policy rules has only just begun, it is of utmost importance that the new regulation both considers the current reality of intermediary service provision and provides enough flexibility for future technological developments. The proposal currently falls short of this aim. In the following, I will highlight five matters which require a better attunement to reality (please feel free to comment on other aspects!).

Scope of the rules

My first critique is that the scope of the proposed regulation is not tailored to the broad spectrum of internet services which are available on the market. The proposed rules apply to “providers of intermediary services”, and an important subset of those rules only addresses “online platforms”. Thus, the scope of the regulation hinges upon the definitions of “intermediary service” and “online platform”.

Under Art. 2 (f) DSA-proposal, only services of “mere conduit”, “caching” or “hosting” are considered intermediary services. This is too restrictive. There is no convincing reason to limit the scope of the regulation to services that can be qualified as one of the services currently listed in Art. 12 to 14 eCommerce Directive – in particular, since those distinctions reflect the state of the internet at the turn of the millennium! While recital 27 acknowledges a range of other information society services, these services are only supposed to be covered by the proposal if they qualify as conduit, caching or hosting “as the case may be” (sic!). This distinction is neither supported by a clear policy goal, nor does it contribute to legal clarity. Most importantly, as it stands, the definition of “intermediary service” excludes services that automatically compile hyperlinks and snippets, such as search engines, directories and other aggregators. This would exclude one of the most important information services, if not the most important one, from the scope of the Digital Services Act (hello, Google!).

The obligations prescribed in chapter III, sections 3 and 4 of the DSA-proposal only pertain to online platforms (section 3) or very large online platforms (section 4). According to Art. 2 (h) DSA-proposal, an online platform is the “provider of a hosting service which, at the request of a recipient of the service, stores and disseminates to the public information” (except for minor and purely ancillary features). Again, this definition is problematic, because it encompasses most host providers, including web hosting services, but excludes many services that are considered platform services under Art. 2 (2) of the European Commission’s proposal for a Digital Markets Act. I have two issues with this: First, it would seem reasonable to use a consistent definition in both regulations. More importantly, however, the rules of sections 3 and 4 DSA-proposal need to be tailored to the correct addressees.

This is particularly true for section 4, which contains obligations for large service providers (in particular, the obligation to implement a risk management system). The Commission’s decision to ascribe particular responsibility to big players is sensible: The additional responsibilities reflect the economic power and societal influence of such large players. Smaller providers, on the other hand, might not be able to shoulder the economic burden of additional responsibility and might be driven out of markets that are already non-competitive. But again, the definition of providers with systemic relevance should reflect reality. The currently proposed threshold for a very large service provider lies at 45 million users and is much too high: There are only four member states of the European Union with a population larger than that number! Furthermore, there is no reason why risk management obligations should only apply to “online platforms”, that is, host providers, and not to other providers of systemic relevance (again: hello, Google!).

Business models focusing on illegal content

Another reality check is needed with respect to the proposed rules shielding intermediaries from liability (Art. 3 et seq. DSA-proposal). These rules operate on the assumption that the intermediary service provider carries legal and illegal third-party content alike and does not seek to specifically foster illegal content. While this assumption is true for the better-known and most powerful intermediary services, the Commission ignores that there is a niche market for providers whose business model relies upon the transmission of illegal content. The liability shields do not account for intermediaries that specialize in the mediation of illegal content, or that condone the illicit intentions of a majority of their users. While recitals 18 and 20 exempt from the liability shield an intermediary service that “plays an active role” or “deliberately collaborates with a recipient of the services in order to undertake illegal activities”, this exception should be incorporated in the operative provisions of the regulation, as recitals have no operative effect of their own.

Also, recital 18 perpetuates the misguided definition of “active role” already contained in recital 42 of the eCommerce Directive: “an active role of such a kind as to give [the service provider] knowledge of, or control over, that information”. Since every host provider has control over the data it stores for its users, this definition fails to contribute to legal clarity. As a result, it is unclear whether automatic ad placement, indexing, recommender systems and other services lead to an “active role” of the service provider. The European Court of Justice’s case law hinges upon the facts of individual cases and is not always conclusive and/or convincing. The Digital Services Act should react to these realities and provide a list of indicators that preclude reliance on the liability exemptions.

Review

Intermediary service providers exert considerable power through their decisions to block and to delete illegal or harmful content or to suspend user accounts. While the proposal provides for a review of content moderation decisions, those rules only scratch the surface of the problem. Art. 17 and 18 DSA-proposal require online platforms to establish internal complaint-handling mechanisms, and to provide for alternative dispute settlement regarding the removal of content and the suspension of user accounts. The Commission’s faith in out-of-court settlements is quite endearing, but nonetheless unwarranted. The trend to outsource government functions only contributes to the private power of intermediary service providers. A rule providing for judicial redress would therefore be welcome and is needed with respect to all service providers (Art. 15 (1)(f) DSA-proposal solely entails an information obligation and is only directed towards host providers). Judicial redress is particularly important for parties that are not in a contractual relationship with the provider, such as content providers who are faced with a blocking decision by an access provider.

Furthermore, the proposed review process in Art. 17 and 18 is lopsided, as it only allows for review in cases in which platform users have been sanctioned. If, on the other hand, the platform provider has failed to take action upon a notification of illegal content (or content prohibited by terms of use), the person that flagged the content is not protected under Art. 17 and 18. The problem with such lopsided access to review is that it allows platforms to discriminate by virtue of an arbitrary enforcement of their rules. Thus, the provisions might even enhance, not limit, the providers’ ability to steer the public debate. Also, Art. 17 and 18 ignore the reality of the intermediaries’ elaborate sanction systems. Removal and suspension are certainly not the only avenues for a platform to sanction its users. It is much more subtle to downgrade a person’s content in recommender systems and timelines. It is much more effective to cut off advertising revenue for specific content (and Art. 27 (1)(b) DSA-proposal even envisages such measures). Communication may also be stifled by closing down groups. The Commission should reconsider Art. 17 and 18 in the light of these facts.

Delegation of duty with respect to misinformation and other harmful content

The proposal also does not directly confront the fact that most of the content banned via content moderation practices is so-called “harmful content”. For the purposes of this post, the term harmful content is used to describe content which is legal, but may for whatever reason be considered unethical or problematic (misinformation, nudity and pornography, depictions of violence, racism, xenophobia etc.). Under the proposal, intermediary service providers are free not to carry specific content they consider harmful. Any such restrictions must rely upon clearly worded terms of use, according to Art. 12 DSA-proposal, and may give rise to the complaint mechanisms in Art. 17 and 18 DSA-proposal. While I generally agree with the premise of this rule, there is no denying that the proposal enables intermediaries with market power to suppress legal content according to their terms of use. In effect, governments are thereby delegating the task of setting adequate rules for the online communication process to the intermediaries.

At the same time, Art. 26 (1) (b) and (c) of the proposal require very large online platforms to introduce a risk management system which addresses negative effects of their services on fundamental rights as well as the intentional manipulation of their service with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, electoral processes or public security. Thus, very large intermediaries are not only entitled to define permissible content; they may also be pressured to suppress certain content, even though that content is legal.

In my view, the EU and Member States should take a stronger stance on harmful content, such as misinformation. This includes rules prohibiting individuals from spreading misinformation and from coordinating misinformation by virtue of inauthentic uses of intermediary services. As long as lying to and misleading the public is not illegal, intermediary services are in principle entitled to carry such content – and their actions to limit the spread of such content may arguably constitute an infringement of free speech. The proposal should also specifically require service providers to undertake steps to combat misinformation, for example, by monitoring groups that practice malicious compliance, by setting disincentives for harmful content, by suspending users and content across platforms, by detecting new registrations of suspended users and via efforts to detect inauthentic use. Art. 28b of the Audiovisual Media Services Directive could serve as a template here, as could the EU’s Assessment of the Code of Practice on Disinformation. At the very least, and in light of Art. 17 and 18 DSA-proposal, the proposal should clarify that service providers are entitled to remove content and suspend users for the purpose of combatting coordinated harmful use, even if the specific individual content is not illegal.

The adverse incentives of advertising and recommendation systems

Speaking of misinformation and harmful content: As Zeynep Tufekci has correctly noted, “we’re building a dystopia just to make people click on ads”. Internet users have come to expect services without financial remuneration. This leads intermediary service providers to focus on user engagement and data hoarding in order to sell more ads and finance their seemingly gratuitous services. Unfortunately, user engagement is driven to a significant extent by misinformation, extremist content and general outrage. Recommender systems that rely heavily on user engagement thus push problematic content. One way to break this vicious circle is to address revenue streams and the platforms’ focus on user engagement. While the proposal takes some steps in this direction, I do not think those steps go far enough.

Art. 24 DSA-proposal requires online platforms to guarantee some advertisement transparency to the ad recipient, while Art. 30 DSA-proposal requires very large online platforms to create repositories which reveal information about the advertisements they display (content, time period, users targeted, person on whose behalf the ad was displayed). In my view, this is putting the cart before the horse. Yes, advertisements may be misleading, but the Unfair Commercial Practices Directive provides an avenue to deal with such ads. To combat misinformation, it is necessary to look at the content which is financed by virtue of these ads (both illegal and harmful content). Facebook and Google in particular have managed to convince advertisers – without any real proof – that their data troves allow them to efficiently target consumers. Transparency for advertisers regarding the environment in which their ads are displayed is sorely lacking, and advertisers are resorting to brand safety companies to obtain reliable information. A mere database of advertisements will not help solve this problem, nor will the information requirements suggested in Art. 5 (g) and 6 (g) DMA-proposal. Rather, large online platforms should inform both advertisers and the public about the context in which specific ads are displayed and about the trustworthiness of the sponsored content. At least then it wouldn’t come as a surprise to advertisers if they found themselves financing extremist content and misinformation – and the public would have an avenue to lobby companies for responsible marketing strategies.
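To make the difference to a mere ad repository concrete, the following is a minimal, purely hypothetical sketch of what a placement-level transparency record could contain. Every field name (ad_id, placement_context, trust_signals and so on) is my own illustrative assumption; none of this is prescribed by Art. 30 DSA-proposal or any other provision.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AdTransparencyRecord:
    """Hypothetical record a very large platform could expose to advertisers and
    the public for each ad placement (illustrative only, not a DSA requirement)."""
    ad_id: str                      # identifier of the advertisement
    sponsor: str                    # person on whose behalf the ad was displayed
    display_period: str             # time period during which the ad ran
    targeting_criteria: List[str]   # main parameters used to target recipients
    placement_context: List[str]    # content next to which the ad was displayed
    trust_signals: Dict[str, str] = field(default_factory=dict)  # e.g. fact-check or
                                                                 # brand-safety ratings

# Example: an advertiser (or the public) could see that an ad ran next to flagged content.
record = AdTransparencyRecord(
    ad_id="ad-42",
    sponsor="Example Corp",
    display_period="2021-08-01/2021-08-31",
    targeting_criteria=["age 18-35", "interest: fitness"],
    placement_context=["video-123 (flagged: health misinformation)"],
    trust_signals={"brand_safety_rating": "low"},
)
print(record.placement_context, record.trust_signals)
```

The point of the sketch is simply that the decisive information lies in the placement context and its trust signals, not in the advertisement itself.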

With respect to recommendation systems, Art. 29 DSA-proposal requires very large online platforms to provide their users with some information regarding the functionalities of the recommendation system and with recommendation options not based on profiling. However, as experience with the GDPR has shown, one cannot expect individual users to solve systemic problems. Apart from Art. 29 DSA-proposal, the proposal solely relies upon platform risk management to address recommendation systems (Art. 26 (2), 27 (1) (a) DSA-proposal). This is insufficient. The proposal should contain a rule requiring recommendation systems of very large online platforms not to focus on user engagement alone and to prioritize quality content. Also, real-time information regarding the content which was most recommended, displayed and shared via the intermediary service is needed. Currently, tools which allow such insights are either reverse engineered, such as the Citizen Browser project, or offer limited insights on a voluntary basis only (e.g. Facebook’s CrowdTangle).
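What a rule against engagement-only ranking could mean in practice can be sketched in a few lines. The toy example below is purely illustrative: the items, the engagement predictions, the quality scores and the weighting are invented assumptions, not anything the proposal prescribes or any platform’s actual system.

```python
# Hypothetical illustration: ranking items by predicted engagement alone versus by a
# blend of engagement and an (assumed) quality/trustworthiness signal.

items = [
    # (item, predicted engagement, assumed quality score, both in [0, 1])
    ("outrage-bait post", 0.90, 0.10),
    ("reported news article", 0.55, 0.90),
    ("friend's holiday photos", 0.40, 0.80),
]

def engagement_only(item):
    _, engagement, _ = item
    return engagement

def blended(item, quality_weight=0.6):
    # Down-weights engagement and rewards quality; the weight is arbitrary.
    _, engagement, quality = item
    return (1 - quality_weight) * engagement + quality_weight * quality

print([name for name, *_ in sorted(items, key=engagement_only, reverse=True)])
# ['outrage-bait post', 'reported news article', "friend's holiday photos"]
print([name for name, *_ in sorted(items, key=blended, reverse=True)])
# ['reported news article', "friend's holiday photos", 'outrage-bait post']
```

The sketch only shows that once a quality signal carries real weight, the outrage-optimized item no longer tops the feed; how such a signal would be defined and audited is, of course, the hard regulatory question.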

Conclusion

As I have argued above, the institutions of the European Union need to take a closer look at the realities of intermediary service provision before enacting the Digital Services Act. This concerns the roles different types of service providers play in the information age, the definition of providers with systemic relevance and the different ways content is promoted, ranked and paid for. Also, the proposal features a remarkable retreat from traditional state functions: Instead of granting affected parties a right to judicial redress, the proposal only provides service users with a right to complaints mechanisms and to alternative dispute resolution systems. Private actors with considerable market power are allowed to define what constitutes harmful use and thus to banish legal content from the public debate. In a way, the proposal treats powerful providers as mini-governments and allows them to cement their influence on the public debate.

But just as it is easier to destroy than to build, it is easier to criticize legislation than to draft it. There are no easy fixes for illegal and harmful content online. Unsurprisingly, the European Commission’s proposal is not a sword which cuts the Gordian knot of fundamental rights in the information age. And while I do believe that the proposal should be further adapted to the realities of intermediary service provision, the Commission is to be commended for at least taking a stab at that Gordian knot.

