General and specific monitoring obligations in the Digital Services Act
Observations regarding machine filters from a private law perspective
Platform regulation can be seen as a re-assertion of public power over private actors. Self-regulation leaves power with private actors, whereas legal regulation creates publicly defined boundaries and influences behaviour. The Digital Services Act (DSA) contains regulation that does not directly interfere with platforms’ freedom to operate but indirectly creates incentives for risk-aware behaviour, for example with regard to personality right violations. In the context of the DSA’s general and specific monitoring obligations in particular, indirect regulation can encourage innovative and pragmatic decision-making, although further guardrails are necessary.
I. Platforms as a challenge for the law, liability as indirect regulation
Digital platforms mediate transactions using extensive data analysis and automated decision-making. Often, one or two sides of the transaction are private parties, and for them, using the platform is quite often free of charge – that is, they pay with data instead of money. These personal data-driven business models bring about many societal and legal challenges, not only concerning the autonomy of platform users but also regarding the protection of fundamental and civil rights of both users and third parties.
The areas of law concerned with regulating digital platforms include not only direct regulation but also competition law (see Podszun), data privacy law and, not least, liability law. Indirect regulation, where feasible, offers several advantages: foremost, it relies on decentralised decision-making and therefore allows for more flexibility and demands less knowledge on the part of the legislator. Both aspects are advantageous in rapidly developing areas of society, especially where changes are technology-driven. And the digital transformation is perhaps the single most important technology-driven societal change of the present.
Liability law is a classic example of indirect regulation, creating incentives for liable parties to choose certain risk levels. However, some of the risks posed by the use of digital technology (and digital platforms in particular) are less clear-cut than risks to traditional interests like life and limb. This is especially true for privacy risks, addressed by data protection law, and other personality risks. Therefore, liability rules concerning the infringement of personality rights are of particular interest when it comes to regulatory efforts in the digital sphere.
A fundamental problem in this context is the intermediary status of digital platforms: they do not commit infringements directly but enable infringing acts by third parties by providing the platform infrastructure. This problem is addressed in the Directive on electronic commerce and is currently part of the DSA proposal. The general aim is, of course, to strike an appropriate balance between the advantages of using digital platforms, especially for the freedom of expression and information, and the mitigation of the associated risks.
II. General and specific monitoring obligations and the role of automated filters
The question of how best to strike a balance between the protection of personality rights and the fundamental rights of third parties forms the background of this debate. When it comes to personality rights, online platforms create a vastly enhanced risk of violation. For one, infringers can hide behind a pseudonymous or anonymous account and thus avoid legal enforcement. For another, the infringements are of greater intensity because they can reach a very large audience. At the same time, however, digital platforms also provide a considerably enhanced space for the exercise of the freedoms of expression and information.
An important aspect of the DSA proposal is the continuation of the rules on monitoring obligations already developed under the Directive on electronic commerce. General monitoring describes a process whereby an intermediary is obliged to introduce technological measures that monitor all user activity on its services. Such general monitoring obligations remain prohibited under Article 7 of the DSA proposal (which contains the same rule as Article 15 of the Directive on electronic commerce). However, the CJEU differentiates between general monitoring obligations and monitoring obligations in specific cases, which may be ordered by national authorities. Recital 28 of the proposal now expressly upholds this distinction.
When it comes to the removal obligations of platform providers, the CJEU distinguishes between three categories of unlawful content: content already uploaded, identical content uploaded in the future, and equivalent content. The obligations to remove unlawful content (“take down”) and identical content (“stay down”) are undisputed. Controversies arise with the category of equivalent content, which is not syntactically identical but semantically similar. Not allowing injunctions barring such equivalent content invites circumvention; allowing them might, in effect, lead to a quasi-general monitoring obligation. The CJEU gave priority to the first concern and decided that an “injunction must be able to extend to information, the content of which, whilst essentially conveying the same message, is worded slightly differently, because of the words used or their combination, compared with the information whose content was declared to be illegal”.
The reason the CJEU allowed specific monitoring obligations in Glawischnig-Piesczek is that the Court considers them practically feasible. The Court established a concept of “specific elements” which must be identified in the injunction. The order, in turn, must be “limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction”. The key passage of the judgement, however, is about automation: the protection, according to the CJEU,
“is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies”.
In effect, the CJEU limits monitoring obligations to those that are feasible with the help of automated tools.
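To illustrate what such a machine-feasible, “specific elements” filter might look like, the following sketch (a hypothetical Python example; the illegal statement, the specified elements and the sample posts are all invented for illustration and are not taken from the judgment) flags only posts that either reproduce the content declared illegal or contain all the elements specified in the injunction, without any independent assessment of meaning:

```python
# Minimal sketch of a "specific elements" filter in the spirit of
# Glawischnig-Piesczek: flag only posts that either reproduce the content
# declared illegal ("stay down") or contain all elements specified in the
# injunction. All strings below are hypothetical examples.

def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial variations still match."""
    return " ".join(text.lower().split())

def matches_injunction(post: str, illegal_content: str, specified_elements: list[str]) -> bool:
    """Return True if the post would have to be removed under the (hypothetical) injunction."""
    post_n = normalise(post)
    # 1. Identical content: the post reproduces the content declared illegal.
    if normalise(illegal_content) in post_n:
        return True
    # 2. Equivalent content: the post contains all elements specified in the
    #    injunction; no independent assessment of meaning is carried out.
    #    (Simple substring matching keeps the sketch short.)
    return all(normalise(element) in post_n for element in specified_elements)

if __name__ == "__main__":
    illegal = "X is a corrupt oaf"                       # hypothetical illegal statement
    elements = ["x", "corrupt"]                          # hypothetical specified elements
    posts = [
        "Did you hear that X is a corrupt oaf?",         # identical -> flagged
        "X is corrupt, everyone knows it",               # contains all elements -> flagged
        "X gave a speech yesterday",                     # neither -> not flagged
    ]
    for p in posts:
        print(matches_injunction(p, illegal, elements), "-", p)
```

The point of the sketch is that such a filter operates purely syntactically: whether the specified elements actually capture the defamatory message depends entirely on how carefully the injunction is drafted.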
III. Automated filters and the balancing of interests
Is the insistence on automated filters good news for the balancing of personality protection and third parties’ fundamental rights? In discussing this question, factual and normative problems must be distinguished.
A central problem with automated filters is that they struggle with semantics. This is aggravated by the fact that personality right violations are largely dependent on context, unlike, for example, copyright violations, where context is only relevant in special cases such as parodies. But the debate coincides with rapid developments in artificial intelligence, which increasingly enable semantically sound decisions. Therefore, as far as it is practically possible to include the relevant context as a specific element in the injunction order, automated filtering might be feasible in the long term.
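How semantically informed filtering of “equivalent” content could work can be sketched as follows (purely illustrative: a real system would use a learned sentence-embedding model, whereas the stand-in below uses simple word counts so that the sketch runs without dependencies, and the similarity threshold is an arbitrary assumption rather than a legal or technical standard):

```python
# Illustrative sketch only: flagging "equivalent" content via vector similarity.
# embed() is a stand-in for a learned sentence-embedding model; here it returns
# plain word counts so the example runs on its own. The 0.8 threshold is an
# arbitrary assumption, not a legal or technical standard.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts. A real system would use a sentence-embedding model."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_equivalent(post: str, illegal_content: str, threshold: float = 0.8) -> bool:
    """Flag a post whose representation is sufficiently close to the illegal content."""
    return cosine_similarity(embed(post), embed(illegal_content)) >= threshold

if __name__ == "__main__":
    illegal = "X is a corrupt oaf"                          # hypothetical illegal statement
    print(is_equivalent("X is a corrupt fool", illegal))    # similar wording -> True
    print(is_equivalent("X opened a new bakery", illegal))  # unrelated -> False
```

Whether such a threshold-based comparison still satisfies the CJEU’s requirement that the host provider need not carry out an “independent assessment” is exactly the kind of question for which legal guardrails are needed.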
This leaves the second problem: the influence of machines on fundamental rights (cf. Daphne Keller, GRUR Int. 2020, 616). Concerning personality protection, it seemingly makes a lot of sense that the technology which contributed to an enlarged risk (i.e. information technology in the form of online platforms) should also be used, in the form of automated filters, to minimise that risk as much as possible. From a fundamental rights perspective, the use of automated filters would be the less restrictive measure if the (only) alternative were to ban high-risk online platforms entirely. Platforms serve as the central forums for the exchange of views and ideas in a digital society – shutting them down can only be considered as the ultima ratio. Still, technology cannot be the solution to all technologically generated problems. Evgeny Morozov warned against the “folly of technological solutionism”. However, there is no reason not to use digital technology – like any other technology – as a tool for specific legally sanctioned purposes. In fact, the point of indirect regulation is to create incentives for innovative solutions that promote legally accepted objectives.
Of course, indirect regulation does not mean the complete absence of regulation. Rather, it is about encouraging innovative and pragmatic decision-making within a clear legal framework. Where guardrails are necessary, legal rules must be introduced. Therefore, with respect to monitoring obligations, put-back claims are an important additional feature (cf. Specht-Riemenschneider, at 51-69). In the area of liability for copyright infringement, such a feature (or at least something similar) has already been introduced by the German legislator. Section 18 (4) UrhDaG (Urheberrechts-Diensteanbieter-Gesetz, Copyright Service Provider Act, for an overview see Hofmann), which entered into force on 1 August 2021, stipulates that “after an abusive blocking request with regard to works in the public domain or those whose free use is permitted by everyone, the service provider must ensure […] that these works are not blocked again”. Regarding the enforcement of put-back claims, internal platform complaint mechanisms are becoming increasingly important. Only recently, the German legislator amended the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG): According to section 3b NetzDG, providers of social networks must provide an effective and transparent procedure with which removal decisions can be reviewed. A similar “internal complaint-handling system” is also included in the DSA proposal (Art. 17).
Moreover, the systemic risks of machine filters must be addressed separately, as Article 26 of the DSA proposal does for very large online platforms. This is an (important) attempt to address another risk of digital platforms which is even more amorphous than privacy risks: autonomy risks. Systemic risks may be too important for society as a whole, and too difficult to address by creating incentives for individuals, for the legislator to rely on indirect regulation. However, in the context of digital regulation, it is always important to consider what function the instrument of indirect regulation can fulfil.