Workers vs Algorithms
What can the new Spanish provision on artificial intelligence and employment achieve?
Spain has overtaken the European Union in regulating artificial intelligence in the employment field. While the EU laid the groundwork for the regulation of algorithms with the Proposal for a Regulation on artificial intelligence (PAIR), that norm is not expected to become applicable before 2023 at the earliest. On 11 May, however, Spain passed a new provision regulating algorithmic transparency in the employment field. The new norm gives workers the right to be informed about the parameters, rules and instructions through which algorithms or artificial intelligence (AI) systems affect their working conditions and determine access to employment.
Given its novelty, the provision appears ambitious at first sight. It may, however, be only a first step. Its limitations and practical consequences will determine the extent to which it can effectively help workers tackle the algorithmic conundrum in employment.
The law: What is it and where does it come from?
The provision is ground-breaking: it is the first attempt by a European State to regulate algorithmic transparency in the employment field. It is also, however, a limited norm that will need further development to achieve its goal of providing workers with a certain degree of control over the AI systems that affect them.
The provision states that:
“[The Council of Workers [of a company] shall have the right, at the appropriate interval, to:]
Be informed by the company of the parameters, rules, and instructions on which algorithms or artificial intelligence systems that affect any decision-making that may have an impact on working conditions, access to and maintenance of employment are based, including profiling.”
The Decree that implemented this new provision was originally designed to establish a presumption that delivery workers are not self-employed but employees. Case law had repeatedly concluded as much, and the matter culminated in the Spanish Supreme Court in a case against Glovo. Although this presumption only refers to riders (“people who work on delivery and distribution”), the door is most likely open to applying the same logic to other digital platform workers, should a similar case reach the courts.
The social debate that led to this legislative change has its roots in the flourishing of the gig economy, which promised a new, flexible employment model that would give workers control over their working lives. This could have been possible in a context where workers are valued as individuals with specific skills. Where workers are valued only as interchangeable parts of the production chain, however, flexibility turns into temporary work, instability and precariousness.
On these new digital platforms, as well as in large companies whose workforce is mainly hired for basic and repetitive tasks, such as Amazon, workers are deprived of their inherent human worth and treated as robots. Worse, they may themselves be controlled by robots: automated systems now decide the fate and conditions of workers without taking any human factor into account. In addition, workers are classed as self-employed, so the risks, costs, and downsides of the activity are borne by them, while they are deprived of even their most basic labour rights. They have no protection against irregular dismissal or in case of a work accident. Should a worker have any problem that prevents them from reaching the minimum rate imposed by the system, or a misunderstanding with a client, their job could be terminated: the platform would simply kick them out, without any chance to explain themselves or challenge the decision.
How does AI affect workers in practice?
The necessity of the new provision is underscored by a growing number of examples of algorithmic effects on workers. In the UK, several Post Office workers lost their jobs – and worse – due to errors in an automated system that falsely indicated they had stolen money from the company. In the Netherlands, Uber drivers sued the company after an algorithm suspended their accounts for allegedly committing fraud. The court rejected their claim because the suspensions were not considered fully automated decision-making under the GDPR (General Data Protection Regulation); in effect, the workers were left without any protection. By contrast, an Italian court ordered Deliveroo to disclose its algorithm and to remove the elements that made it discriminatory, since it did not take into account circumstances protected by employment law, such as sick leave or the right to strike.
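To illustrate the mechanism the Italian court objected to, consider a minimal, hypothetical sketch in Python. This is not Deliveroo’s actual code, which was never published; the names and data are invented for illustration. The point is structural: a reliability score that never asks why a booked session was missed penalises a strike or sick leave exactly like an unexcused no-show.

```python
from dataclasses import dataclass

# Hypothetical illustration of the kind of "reliability" ranking at issue
# in the Italian Deliveroo case: the score ignores WHY a booked session
# was missed, so legally protected absences (sick leave, strikes) are
# penalised exactly like unexcused no-shows.

@dataclass
class Session:
    booked: bool
    attended: bool
    reason: str  # e.g. "worked", "sick_leave", "strike", "no_show"

def reliability_score(history: list[Session]) -> float:
    booked = [s for s in history if s.booked]
    if not booked:
        return 1.0
    attended = sum(1 for s in booked if s.attended)
    # The discriminatory step: `reason` is never consulted, so a rider
    # on strike and a rider who simply failed to show up lose the same
    # amount of score, and with it priority access to future shifts.
    return attended / len(booked)

history = [
    Session(booked=True, attended=True, reason="worked"),
    Session(booked=True, attended=False, reason="strike"),      # protected
    Session(booked=True, attended=False, reason="sick_leave"),  # protected
]
print(reliability_score(history))  # 0.33... -> lower shift priority
```

A non-discriminatory variant would, at a minimum, exclude legally protected absences from the calculation; the court’s objection was precisely that the score’s blindness to the reason for an absence is itself a discriminatory design choice.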
In a different manner, Amazon workers – both warehouse and delivery workers – are also affected by algorithms: scoring systems determine and monitor the performance rate they are required to achieve, driving them into a state of perpetual anxiety and fear. Human needs and conditions are, of course, not taken into account by the system. Workers are unable even to go to the bathroom if they want to meet the system’s demands (not even women on their periods).
Present prospects and future consequences
Advances in algorithmic regulation may help tackle this problem. The Spanish Ministry of Employment announced that the new norm seeks to make algorithms serve workers and to ensure that they take into account not only business objectives but also human and labour rights. To this end, the norm aims to give workers control over otherwise opaque mechanisms that may affect their working conditions or the employment relationship itself, and to allow them to act when algorithms disregard the obligations that labour law imposes on employers.
However, all that glitters is not gold. The norm does not provide any mechanism for effectively exercising control over the algorithm; it only bestows a right to information about how workers are affected. Information is the first step towards control, but the next steps are missing.
It is not clear what action workers can take against the company if the algorithm proves harmful. Ordinary legal redress, for example against discrimination or non-compliance with employment rights, looks like the most feasible option. The lack of clarity, however, invites uncertainty about how the norm’s requirements and conditions will be interpreted and how it will be enforced in practice.
Additionally, it is not workers themselves who hold the right but the Council of Workers in a company: it is a collective right. Such Councils, however, only exist in companies or workplaces with 50 or more workers, so workers in smaller companies remain unprotected. Furthermore, the Council has a duty of secrecy and confidentiality regarding the information it receives from the company and cannot use it for other purposes, which may be an important disadvantage for public debate and may hinder research on the matter.
On the one hand, placing such an important right in the hands of a Council rather than directly in workers’ hands leaves room for problems that may prevent its adequate exercise. On the other hand, the fact that the Council of Workers is entitled to exercise the legal defence of workers may be beneficial: algorithmic systems affect a large number of workers, so collective action seems a better option than individual action. But the central challenge remains: it is unclear how the information obligation towards the Councils will work in practice. Firstly, the norm obliges the company to provide information periodically but does not define the period; similar information rights mention six-month and yearly intervals, which seem very long compared to the speed at which technology advances. Secondly, the norm does not further specify or define any of the elements it mentions, although its ultimate intention appears clear enough for interpretation.
How will the provision be developed?
Leaving practical application open to interpretation may be dangerous. In Germany, where algorithmic transparency for credit scoring was implemented, courts settled on a very basic right covering only the general logic behind the algorithm and a description of the parameters used, while treating the algorithm itself as a trade secret that could not be further disclosed. The result was a limited information right of little practical use, as data subjects had no real possibility of understanding how the algorithm functioned and how it reached its conclusions.
The Spanish Government intends to create an expert group to analyse the effects of algorithms and AI on employment, seeking, in its own words, to “move towards a fair and rights-based technology transition”. This suggests that the shortcomings of the provision may yet be addressed. In this vein, one of the biggest Spanish unions has already proposed broadening the scope and conditions of the new norm by creating an algorithm register, a wider explainability right, a liability system, and audit rights, similar to the provisions of the new EU-level PAIR. Some of this could even apply directly once the AI Regulation comes into force, as the PAIR labels as high-risk those systems “intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships”; high-risk systems are subject to a broad set of obligations, including transparency, data-training requirements, and explainability.
How does all this relate to EU law?
Although the new norm coincides with the PAIR, the Spanish lawmaker does not appear to have been inspired by it: no reference is made to the proposal or to the European landscape and debate. Some similarities can nevertheless be found, unsurprisingly, since the EU proposal likewise reflects the public and academic debate of the past few years.
Overarchingly, the new Spanish right interlinks with existing EU law and can be strengthened by it. Foremost, an additional protection mechanism can be drawn from Article 22 GDPR, which grants data subjects the right not to be subject to solely automated decision-making that produces legal or similarly significant effects on them. The GDPR also allows Member States to broaden this protection, which has proven insufficient in the employment field. In the same way, national norms could be developed to introduce some of the PAIR’s requirements before it is enacted and enters into force.
Even if the new Spanish right resembles Article 22 GDPR, the GDPR bestows an individual and partly limited right, as opposed to the broader, collective right to information now recognized in Spain. Nevertheless, given the Spanish norm’s lack of a redress mechanism, Article 22 GDPR may serve as an instrument to stop the processing of workers’ data once the company has provided the relevant information.
This method may, however, be insufficient in relationships between two unequal sides, where the employer holds power over the worker. This is even more blatant on big platforms such as Uber or Amazon, where workers are dispensable. There, the employer wields not merely power but absolute power, reinforced by intense control and surveillance and facilitated by AI systems aimed at subduing workers and extracting the last bit of performance from them.
Just a first step
The only way to rebalance the scale is to protect workers from algorithmic influence. Information rights on the functioning of such systems, as now provided by Spanish law, are key. They are, however, just a first step. Additional, targeted mechanisms that take into account the nature and particularities of these systems are required. Hopefully, this recent legislative change will be the first stone of a path that continues to be built by the courts, if they choose to interpret the norm by its intention rather than by what it lacks, and by lawmakers, who may continue to develop this ground-breaking provision alongside the forthcoming European Artificial Intelligence Regulation.