European Union regulatory initiatives in the field of algorithmic management and artificial intelligence – Labour Law perspective

(Article published on Lexology, Feb 28 2024 by Elianna Maratou, Associate and Dr. Dimitris Chatzimichael, Partner)

The rapid technological developments of the ongoing fourth industrial revolution, following the introduction of artificial intelligence (hereinafter referred to as AI) and algorithmic management – which, although it emerged in the "gig economy", has rapidly spread to the conventional economy – raise novel issues in the field of Labour Law, among others. This contemporary phenomenon poses many dangers to employees' rights, to the extent that horizontal legislative intervention at European Union level (hereinafter referred to as the EU) has been deemed necessary.

As part of its digital strategy, the EU is regulating AI to ensure better conditions for the development and use of innovative technology. The EU’s overarching objective is for AI tools to be seen as complementing and enhancing employees in their work rather than displacing or controlling them.

In particular, some pioneering regulatory proposals of the EU in an attempt to tackle the above challenges are the following:

– The Proposal for a Directive on improving working conditions on digital platforms, which aims to increase transparency in the use of algorithms by platforms.

– The Proposal for a Regulation on Artificial Intelligence ("AI Act"), which, although it does not directly contain Labour Law rules, affects algorithmic management through the classification of AI systems used in the employment and management of employees as "high-risk".

– The Proposal for a Directive of the European Parliament and the Council on the adaptation of the rules on non-contractual liability to AI ("AI Liability Directive").

The first Proposal would be the first piece of EU-level legislation regulating the algorithmic management of work. It is structured around two policy pillars: a) the transfer and adaptation to work performed on or through a digital platform of individual rights already enshrined in the GDPR, together with the recognition of new rights, and b) the promotion of human intervention both at the stage of monitoring the implementation and assessing the impact of automated decisions and at the stage of their review and revision. As AI is a fast-evolving technology, the Proposal takes a future-proof approach, allowing the rules to adapt to technological change.

Article 8 of this Proposed Directive can be considered the most crucial. It enshrines the right of an employee working on a digital platform to receive an explanation from the platform for any decision taken or supported by an automated system that significantly affects their working conditions, as well as the right to have such decisions re-examined by a human. This explicit recognition of a right to an explanation is an innovation going beyond Article 22(1) of the GDPR, which merely establishes a general prohibition on decisions based solely on automated processing. That prohibition is, however, subject to the exceptions listed in paragraph 2 of the same Article, one of which permits a decision based on automated processing where it is necessary «for entering into, or performance of, a contract». As far as the employment contract is concerned, this exception could be interpreted to mean that an entirely automated decision may be taken where it is necessary for concluding or performing the contract of employment – a condition fulfilled when, for instance, an interview procedure would be extremely time-consuming because of the number of candidates and would cause an excessive accumulation of work. Paragraph 3 of the same Article also provides certain safeguards where one of the exceptions applies, at a minimum the right of the employee «to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision».

The above paragraph makes no explicit reference to the data subject's right to an explanation, and it is therefore disputed whether such a right is included among these safeguards. On one view, this right is implicitly recognized, since the right to contest a decision necessarily presupposes receiving explanations and the necessary information. Another view, however, locates the legal basis of the right in Article 15(1) of the GDPR, and in particular in the data subject's right to obtain meaningful information about the logic involved in cases where automated decision-making and profiling are permitted under the exceptions of Article 22 of the GDPR. Finally, it is positive that in this Proposal the employees' right to information covers not only significant decisions taken solely on the basis of automated processing systems, as referred to in Article 22 of the GDPR, but more generally significant decisions supported by such systems.

The “AI Act” is a proposed Regulation by the European Commission that aims to establish a common regulatory framework for AI in the EU and has cross-sectoral application. It is the first comprehensive AI law in the world, which aspires to become a global standard. This Act adopts a risk-based approach and establishes obligations according to the level of risk that AI may cause. More specifically, the regulatory framework defines four levels of risk in AI: unacceptable risk, high risk, limited risk and minimal or no risk.

High-risk systems are those which have a significant detrimental effect on the health, safety and fundamental rights of persons in the EU. Annex III of the Regulation enumerates AI systems belonging to eight (8) thematic categories, which are automatically classified as high-risk. This includes, as a single category, employment, employee management and access to self-employment. The general category of artificial intelligence systems used in employment is divided into two subcategories of systems: a) systems used for recruitment, advertising and assessment of candidates and b) systems used for making decisions about the promotion or termination of employment relationships, for assigning tasks based on individual behavior or personal characteristics and for monitoring and evaluating the performance and behavior of individuals. Recital 36 of the preamble to the revised proposal for this Regulation explains the reasons for including these systems in the category of high-risk systems. Emphasis is placed on the fact that their use entails risks of discrimination against particularly vulnerable groups and risks to data protection and privacy. However, it is now possible to provide for exceptions allowing these systems not to be considered as high risk if their use is merely ancillary to the action or decision to be taken and therefore not likely to lead to a significant risk to health, safety, or fundamental rights.

Major points of concern are, on the one hand, that AI systems complying with the specifications of the Regulation benefit from a presumption of conformity, allowing them to be installed, operated and used in the EU without further requirements, and, on the other hand, the provision for internal self-assessment by the provider itself, without the involvement of external regulators. In addition, Article 13 provides for measures to ensure transparency and information for users of high-risk AI systems – that is, for employers rather than employees and their representatives, since the concept of “user” under Article 3(4) covers the former and not the latter. These rights should nevertheless be extended to the persons actually affected, i.e. the employees[1].

The “AI Liability Directive” deals with claims for damages caused by AI systems or by the use of AI, adapting the rules on non-contractual liability to AI. The need for this Directive arose from the weakness of national liability rules, which set strict conditions for establishing tort liability, mainly because of the particularly increased evidentiary difficulties in the context of AI. In particular, the specific characteristics of AI – its complexity, autonomy and opacity – make it difficult or prohibitively expensive for the injured party to prove the conditions of a successful liability claim. In employment relationships, this problem is further compounded by the wide imbalance of power between the two parties. The proposed Directive therefore introduces, among other rules, provisions that alleviate the burden of proof for a claimant injured by an AI system when claiming compensation.

More specifically, the AI Liability Directive complements the AI Act by introducing a new liability regime that ensures legal certainty, enhances consumer confidence in AI and safeguards consumers’ claims for damages caused by AI products and services. This proposed Directive is also functionally linked to the “AI Act” through the concept of liability, and for this reason it adopts many of the latter’s conceptual definitions (such as “provider”, “user” and “artificial intelligence system”)[2].

In conclusion, although these EU initiatives undoubtedly constitute a positive step forward, a distinct, coherent and ad hoc regulation of AI in employment – free of inconsistencies, differences in the level of protection and omissions – is considered indispensable, for the aforementioned reasons and others, including the fact that platform workers are ultimately guaranteed a higher level of protection against algorithmic management than employees in traditional businesses (as the relevant proposed Directive contains more favourable provisions than the AI Act, the GDPR and several national laws). In any event, the “AI Act” is considered insufficient to effectively address the multiple hazards to employees’ rights arising from the use of AI systems, and action should be taken to revise it in a variety of areas.

 

[1]. M. Giannakourou, «The regulation of algorithmic labour administration in the EU legislative initiatives under preparation: quo vadis, Europa?», Labour Law Review, volume 82, issue 6, 2023, p. 645-664. (Μ. Γιαννακούρου, «Η ρύθμιση της αλγοριθμικής διοίκησης της εργασίας στις υπό επεξεργασία νομοθετικές πρωτοβουλίες της Ε.Ε.: Quo vadis, Europa?», Επιθεώρησις Εργατικού Δικαίου, τόμος 82ος, τεύχος 6, 2023, σελ. 645-664).

[2]. V. Sp. Douka, «Algorithmic administration, protection of employees’ personal data, employer’s civil liability: statutory law, weaknesses and gaps – EU legislation subject to vote», Labour Law Review, volume 82, issue 6, 2023, p. 802-806. (Β. Σπ. Δούκα, «Αλγοριθμική διοίκηση, προστασία των προσωπικών δεδομένων των εργαζομένων, αστική ευθύνη του εργοδότη: Θετό δίκαιο, αδυναμίες και κενά – Υπό ψήφιση ενωσιακή νομοθεσία», Επιθεώρησις Εργατικού Δικαίου, τόμος 82ος, τεύχος 6, 2023, σελ. 802-806).
