The AI Act should say ‘goodbye’ to the “GDPR Jigsaw” on Automated Decision-Making

Despite the intention to safeguard data subjects against Automated Decision-Making (ADM), Article 22 of the GDPR would need to be scientifically sound, less ambiguous and free of obsolescence to address the threats posed by AI. To date, there is no clear legal path forward in sight that reconciles data protection, competitiveness and innovation. Consequently, Title XII of the proposed AI Act should “debug” this regime by repealing it.

An article by Diogo Morgado Rebelo

AI systems learn from historical patterns to produce outputs – such as content, predictions or recommendations – that may be deemed privacy-unfriendly or invasive. The way they use personal data can bring both advantages and serious trouble for individuals. However, the regime currently in place may not be aligned with the latest engineering practices, potentially leading to non-compliance with the General Data Protection Regulation (GDPR). Notwithstanding the new proposal for an AI Act, presented on 21 April 2021, Article 22 of the GDPR still prohibits, in vague and open-ended terms, controllers from using AI-based models to make decisions that affect data subjects’ legal sphere or have a similarly significant impact on them. Under the pretext of technological neutrality, this regime pays no attention to crucial issues such as the control dilemma, explainability and indirect discrimination, rendering the GDPR ineffective with regard to ADM.

Let us take a brief look at some of the shortcomings:

  1. Firstly, most human interventions, whether ex-ante or ex-post, may in practice become redundant or dysfunctional token gestures. From the control standpoint, AI-based systems most often produce decisions that fall under the “solely (…) automated processing” referred to in Article 22(1) of the GDPR. Having humans oversee those processes does not make the outcome partially automated, nor should it be seen as a sufficient safeguard. Even increasing interoperability does not stop AI-based models from being more in charge than ever before, often leaving humans out of the decision-making loop. Nowadays, human agents face Big Data scenarios with unsteady attention levels, a lack of expertise and insufficient reaction time. It is therefore premature to assert that the present tools for Human-Computer Interaction can effectively trace machine-made decision-making in real time, which makes it more difficult to prevent failures, opacity or biases, as intended by Recital 71. As is now clear, Article 22(3) of the GDPR portrays an organisational measure of ideological convenience that decreases the accuracy of outputs, thereby implying an infringement of the principle laid down in Article 5(1)(d) of the GDPR. Moreover, as regards high-risk systems, it is not yet foreseeable how this regime and the requirements proposed in Articles 14 and 15 of the proposed AI Act will interrelate without mutual prejudice.
  2. Secondly, it is unclear whether data subjects have a right to explanation after a decision has been taken. This debate was triggered by the inclusion of the word “explanation” in Recital 71 of the GDPR, which is not expressly mentioned in Article 22(3) thereof. Without going too deep into the discussion, the following findings can be highlighted:
  • Legal scholars are still unable to provide a clear understanding of whether and what sort of information, both ex-ante and ex-post, controllers are required to deliver to data subjects. Concerning AI-based credit scoring in particular, how lawful is it for controllers to provide applicants with personalised recommendations computationally founded on suggestive interpretability – i.e., on understanding how certain input changes would override the outcome (see the counterfactual sketch after this list)?
  • It is likewise uncertain how the right of access – envisaged in Article 15(1)(h) of the GDPR – would not “adversely affect the (…) [controllers], including [their] trade secrets or intellectual property and, in particular, the copyright protecting the software” (see Recital 63 of the GDPR). From a legal perspective, which set of rules should take precedence in the event of a conflict between trade secrets and data subjects’ rights?
  • Also, assuming a right to explanation in the GDPR could potentially violate the principle of ne bis in idem, a fundamental right protected by several European laws and conventions. To put it as clearly as possible: on the one hand, if the functioning of an AI system fails to comply with the legal requirement of explainability, it could lead to non-compliance with the principle of transparency enshrined in Article 5(1)(a) of the GDPR; on the other, the same omission can simultaneously breach the rights provided for in Article 12 and the adequate safeguard envisaged in Article 22(3), both of the GDPR – the latter when, or if, interpreted in light of the wording of Recital 71. The ne bis in idem principle, also known as the double jeopardy standard, prohibits controllers from being ‘punished’ twice for the same offence – i.e., the non-existence of an explanation – as could happen to them under Article 83(5)(a) and (b) of the GDPR. Surprisingly, the fines of the GDPR thus appear to allow sanctions to be imposed on controllers in violation of their own procedural rights.
  3. Thirdly, Article 22 of the GDPR can disproportionately harm the freedom to conduct a business, as enshrined in Article 16 of the Charter of Fundamental Rights of the European Union (CFR). Since this regime is perceived as imposing overly ambiguous obligations, it becomes more difficult for controllers using AI-based systems to comply with its requirements. Controllers have yet to learn the rules of the game: they do not understand which measures they shall apply in domains like Explainable AI or Privacy-preserving ML and, above all, how to ensure their interoperability. As a result, small and medium-sized businesses may face harsh fines for such GDPR dead-ends.
  4. Lastly, AI poses a major challenge when it comes to identifying outcomes that may violate the fairness principle outlined in Article 5(1)(a) of the GDPR. Data subjects can expect to receive significantly unequal treatment due to non-intuitive and unverifiable inferences. As a result, there is an increased likelihood of indirect discrimination, which is hard to detect by comparison with non-protected groups. It is therefore crucial to statistically review applicable criteria, such as conditional demographic disparity (see the second sketch after this list), to find any relevant discriminatory effects in the field of data protection, thus identifying biases and striving to design systems that are as fair and transparent as possible. Failing to do so can lead, once again, to unworkable policies.
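To illustrate the kind of “suggestive interpretability” mentioned in point 2 above, the following is a minimal, hypothetical sketch of a counterfactual override in credit scoring. The feature names, data and thresholds are illustrative assumptions, not taken from any real system.

```python
# Minimal, hypothetical counterfactual sketch for AI-based credit scoring.
# Feature names, data and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: [annual_income_k_eur, debt_ratio]; label 1 = credit granted.
X = rng.uniform([10.0, 0.0], [100.0, 1.0], size=(500, 2))
y = (0.05 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0.0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[25.0, 0.80]])  # a (hypothetical) rejected applicant
if model.predict(applicant)[0] == 0:
    # Search for the smallest reduction in debt_ratio that overrides the refusal.
    for new_ratio in np.arange(0.80, -0.01, -0.05):
        if model.predict([[25.0, new_ratio]])[0] == 1:
            print(f"Counterfactual: reduce debt ratio from 0.80 to {new_ratio:.2f}")
            break
```

The smallest input change that flips the refusal is precisely the kind of personalised recommendation whose lawfulness under the GDPR remains unclear.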
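And as a rough illustration of the statistical review suggested in point 4, here is a minimal sketch of conditional demographic disparity on synthetic data; the column names and strata are assumptions chosen only for demonstration.

```python
# Minimal sketch of conditional demographic disparity (CDD) on synthetic data.
# Column names and strata are assumptions chosen only for demonstration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "group":    rng.choice(["A", "B"], n),       # protected attribute
    "stratum":  rng.choice(["low", "high"], n),  # legitimate factor, e.g. income band
    "rejected": rng.choice([0, 1], n),           # the model's decision
})

def demographic_disparity(d: pd.DataFrame, protected: str = "B") -> float:
    """Share of the protected group among the rejected minus among the accepted."""
    rejected, accepted = d[d.rejected == 1], d[d.rejected == 0]
    return (rejected.group == protected).mean() - (accepted.group == protected).mean()

# CDD averages the per-stratum disparity, weighted by stratum size, so that
# differences explained by the legitimate factor are conditioned away.
cdd = sum(len(s) / len(df) * demographic_disparity(s) for _, s in df.groupby("stratum"))
print(f"Conditional demographic disparity: {cdd:+.3f}")  # near 0 = no residual disparity
```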

In conclusion, even if the rationale for ‘reforming’ Directive 95/46/EC was to provide individuals with an adequate level of protection against ADM, Article 22 of the GDPR is obsolescent and ill-suited to counteract the dangers posed by AI. Once and for all, the new field of the Engineering Law of AI should be built on more technological and objective foundations. Additional work is still required “in parallel to provide normative guidelines and practical mechanisms for putting the new rights and responsibilities into practice”. Consequently, future versions of Title XII of the proposed AI Act, of 21 April 2021, should repeal the regime provided for in Article 22 of the GDPR.

In Portugal, legal experts, including myself, have proposed amending the Constitution of the Portuguese Republic to create an Ombudsman for Artificial Intelligence. This authority would be supported by ‘Algorithmic Multi-agents’ that use counterfactuals to perform supervision at an artificial level. If deployed, such a self-control mechanism could enable a human agent to automatically determine the circumstances under which controllers or processors should be subject to GDPR administrative fines, and to measure their amount.
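Purely as a hedged sketch of what such an ‘Algorithmic Multi-agent’ might compute, the toy functions below flag a missing explanation and bound the corresponding fine. The triggering logic is hypothetical; only the cap reflects Article 83(5) of the GDPR (up to EUR 20 million or 4% of total worldwide annual turnover, whichever is higher).

```python
# Toy sketch of what an 'Algorithmic Multi-agent' might compute. Assumptions:
# the triggering logic is hypothetical; the cap follows Article 83(5) GDPR.

def article_83_5_cap(annual_turnover_eur: float) -> float:
    """Upper bound of a fine under Article 83(5) GDPR: EUR 20 million or
    4% of total worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

def supervise(explanation_provided: bool, annual_turnover_eur: float):
    # Hypothetical counterfactual check: had the controller complied, an
    # explanation would exist; its absence flags a potential infringement.
    if not explanation_provided:
        return article_83_5_cap(annual_turnover_eur)
    return None

# A controller with EUR 1 billion turnover that gave no explanation:
print(supervise(False, 1_000_000_000.0))  # -> 40000000.0 (4% > EUR 20 million)
```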


Published under licence CC BY-NC-ND. 

This post is based on the author’s book, “Inteligência Artificial e Scoring no Crédito ao Consumo”, recently published by Grupo Almedina, Portugal’s major legal publisher.

This Blogpost was written by


  • Diogo Morgado Rebelo

    Diogo obtained his Master’s Degree in Law and Informatics at the University of Minho, School of Law, in October 2022. Since 2020, he has been a Non-PhD Integrated Researcher at the JusGov R&D Centre and a member of the “European AI Alliance – FUTURIUM Forum”. Diogo was also a Fellow Research Student at ISLab – ALGORITMI R&D Centre, School of Engineering, University of Minho, from March to November 2020.