Right to explanation – What does the GDPR leave for Art. 86 AI Act?

The right to explanation of automated decisions has been intensively discussed in the context of the GDPR. The adoption of the AI Act has to some extent reignited the debate, raising the question of how the right to explanation under Art. 86 AI Act relates to the right to information and the right of access under the GDPR. According to the recent CJEU decision in case C-203/22 (Dun & Bradstreet Austria), there is a right to an explanation of specific automated decisions under the GDPR. This raises pressing questions about the content of such a right and its interaction with the AI Act, which are examined below.

An Article by Tristan Radtke

The right to explanation refers to an individual right to an explanation of an automated decision (e.g., an AI-made decision) that substantially affects a natural person. This explanation does not necessarily have to be generated by the AI or decision-making system that makes the decision (see the discussion on the interpretability of decisions).

Policy reasoning: The approaches of the AI Act and the GDPR to automated decision-making

First, it is necessary to identify the difference in objectives and policy rationale between the AI Act and the GDPR with respect to automated decision-making (including AI-based decision-making).

The AI Act aims for “human-centric and trustworthy” AI and seeks to protect health, safety, and fundamental rights (Art. 1(1) AI Act). To achieve this goal, the AI Act addresses the design of AI decision-making through obligations for technical and organizational measures (e.g., data governance, documentation requirements, and human oversight obligations). Thus, the AI Act accepts AI-based decision-making in principle but treats its influence on decisions as a reason to regulate the process (cf. Art. 6(3) AI Act). This approach arguably addresses the risk that the autonomy of AI systems may overstep the boundaries drawn by fundamental rights and other important interests.

The right of access pursuant to Art. 15 GDPR, as well as the information obligations under Art. 13, 14 GDPR, require the controller, in the case of decision-making based on the automated processing of personal data, to provide “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. This provision must be read in conjunction with Art. 22 GDPR. Art. 22 GDPR provides for a general prohibition of automated decision-making, subject to some exceptions. However, in most cases, data subjects are entitled to a new human-made decision replacing the AI-made decision (cf. Art. 22(3) GDPR). In this respect, the GDPR primarily addresses concerns about human dignity, including being subject to a decision that cannot be fully understood and effectively challenged. As a complement, the right under Art. 15 GDPR aims at reducing those concerns, but ultimately the GDPR leaves the way open for a new human-made decision.

Art. 86 AI Act and Art. 15 GDPR

This is where Art. 86 AI Act comes into play. According to its title, Art. 86 AI Act provides for a right to explanation for the automated decisions described above. This right includes “to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken”. However, Art. 86(3) AI Act gives priority to other Union law such as the GDPR to the extent that the GDPR already provides for such a right.

The scopes of the two provisions do not deviate significantly; the requirement of a decision based on automated processing is met in virtually every case of AI decision-making. This is because, in principle, every decision that affects a specific person, including the steps involved in reaching it, can be traced back to that specific person and is therefore based on the processing of personal data.

An interpretation of Art. 15 GDPR, taking into account the meaning and purpose as well as the structure of the GDPR, leads to the conclusion that a comprehensive explanation of AI-based decisions is already required in accordance with Art. 15(1)(h) GDPR. In this respect, the priority clause in Art. 86(3) AI Act affords limited scope for the right to an explanation under the AI Act.

It has been argued that the use of the same wording in Art. 13-15 GDPR indicates that the information provided beforehand (Art. 13(2)(f), 14(2)(g) GDPR) is identical to the information provided after a decision and upon request (Art. 15(1)(h) GDPR). Thus, abstract explanations that only consider the decision mechanism, but not the specific decision regarding the data subject, would suffice.

However, the CJEU acknowledged in Österreichische Post I “that, unlike Articles 13 and 14 of the GDPR, […], Article 15 of the GDPR lays down a genuine right of access for the data subject” and thus requires information to be more specific than under Art. 13, 14 GDPR (margin 36). This is supported by the systematic distinction between Art. 13, 14 GDPR, which concern general information, and Art. 15 GDPR, which concerns specific information provided upon request in an individual case. While some categories of information under Art. 13, 14 GDPR overlap with those under Art. 15 GDPR, an access request would add no value if the data subject were to receive not only the same abstract categories of data recipients but also the same abstract decision explanations. At the same time, the controller is in a better position to provide information on specific decisions after they have been taken.

From a policy perspective, the risk of an opaque decision reducing the human to the object of a machine would best be addressed by providing the data subject with specific information (e.g., potentially relevant criteria for the individual decision or counterfactual explanations). Only in a second step does it have to be examined whether factual reasons make a comprehensible, plausible explanation impossible. This potential impossibility has been assessed with regard to recipients in the Österreichische Post I judgment of the CJEU.

Judgement of the Court of Justice of the European Union in Case C-203/22

This argument for an actual right to explanation under the GDPR has only recently been confirmed by the CJEU. In the case C‑203/22, a data subject was refused a mobile phone subscription on the basis of an automated assessment of insufficient financial creditworthiness.

The CJEU discusses the wording and the function of Art. 15(1)(h) GDPR in the context of a right to explanation. While the wording in the different language versions suggests a reference either to the “functionality” or to the “relevance” of the information, the transparency requirements under Art. 12 GDPR and the other rights (e.g., contesting the decision), require the information provided under Art. 15(1)(h) GDPR to be “relevant” and specific to the decision at hand. Interestingly, the Court’s wording also indicates that Art. 22(3) GDPR provides individual rights to data subjects (e.g. the right to contest a decision), and not just an obligation for the controller to implement such mechanisms.

With regard to the content of such information, the CJEU clarifies that disclosure of the algorithm underlying a decision is neither necessary nor sufficiently transparent in the light of Art. 15(1)(h) GDPR. To the extent that knowledge of the algorithms is necessary to assess the information provided by the controller, the CJEU requires national courts to balance the interests when assessing the disclosure of the algorithms. This requires national courts to consider limited disclosure and safeguards (e.g. in-camera mechanisms where the other party does not have access to the algorithm).

However, the CJEU sees so-called counterfactual explanations as an adequate way to meet the challenges posed by the black box phenomenon. In other words: Art. 15(1)(h) GDPR may require the controller to “inform the data subject of the extent to which a variation in the personal data taken into account would have led to a different result” (margin 62; so-called counterfactuals). This statement has to be seen in the context of the specific case in which a – possibly incorrect – high score had led to a negative decision. However, counterfactuals may also become suitable tools for controllers to fulfil their information obligations beyond this specific case.
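The idea behind such counterfactuals can be illustrated with a deliberately simplified sketch. All names, thresholds, and figures below are hypothetical and not taken from the judgment; real credit-scoring systems are far more complex, and the sketch only shows the logical form of the statement “had this input been different, the result would have been different”:

```python
# Toy illustration of a counterfactual explanation for a threshold-based
# automated decision. Purely hypothetical values for demonstration.

def decision(monthly_income: float, threshold: float = 3000.0) -> bool:
    """Hypothetical automated decision: approve if income meets the threshold."""
    return monthly_income >= threshold

def counterfactual_income(monthly_income: float,
                          threshold: float = 3000.0):
    """Smallest increase in income that would have flipped a refusal.

    Returns None if the decision was already positive, otherwise the
    amount by which the income would have had to be higher.
    """
    if decision(monthly_income, threshold):
        return None  # nothing to explain: the outcome was favourable
    return threshold - monthly_income

# An applicant with an income of 2600 is refused; the counterfactual
# states that an income 400 higher would have led to approval.
print(counterfactual_income(2600.0))  # 400.0
print(counterfactual_income(3500.0))  # None (decision already positive)
```

The point of the sketch is that a counterfactual explains the specific decision in terms of the data subject’s own data, without disclosing the algorithm as such.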

Implications for Art. 86 AI Act

The judgement leaves Art. 15(1)(h) GDPR with a broad scope for the “procedure and the main elements of the decision taken” within the meaning of Art. 86(1) AI Act (i.e., a right to explanation). Accordingly, there is less room for Art. 86(1) AI Act, but the provision could play a particularly important role through the “explanations of the role of the AI system in the decision-making procedure”. Information on the interaction between different AI systems may enable affected persons to exercise rights under other legal acts (e.g., product liability) successfully (see Recital 171 AI Act).

Summary

The CJEU puts an end to the discussion about a right to explanation under the GDPR for the time being: there is a right to explanation under the GDPR, which neither can nor has to be fulfilled by simply disclosing the decision algorithm. Rather, other means, such as counterfactual explanations or even the provision of limited access to the decision-making tool, can serve as effective means to illuminate AI black boxes. While this leaves less room for the scope of Art. 86(1) AI Act, the provision remains important for the role of the AI system in the decision-making process. This interplay may prove to be a promising way to address transparency issues in AI decision-making.

However, other open questions regarding automated decisions remain to be answered in the future. These include the nature and requirements of an individual’s right to contest an automated decision under the GDPR.

A detailed German version of this blog post has been published as Das Recht auf Erklärung unter der DSGVO und der KI-VO, in: Dregelies/Henke/Kumkar (Hrsg.), Artificial Intelligence: Rechtsfragen und Regulierung künstlicher Intelligenz im Europäischen Binnenmarkt, 9. Tagung GRUR Junge Wissenschaft, Nomos. An English-language analysis of the policy reasoning for automated decision-making and human oversight under the GDPR and the AI Act is forthcoming as a contribution in the Trier Studies for Digital Law Conferences Proceedings.

Published under licence CC BY-NC-ND. 

  • Tristan Radtke, LL.M. (NYU), is a postdoc at Technical University of Munich, School of Social Sciences and Technology, Department of Governance. His research focuses on the implications of the digital transformation, including AI, for civil, data and intellectual property law.
