Operability of the Consent Rule in Artificial Intelligence

Artificial Intelligence (AI) technologies offer many benefits to individuals, professionals, the public and private sectors, and society in general, thanks to their timely and efficient problem-solving capabilities requiring little or no manual programming by human developers. Their complex and autonomous nature, which enables the processing of large amounts of (often personal) data, however, raises concerns over identifying the persons responsible for deploying and operating such technologies. This article discusses whether data controllers and processors engaging with AI-based digital services can fulfill their obligations regarding the consent rule, one of the central legal bases for processing personal data under the General Data Protection Regulation (GDPR).

An article by Gizem Gültekin Várkonyi

The GDPR assigns several responsibilities and obligations to data controllers who process the personal data of individuals (so-called data subjects). Personal data can be considered the "lifeblood" of AI systems. Any individual or legal entity, such as a company or a public institution, may fall under the category of a data controller. For example, tech giants such as Facebook, Google, Amazon, Alibaba, and IBM act as data controllers when they process personal data in order to deliver their services (such as the video recommendation service of Google's YouTube or the predictive diagnostic services offered by IBM's Watson).

The GDPR further lists several legal bases on which data controllers may lawfully process data subjects' data; one of them is obtaining the individual's consent. In practice, consent is one of the legal bases most frequently relied upon by tech companies to exploit individuals' data for various business purposes. Where a service provider deploys AI-based solutions for personalized services, fulfilling the requirements for valid consent may be at stake due to three sets of aspects related to AI and the GDPR: technical, practical, and legal.

Technical aspects

For consent to be valid, the purposes of data processing must be communicated to the data subject before or at the time the processing activity takes place. The statement of purposes, alongside the other information to be provided, sets out the grounds for data processing and thus helps the individual self-determine whether to share data with the data controller.

In a virtual and connected world with ever more potent big-data analysis capacities, AI engineers have great opportunities to turn unpredictable personal behavior into predictable behavior, thanks to machine learning techniques that can easily harvest the vast amounts of data available across the Internet. An AI algorithm, the "brain" of an AI system, can be fed constantly with new data and thus produce outputs that are almost impossible to foresee at the beginning of the processing activity. Even though the basic parameters of the system are decided by the data engineers, such an algorithm is by design tasked with discovering unknown patterns in the data on an ongoing basis. A responsible data controller would, of course, not attempt to make use of any new output (or repurpose the data at hand) without the individuals' consent. However, studies show that the initially indicated purposes often do not match actual practices, and, as the Cambridge Analytica case well demonstrated, data controllers do not always explicitly communicate new purposes to the individuals.
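To make this concrete, consider the following minimal sketch (in Python, using entirely hypothetical user-behavior features and arbitrary parameters): a standard clustering routine derives user segments that the developer never specified in advance, so their meaning could not have been described in the original purposes statement.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user behavioral features a service might log
# (e.g., daily hours of use, share of late-night sessions, content mix).
rng = np.random.default_rng(0)
features = rng.random((500, 3))  # 500 users, 3 illustrative features

# The engineer only fixes the clustering parameters; the groupings
# themselves, i.e. the "patterns", emerge from the data and were never
# spelled out in any purposes statement shown to the users.
model = KMeans(n_clusters=4, n_init=10, random_state=0)
segments = model.fit_predict(features)

# Each user now carries a behavioral segment label, a new piece of
# derived information that may reveal something sensitive about them.
print(segments[:10])
```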

Furthermore, some machine learning techniques (e.g., reinforcement learning) enable individuals to actively contribute to the learning process of AI systems, so that the outputs can be personalized efficiently. Such techniques are often implemented by AI engineers designing social robots or socially interactive AI systems that deliver personal healthcare or education, or even perform general household tasks. The technique helps the AI algorithm understand the personality, including the emotional state, of the individual: their feedback counts as a positive or negative signal, from which the system learns which outputs to generate and which to avoid. Such outputs, derived from individuals' sensitive data, may once again not be specifically known to the developers at the beginning of the system's development. The question whether the feedback individuals give to an AI system counts as personal data may be answered differently in each case (e.g., depending on whether effective anonymization techniques are in place). Nevertheless, the outcomes of such systems that reveal sensitive information about individuals must be communicated to them in advance, which is a hard task to fulfill. This is because the outcomes, like the input data, are themselves personal data, since they reveal new information about the person subjected to the algorithmic evaluation.
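As a rough illustration, the sketch below (plain Python, with invented output names and a simple bandit-style update, not any particular product's implementation) shows how user feedback can act as a positive or negative reward signal. Note that the learned preference scores are themselves new data about the person, produced only through the interaction.

```python
import random
from collections import defaultdict

# Candidate outputs a hypothetical social assistant could produce.
OUTPUTS = ["calm_music", "upbeat_music", "news_briefing"]

scores = defaultdict(float)  # learned preference per output
counts = defaultdict(int)    # how often each output was rated

def choose(epsilon=0.2):
    """Mostly pick the output with the best learned score,
    occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(OUTPUTS)
    return max(OUTPUTS, key=lambda o: scores[o])

def give_feedback(output, reward):
    """reward is +1 (user approves) or -1 (user rejects);
    incremental-average update of the preference score."""
    counts[output] += 1
    scores[output] += (reward - scores[output]) / counts[output]

out = choose()
give_feedback(out, +1)  # e.g., the user smiled or pressed "like"

# After enough interactions, `scores` encodes the user's tastes and
# moods: derived personal data that never appeared among the inputs
# originally collected, and that nobody could have listed up front.
```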

Practical aspects

Data controllers should fulfill their information obligations, pursuant mainly to Articles 12 and 13 GDPR, by presenting information to individuals on how their personal data will be processed and by indicating the limits of the processing purposes in privacy notices. The so-called consent boxes or privacy statements that pop up on our laptops and smartphones are meant to serve this purpose. However, it is still argued that people do not take the time to read or understand the information presented; rather, they prefer to "click and go" in order to use the services immediately. The situation has not changed much even after the GDPR entered into force. A study conducted in June 2019 shows that 47% of respondents only partially read, and 40% never read, privacy policies, the reasons given being that they find them too long, unclear, or difficult to understand. Moreover, data controllers prefer presenting such standard statements even for complex technologies; the case of cookies shows that an average user may not always be in a position to understand a specific technology well enough to make a free decision, and therefore to give consent freely. AI is one of those complex technologies that cannot easily and entirely be understood by every individual on the basis of a standard statement, especially if the data controller does not assess whether the individual belongs to a vulnerable group, is a technology expert, or is not interested in technical details at all. The only divergent practice to be observed concerns situations where the individual is a child, the main reason probably being that data controllers are obliged to take this into account under Art. 8 GDPR (child's consent).

Moreover, data controllers carrying out profiling and algorithmic decision-making (ADM) based on personal data should, where such processing is based on the data subject's consent pursuant to Article 22(2)(c) GDPR, deliver meaningful information about these activities. Since the concept of meaningful information is clear neither in the GDPR nor in the guidelines prepared by the relevant authorities (Article 29 Working Party, EDPS, national supervisory authorities, etc.), the ordinary practices of data controllers will continue until a first official interpretation is delivered or a court interprets these legal clauses. Unless the information is personalized to the individual's level of understanding, so that it is actually read and understood, consent might not count as valid.

Legal aspects

Referring back to the practices related to the child's consent, it could be argued that rules explicitly stated in legislation (such as Art. 8 GDPR) are applied more carefully than the blanket rules referred to in Articles 12 and 13 GDPR, or in Article 22 GDPR addressing new technologies. The GDPR is often said to be technology-neutral legislation, a reading supported by Recital 15 GDPR. Hence, many of the current questions related to data protection in AI systems may simply have been left open at the time the GDPR was drafted. Indeed, the EU had no AI definition back then; a comprehensive AI definition was delivered only after the GDPR entered into force, by the High-Level Expert Group on AI set up by the European Commission in June 2018. The technical and practical issues discussed in this article could be taken into account and turned into explicit rules, whether through interpretation of the GDPR or within a new AI regulation, if, at a minimum, the terms used in the GDPR could be explained in the AI context (such as "intelligible form", "meaningful information", and the "logic involved" in automated decision-making), since these terms are more or less defined in technical literature but lack a concrete definition in data protection terms. In addition, rules that pin down exactly how to prepare understandable information for each individual and how to prove that the information was read and understood, thereby better operationalizing the philosophy of valid consent, could be inserted into the EDPS and EDPB guidelines and, moreover, into the planned new AI regulation.

Regulation of AI is under discussion in the EU, and the adequacy of information and transparency is part of that discussion. Taking a broad perspective on all these questions, together with considering emerging solutions such as ethical AI design, free public awareness-raising programs, and tools enabling individuals to manage their personal data, could reinforce a better application of the GDPR to AI technologies.

Published under licence CC BY-NC-ND.

This Blogpost was written by

Author

  • Gizem Gültekin Várkonyi

    Gizem Gültekin Várkonyi is a Research Assistant at the University of Szeged, Faculty of Law and Political Sciences. She completed her doctoral research on the applicability of the GDPR to personal household social robots.
