AI systems: A power factor in the data protection balance of rights and interests

This post argues that we should consider AI systems as a factor when assessing the balance between the rights and interests of a data subject and the parties processing personal data. That is because AI systems, when combined with personal data processing, are a source of data power. The AI Act can be useful for such an assessment because it provides a conceptual framework for evaluating the risks related to the use of AI systems, a framework that data protection law lacks.

An article by Jan Czarnocki

When combined, personal data and AI systems create data power: the ability to profile people and shape opinion formation, mixed with economic (market) power. Such power means an ability to affect people without regard to their will and autonomy. According to the GDPR, personal data processing must be lawful, fair, and transparent. In this short article, I argue that AI systems should be considered a power factor in the data protection context. Their existence should therefore count when assessing the balance of rights and interests between a data subject and a controller processing personal data. If the existence of AI systems as a power factor is weighed in the balance between a data subject and a controller, it becomes possible to introduce proper safeguards so that processing is fair and transparent. The AI Act might help make this balancing more substantive, since its definitions can be borrowed to fill conceptual gaps and to describe the risks related to AI systems.

In the context of data processing, a power factor is anything that broadens an agent's capacity to realize its goals. More data power means more knowledge about a data subject and more opportunities to act on that knowledge to affect them. Therefore, if personal data and privacy are to be protected effectively, the more sophisticated the AI capabilities a controller has, the less legitimate the processing is, and the more safeguards the controller should grant the data subject.

The problem is that the GDPR provides few tools to assess the balance between the rights and interests of a data subject and a controller; the balance is instead evaluated case by case. That is where the AI Act might be helpful. It can fill a conceptual gap by providing a framework to define more specific risks related to data power. Such aid might help assess the balance between rights and interests ex ante.

Rights and interests balance in data protection

Balancing rights and interests lies not only at the heart of data protection but also at the heart of the democratic constitutional order. A broad perspective is therefore justified whenever rights need to be balanced against each other. In the data protection context, the balance should be maintained at each stage of processing, regardless of the legal basis claimed for it. Authorization to process personal data through consent or contract does not exonerate a controller from maintaining the balance throughout the personal data processing lifecycle. The problem is that data protection law requires this balancing only implicitly.

Since there is no explicit requirement to maintain the balance throughout the personal data processing lifecycle, controllers usually miss this obligation. Instead, they assume that balancing is needed only when personal data is processed on the legitimate interest basis, where a balance of rights and interests is explicitly required. The balancing requirement is also often omitted because controllers usually process personal data on the basis of consent or contractual obligation and assume that the balance exists once consent is given, so the underlying balance receives no further consideration. Moreover, even when controllers rely on the legitimate interest basis, there is usually no transparent assessment of the balance: it is implicitly assumed and then briefly justified in the privacy policy (see, for example, the TikTok privacy policy).

But it is doubtful whether a balance of rights and interests exists when personal data is processed by quasi-monopolistic, data-power-based GAFAM platforms. Such platforms possess enormous data power, which means an informational power asymmetry between the controller and the data subject. The asymmetry lies in the ability to use AI systems to analyze data and infer knowledge from it, combined with an often dominant market position. It means that the controller has far more sophisticated technical and organizational capabilities, granting it knowledge about the data subject and the context in which processing occurs. Such situational awareness enables manipulation and nudging if used maliciously. Such use of data power leads not only to adverse economic choices for data subjects but also to discrimination and the erosion of privacy and personal autonomy, threatening human dignity.

Therefore, it is hardly correct to say that a balance exists between one of the GAFAM controllers and a data subject merely because he or she consents to, for example, tracking and profiling. One issue is the lack of alternatives and the monopolistic character of the platforms; the second is the lack of reasonable awareness of the long-term consequences of disclosing personal data to such entities.

Until now, such factors have been hard to take into account because of the open-ended nature of balancing, its context dependence, the lack of substantive criteria in the GDPR, and the lack of precisely defined risks. As a result, balancing in data protection is often subjective and argumentative, without clear normative standards for assessing the risks stemming from the technology. This situation might change with the introduction of the AI Act.

AI Act’s conceptual framework as an aid in balancing

The added value of the AI Act for data protection lies in defining AI systems and the fields of their application, and in describing the risks related to AI systems and their use. Moreover, the AI Act introduces the concepts of the intended use of AI systems and their reasonably foreseeable misuse. The AI Act also prohibits certain ways of using AI systems, such as exploiting vulnerabilities to materially distort a person's behavior in a harmful way (Art. 5(1)(a) and Art. 5(1)(b) of the draft AI Act) or practices related to social credit scoring (Art. 5(1)(c) of the draft AI Act). It further introduces transparency obligations for specific AI systems whose use is risky, such as emotion recognition or biometric classification systems (Art. 52 of the draft AI Act).

Such a conceptual framework helps describe the risks related to the use of AI systems and what may be harmful about them. These concepts therefore direct attention to what should be scrutinized when assessing the balance of rights and interests. For example, the AI Act classifies AI systems according to their field of application, which corresponds to an elevated risk for fundamental rights (Annex III to the draft AI Act). Therefore, if personal data is processed using AI systems falling into the high-risk category, it is less likely that a balance exists between the controller and the data subject at the outset. Even if AI systems do not fall into the high-risk or prohibited category, due regard should be given to their intended use and to their possible and reasonably foreseeable misuses. Similar concerns arise if a system processes biometric data and can be used for emotion recognition or biometric classification. The mere presence of these concepts in the AI Act signals a risk related to such uses and obliges a controller to consider the effects of such systems on the rights and interests of the data subject whenever personal data is processed using them.

How to justify AI Act use to protect personal data

Using definitions from the AI Act to assess the balance between rights and interests in the data protection context is justified by the complementary normative objectives of the AI Act and the GDPR: to limit risks to fundamental rights while allowing for technological development (development that necessarily depends on personal data processing). Moreover, the subject matter of the GDPR and the AI Act is, to some extent, functionally similar: both laws concern machines that process information automatically. Data protection has its source in the need to protect privacy and personal data against automated data processing and the accumulation of personal data in databases.

The AI Act is simply the next step in the evolution of this normative objective, although it focuses on the machines themselves rather than on the data. Such a focus is justified by the elevated risks posed by the machines, risks which are nonetheless inherently based on personal data processing. It is therefore relevant to consider the risks described in the AI Act when assessing the balance in the data protection context, even though there is no direct, horizontal connection between the two laws. After all, the AI Act and the GDPR are both risk-based legislation. At the same time, risk is assessed not solely on the basis of authoritative legal sources but on the factual situation, and for describing and subsuming that situation the AI Act provides a unified framework. All these arguments justify a broad use of each law's concepts when subsuming the given facts under the other, in this case when balancing rights and interests in personal data processing.

Therefore, an exemplary assessment of balance in personal data protection using AI Act concepts would run as follows: given that AI systems X are in use in processing context Y; that the AI Act classifies these systems as posing risk level Z and as intended for purpose R; that system X can potentially be misused in way Q with probability W; and that these systems can (or cannot) process biometric data, categorize emotions, or perform biometric classification, which heightens the risk to rights T; then a balance either cannot be achieved, or can be achieved provided that technical and organizational safeguards E are introduced, in addition to asking for the data subject's consent or relying on another suitable legal basis, such as necessity stemming from a contract or legitimate interest. A rough sketch of how such a template could be structured is given below.
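To make the template more tangible, here is a minimal, purely illustrative sketch in Python of how the variables X, Y, Z, R, Q, W, T and E could be captured as a structured record with a simple decision rule. All field names, risk categories, thresholds, and example values are my own assumptions for illustration; neither the GDPR nor the draft AI Act prescribes such a format.

```python
from dataclasses import dataclass, field
from typing import List

# Purely illustrative sketch: field names, risk categories, and the decision
# rule below are assumptions made for this example, not terms or thresholds
# defined by the GDPR or the draft AI Act.

@dataclass
class BalanceAssessment:
    ai_system: str                  # X: the AI system in use
    processing_context: str         # Y: the processing context
    risk_level: str                 # Z: AI Act risk class, e.g. "prohibited", "high", "limited", "minimal"
    intended_purpose: str           # R: intended use of the system
    foreseeable_misuse: str         # Q: reasonably foreseeable misuse
    misuse_probability: float       # W: estimated probability of that misuse (0.0-1.0)
    processes_biometric_data: bool  # can it categorize emotions / classify biometrically?
    rights_at_risk: List[str] = field(default_factory=list)  # T: rights put at risk
    safeguards: List[str] = field(default_factory=list)      # E: technical and organizational safeguards

    def balance_possible(self) -> bool:
        """Rough decision rule mirroring the article's template."""
        if self.risk_level == "prohibited":
            return False
        elevated_exposure = (
            self.risk_level == "high"
            or self.processes_biometric_data
            or self.misuse_probability > 0.5
        )
        # With elevated exposure, balance is plausible only if safeguards
        # accompany the legal basis (consent, contract, legitimate interest).
        return bool(self.safeguards) if elevated_exposure else True


# Example use (all values invented for illustration):
assessment = BalanceAssessment(
    ai_system="recommendation engine",
    processing_context="social media feed personalization",
    risk_level="high",
    intended_purpose="content ranking",
    foreseeable_misuse="manipulation of opinion formation",
    misuse_probability=0.4,
    processes_biometric_data=False,
    rights_at_risk=["privacy", "personal autonomy"],
    safeguards=["opt-out from profiling", "human oversight", "DPIA"],
)
print(assessment.balance_possible())  # True, given the listed safeguards
```

The point is not this particular rule but that each element of the template becomes an explicit, recordable assumption that can be documented and contested.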

Use of the template above would mean a much more substantive and detailed assessment, open to contestation and democratic deliberation. Such an assessment contrasts with the vague and broad privacy policies usually in use, which do not mention a balance of rights and do not sufficiently justify the legitimacy of processing (see, for example, the TikTok privacy policy). Such use of the AI Act would force a controller to acknowledge its data power, how that power can be misused, and how likely such misuse is. All this would be a desirable step forward in fundamental rights protection.

Published under licence CC BY-NC-ND. 

This Blogpost was written by Jan Czarnocki.

Jan Czarnocki is a Doctoral Researcher and Maria Skłodowska-Curie Fellow at the KU Leuven Centre for IT & IP Law. He currently researches privacy and data protection, focusing on biometric and health data in the IoT-AI context within the Privacy Matters (PriMa) ITN project. His research encompasses the intersections of law, philosophy, technology, and policy. Jan is a Non-Resident Fellow at the Stanford Law School Transatlantic Technology Law Forum and an Affiliated Fellow at the Jagiellonian University Private Law of Data Project, and has been a Visiting Researcher at the Julius Maximilians University of Würzburg Human-Computer Interaction Group. He holds a Master's degree in law from the University of Warsaw and an LL.M. degree in Comparative Law from the China University of Political Science and Law in Beijing.
