From Algorithmic Discrimination to Algorithmic Fairness

Artificial intelligence (AI) is expanding into ever more sectors of our society, from marketing to healthcare and transportation. A recent study even suggested that intelligent algorithms may outperform judges at making bail decisions in criminal proceedings. The increasing deployment of AI in these fields is premised on the assumption that algorithms make decisions more precisely, faster, and with less bias than humans do.

An article by Dr. Philipp Hacker, LL.M. (Yale)

As I explain in an article published in the Common Market Law Review, however, recent findings on algorithmic discrimination throw this conclusion into serious doubt. In a growing number of instances, AI has been found to replicate, and even exacerbate, errors and biases made by humans. For example, in 2015, Google’s image recognition algorithm labeled black people as gorillas – a mistake that humans would likely not have made. Such incidents take on a truly daunting dimension in self-driving vehicles, which may one day have to decide whether to run over an animal or a human being. Other highly consequential decisions concern incarceration. In the US, the COMPAS algorithm is now often used to help judges in criminal sentencing decisions by attributing a risk score to offenders. In contrast to the bail study mentioned at the outset, widely noted studies found that the COMPAS algorithm not only has low predictive value but also discriminates against black people.

Algorithmic Discrimination as a Technical and Legal Challenge

As AI technology enters ever more fields of social interaction, it is therefore imperative to understand that it is not naturally neutral or superior to human reasoning. Rather, in the construction of the algorithm, a number of things can go wrong that may lead, inter alia, to discrimination. More precisely, we have to distinguish between two different sources of algorithmic bias: the training data and proxy discrimination.

On the one hand, the data used to fine-tune the algorithmic model (so-called training data) may be biased. AI applications are typically trained on existing data sets and learn to identify factors in the data that enable them to make predictions concerning a desired characteristic. For example, AI algorithms are increasingly deployed in job selection decisions in order to screen candidates for their suitability. In one well-documented case concerning a UK medical school, the algorithm was trained on data of previously successful candidates. However, for social and historical reasons, successful candidates in the past happened to be mainly white men. The algorithm, therefore, “learned” that white men are particularly successful, and reproduced this pattern when applied to new candidates. In this way, biased training data perpetuates historical bias and outdated social norms – often without those using the algorithm noticing it.
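To make this mechanism concrete, the following minimal Python sketch is my own illustration with purely synthetic data, assuming the numpy and scikit-learn libraries; it is not code from the article or from the UK case, and all names and numbers are hypothetical. A classifier trained on historically biased hiring labels reproduces the historical preference when scoring new, equally skilled candidates.

```python
# Minimal, hypothetical sketch: a model trained on historically biased hiring labels.
# All data are synthetic; numpy and scikit-learn are assumed to be installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)   # protected attribute: 1 = historically favoured group
skill = rng.normal(0, 1, size=n)     # skill is distributed identically across both groups

# Historical hiring decisions: driven partly by skill, partly by group membership (the bias).
hired = (skill + 1.5 * group + rng.normal(0, 0.5, size=n) > 1.0).astype(int)

# Train on the biased labels, with group membership available as a feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score new candidates of identical (average) skill from both groups.
zeros, ones = np.zeros(1000), np.ones(1000)
rate_favoured = model.predict(np.column_stack([zeros, ones])).mean()
rate_other = model.predict(np.column_stack([zeros, zeros])).mean()
print(f"recommended-for-hire rate, favoured group:      {rate_favoured:.2f}")
print(f"recommended-for-hire rate, disadvantaged group: {rate_other:.2f}")
# Despite identical skill, the model has "learned" the historical preference and reproduces it.
```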

On the other hand, even if the entire training procedure was done correctly, it may happen that the algorithm picks up on factors that not only truly predict the desired characteristic, but that also correlate with membership in a protected class (proxy discrimination). For example, it may be that the distance from a candidate’s apartment to the workplace strongly inversely correlates with job performance, but that candidates from further away (e.g. from certain suburban areas) also predominantly belong to an ethnic minority.
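The same caveat applies to the following sketch of proxy discrimination: the data are invented and the point is purely structural. The protected attribute is never used as a feature, yet the model’s decisions still disadvantage the minority group because commute distance acts as a proxy for group membership.

```python
# Hypothetical sketch of proxy discrimination, again with purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

minority = rng.integers(0, 2, size=n)            # protected attribute, deliberately NOT a feature
distance = rng.normal(10 + 15 * minority, 5, n)  # minority candidates tend to live further away
# Assume job performance genuinely declines with commute distance.
performs_well = (rng.normal(0, 1, size=n) - 0.05 * distance > -1.0).astype(int)

# The model only ever sees the commute distance.
model = LogisticRegression().fit(distance.reshape(-1, 1), performs_well)
pred = model.predict(distance.reshape(-1, 1))

print("predicted 'suitable' rate, minority group:     ", round(pred[minority == 1].mean(), 2))
print("predicted 'suitable' rate, non-minority group: ", round(pred[minority == 0].mean(), 2))
# The protected attribute never enters the model, yet its decisions disadvantage the minority
# group because distance acts as a proxy. Removing the proxy would also remove genuine
# predictive signal -- the accuracy trade-off discussed in the next paragraph.
```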

Given these two sources of algorithmic bias, it is important to note that while eliminating biased training data generally increases predictive accuracy, eliminating proxy discrimination reduces it. It is therefore crucial to distinguish carefully between the two sources; in the complex and opaque environment of algorithmic decision making, however, this is often difficult, particularly for victims.

The Shortcomings of EU Anti-Discrimination Law

Can EU anti-discrimination law help to mitigate algorithmic discrimination? Clearly, it is not entirely ill-suited to tackle this problem: benevolently interpreted, it applies to most cases of algorithmic decision making, even though an AI system and not a human is calling the shots. In the parlance of anti-discrimination law, both biased training data and proxy discrimination will generally amount to indirect discrimination: one protected group receives unfavorable treatment because of the facially neutral algorithmic selection procedure.

However, EU anti-discrimination law suffers from two decisive shortcomings. First, treating people differently is not forbidden per se; rather, its lawfulness depends on the reasons one can advance for a discriminatory outcome. Those using algorithmic decision-making systems will often be able to justify their procedure easily by pointing to the statistical data and predictive accuracy encoded in the algorithm. It is then incumbent on the victim to “prove the model wrong” by establishing, for example, that the seemingly high predictive value of the AI agent stems from biased training data. Such a showing, however, is impossible without access to the data and the algorithm, which victims typically lack.

Second, because of this lack of access, those affected by a biased algorithmic decision will generally not even be in a position to know, much less prove, that they were discriminated against. In cases of personalized pricing or offers, for instance, people simply do not know what scores and results others receive, and therefore cannot even establish a prima facie case of discrimination. All in all, EU anti-discrimination law faces a pervasive enforcement problem when it comes to algorithmic bias.

Algorithmic Fairness and Data Protection Law

To remedy these problems, I argue that we need to combine the concepts of anti-discrimination law with the enforcement tools of data protection law, which have now been significantly strengthened by the General Data Protection Regulation (GDPR). First of all, algorithmic discrimination can be seen as a violation of the principle of fair data processing, triggering sanctions of up to 4% of global annual turnover – a legal consequence that should have a deterrent effect. More importantly, however, data protection authorities can use algorithmic audits to test algorithmic decision-making systems for bias (Art. 58(1)(b) GDPR). Similarly, data controllers (i.e., algorithmic decision makers) have to carry out data protection impact assessments (Art. 35 GDPR) before deploying a system if they believe bias could be an issue. In this way, the focus of regulation may shift from ex post liability, with its enforcement problems, to the ex ante prevention of algorithmic discrimination.
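What such an audit looks like in practice is not spelled out in the GDPR. Purely as an illustration, the following sketch shows one simple statistical check an auditor might run on logged decisions: comparing selection rates across groups, here against the well-known “four-fifths” rule of thumb. The decision log, the threshold, and the helper function are my own illustrative assumptions; the GDPR prescribes no particular metric.

```python
# Hypothetical sketch of a simple statistical check within a bias audit. The decision log
# and the 0.8 threshold (the "four-fifths" rule of thumb) are illustrative assumptions;
# the GDPR itself prescribes no particular metric.
import numpy as np

def selection_rate_ratio(decisions, protected):
    """Ratio of positive-decision rates: protected group vs. everyone else (1.0 = parity)."""
    decisions = np.asarray(decisions, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return decisions[protected].mean() / decisions[~protected].mean()

# Invented audit sample: logged decisions (1 = favourable) plus the protected attribute.
decisions = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]
protected = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

ratio = selection_rate_ratio(decisions, protected)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Possible indirect discrimination: flag the system for closer legal scrutiny.")
```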

Perhaps most importantly, beyond these doctrinal adjustments, computer science is now increasingly furnishing the tools to detect and mitigate algorithmic bias at the code level. As Emil Wiedemann and I show in another article, so-called algorithmic fairness procedures can be adapted to many different situations and used to implement core legal and social values, such as non-discrimination, directly at the level of the AI model; one simple example is sketched below. By embracing novel technological tools developed within AI research itself, an integrated vision of anti-discrimination and data protection law may therefore successfully mitigate algorithmic bias. While “privacy-by-design” has long been known (and has now been codified in Art. 25 GDPR), regulatory efforts should now seek to implement “equality-by-design”. This would contribute to an algorithmic society that is not only efficient but also fair. In the long run, only such fair and unbiased AI systems will rightfully earn the trust of their human counterparts.
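By way of a hypothetical illustration of what “equality-by-design” could mean at the code level (not the specific procedure from the article mentioned above), the following sketch applies one simple, well-known pre-processing technique, reweighing in the spirit of Kamiran and Calders: training samples are reweighted so that group membership and the positive label become statistically independent before the model is fitted, which markedly reduces the selection-rate gap between groups. All data and numbers are, again, synthetic.

```python
# Hypothetical sketch of one "equality-by-design" technique: reweighing the training data
# (in the spirit of Kamiran & Calders) so that group membership and the positive label
# become statistically independent before the model is fitted. This illustrates the general
# idea only, not the procedure developed in the article cited above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group, label):
    """Weight each sample by expected / observed frequency of its (group, label) cell."""
    w = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            if cell.mean() > 0:
                w[cell] = expected / cell.mean()
    return w

# Synthetic, historically biased hiring data (all numbers are invented).
rng = np.random.default_rng(2)
n = 5000
group = rng.integers(0, 2, size=n)                    # protected attribute
skill = rng.normal(0, 1, size=n)
label = (skill + 1.0 * group + rng.normal(0, 0.5, size=n) > 0.8).astype(int)
X = np.column_stack([skill, group])                   # group (or a proxy) is available as a feature

plain = LogisticRegression().fit(X, label)
reweighed = LogisticRegression().fit(X, label, sample_weight=reweighing_weights(group, label))

for name, model in [("plain", plain), ("reweighed", reweighed)]:
    pred = model.predict(X)
    gap = abs(pred[group == 1].mean() - pred[group == 0].mean())
    print(f"{name:9s} model: selection-rate gap between groups = {gap:.2f}")
# Reweighing markedly reduces (though does not necessarily eliminate) the gap.
```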

Note: This article is published under a Creative Commons license (CC BY-NC-ND).

This blog post was written by Philipp Hacker.

Dr. Philipp Hacker, LL.M. (Yale) is A.SK Fellow at the WZB Berlin Social Science Center, Postdoctoral Fellow at Humboldt University of Berlin and Associated Member at the Centre for Law, Economics and Society, UCL.
