Risk regulation and tort damage in the era of AI: status quo and gaps

The draft Artificial Intelligence Act (AI Act) outlines a European framework for regulating the risks associated with AI. New liability rules have also been proposed. However, there is a gap between risk regulation and tort damage – risks that concern infringements of fundamental rights may not be classified as legally relevant damage in tort. This article explores the distinction between risk, harm and damage.

An article by Shu Li

Risk, harm and damage

‘Risk’ is a cost that could be externalised. That an activity is risky, however, does not necessarily mean that it should be prohibited – its expected utility may be positive. Therefore, the extent of regulation should reflect the level of risk. ‘Harm’, conversely, is a disadvantage that an individual has already experienced. Harm that has objective economic value is called ‘material’. Personal injuries, for instance, result in medical bills and lost income. Likewise, harm to property can be quantified by reference to market prices. Harm can also be non-material. For instance, mental distress, anxiety, loss of enjoyment, loss of chance and the like are subjective and difficult to quantify.

Both material and non-material harm should be fully recoverable. To that end, policymakers and private actors create multiple compensation mechanisms, such as tort damages, contractual liability, insurance and no-fault compensation schemes. On the one hand, compensation may be voluntarily set by the parties to a contract or via insurance. On the other hand, in many cases, if no agreement can be reached ex ante, compensation is determined by law, such as through tort damages or a no-fault compensation scheme. In this latter situation, harm does not translate into tort damage automatically. Instead, the translation is more or less a process of legal recognition.

Whether particular harms constitute ‘relevant damage’ in tort has historically been a matter for national law. Each country determines for itself, in light of its economic, political and social circumstances, whether a specific harm is defined as and captured by tort damage. This bottom-up process implies that the scope of tort damage may vary significantly across states. For instance, national rules on the recoverability and quantification of non-material harm differ considerably. Moreover, pure economic loss remains unrecoverable in tort in many states. Understandably, harmonising tort damage across the EU would be tremendously difficult.

Risk regulation and damages liability for AI

AI systems can pose risks to safety and cybersecurity as well as to fundamental rights such as those to privacy and non-discrimination. When those risks materialise, they may cause harm to health, both physical and mental, as well as impede access to social services. The AI Act proposes a regulatory framework with rules that are proportionate to the risks that each AI system generates. Certain AI applications, such as real-time surveillance, social scoring, and subliminal techniques, will be prohibited outright, whatever their usefulness (Art 5). In comparison, high-risk AI systems that may endanger human safety or fundamental rights will be subject to comprehensive conformity assessments.

What kinds of AI-related harm can be recognised in tort? Recently, the Commission proposed a new Product Liability Directive (PLD) and an Artificial Intelligence Liability Directive (AILD) to shed light on this issue. From a tort perspective, the new PLD ensures that material harm caused by a ‘defective product’ is compensable in the same way across the EU. The new PLD expands the notion of compensable damage to novel forms of material loss, such as loss of data and medically recognised harm to psychological health (Art 4(6)). However, non-material harm remains a matter for national rules (Recital 18). It is anticipated that other regimes, such as the GDPR and anti-discrimination rules, will address non-material harm resulting from risks such as breaches of privacy and discrimination.

In the new PLD regime, a product is ‘defective’ only if it lacks the safety that the public at large is entitled to expect (Art 6(1)). A product is not ‘defective’ merely because, for instance, its input data are discriminatory. For example, if an AI recruitment system is found to use a discriminatory dataset, thereby excluding particular groups of individuals from employment or limiting their access to essential social services, the victims cannot base their claim on the PLD. The outcome is the same where the system causes victims to lose opportunities or to suffer mental distress, forms of harm that fall outside the scope of tort damage as defined by the PLD.

The AILD is intended to harmonise procedural rules in a manner that makes it easier for victims to discharge the burden of proof in disputes about fault-based liability. In particular, the AILD requires Member States to ensure that (potential) claimants can request national courts to order providers or users of AI systems to disclose relevant evidence (Art 3). The AILD also establishes a presumption of a causal link between fault and the output of an AI system, provided that certain conditions are met (Art 4). The AILD should therefore make it easier to claim for non-material harm, but it will not reshape or harmonise national concepts of tort damage.

The gap between risk regulation and tort damages

The foregoing discussion indicates that there may be a gap between the regulation of the risks of AI and the legal framework that governs recovery for AI-related harm. Specifically, the EU has established a risk regulation framework for AI, but the remedies that are available when risks do materialise remain unharmonised. This blogpost does not argue that harmonising tort damage would be the solution – indeed, that would be unwise at the current stage. Instead, it merely points out that this gap can disadvantage many prospective claimants, especially those whose fundamental rights are at stake and who may suffer non-material harm.

Firstly, certain vulnerable groups, such as those likely to suffer discrimination in the job market, may struggle to recover when they lose opportunities or suffer mental distress. For example, according to Directive 2000/78/EC, the primary remedy for workplace discrimination ‘may comprise the payment of compensation to the victim’. The German General Equal Treatment Act stipulates that loss of employment opportunities per se is not a type of recoverable harm. Instead, the person who suffers unfair treatment may only claim compensation of no more than three months’ wages (Section 15).

Secondly, some forms of non-material harm are not recognised as relevant by any laws. Many AI harms can only be identified a long time after they occur. For example, when a social media platform uses AI to manipulate the behaviour of users, the resultant loss of autonomy, taken in isolation, is unlikely to be regarded as legally relevant. Moreover, that loss of autonomy may not even be detected by the user before effective regulatory measures are imposed. Worse still, it can generate long-term detriments, such as increased anti-social behaviour and hate speech. Although reshaping personal characteristics in this way can have a huge impact on a person’s wellbeing, it cannot easily be defined as harm, let alone recognised as tort damage eligible for compensation.

Finally, although the new PLD and the AILD are intended to lighten the probative burden that those who suffer non-material harm must shoulder, they are likely to be ineffective in certain scenarios. AI decision-making is notoriously opaque. Accordingly, the use of AI can make discrimination more difficult to detect – proxy discrimination, in particular, may go unnoticed. Although the AILD enables ‘potential claimants’ to demand disclosure, in practice, it is unclear whether those claimants can ‘present facts and evidence sufficient to support the plausibility of a claim for damages’ (Art 3(1) para 2).

In sum, current tort law might be an inadequate means of remedying the non-material harm that results from the use of AI, which has implications for fundamental rights. There is much that the EU and its Member States can do to adapt tort rules to AI. If this proves too taxing, it is critical to establish other compensation schemes for victims who suffer non-material harm.

Published under licence CC BY-NC-ND. 

This Blogpost was written by

Author

  • Shu Li

    Shu Li is Postdoctoral Researcher at Legal Tech Lab, University of Helsinki. He conducts research on ‘How to regulate AI-related damages liability in the EU?’ funded by the Academy of Finland. Shu Li is also an associated fellow at Jean Monnet Centre of Digital Governance, Erasmus University Rotterdam.
