An EU Perspective on Liability and Artificial Intelligence

Artificial intelligence (AI) is a key driver of economic development. Although the use of AI benefits a variety of sectors, legal challenges remain. Against this background, several initiatives have been taken by institutions of the European Union (EU). A recurring topic is liability for damage caused by AI-systems. This blog evaluates some of the actions on liability and AI proposed at the supranational level. It does so by analysing documents issued by the European Commission as well as by the European Parliament.

An article by Dr. Jan De Bruyne and Orian Dheu

Orientations of the European Commission

Liability and AI is clearly an area of concern for the Commission, which published a Report on AI, new technologies and liability in November 2019. The report contains several recommendations, such as a default rule of strict liability for certain operators, as well as for producers for defects in products or digital content incorporating emerging digital technology. It also rejected the possibility of granting legal personhood to AI-systems.

The importance of liability is also highlighted in the recent White Paper on AI and its associated Report on safety and liability.

The White Paper proposes to adopt a regulatory framework for high-risk AI-systems. Such systems will have to comply with several requirements. Strangely, accountability is not mentioned as one of them. Although the White Paper does not extensively address the issue of liability and AI, it acknowledges that:

  • the legal framework could be improved to address the “uncertainty regarding the allocation of responsibilities” between different actors;
  • the features of AI-systems may challenge aspects of liability frameworks and could reduce their “effectiveness”. For instance, AI technologies’ characteristics would make it harder for victims to “trace the damage back to a person”, which can be required for fault-based liability schemes;
  • people who have been injured or suffered damage as the result of an AI-system should enjoy the same level of protection as those harmed by other technologies;
  • the Product Liability Directive (PLD) may need to be amended, while a targeted harmonisation of national liability rules is suggested as well.

The accompanying Report on safety and liability goes a bit further. After a brief assessment of the legal framework, it focuses on several points such as:

  • clarifying the scope of the PLD, inter alia by considering a revision of the notion of putting a product into circulation;
  • considering reversing or alleviating the burden of proof required by national rules “for damage caused by the operation of AI-systems, through an appropriate EU initiative”;
  • establishing a strict liability regime for AI-systems with a “specific risk profile” (e.g. those with a high risk) and coupling it with a mandatory insurance requirement;
  • examining whether to adapt the burden of proof regarding fault and causation for other AI-systems (e.g. those with a low risk). The Commission thus considers a differentiated liability approach depending on the level of risk posed by AI-systems.

Orientations of the European Parliament

The White Paper stresses the importance of and the need for a common approach at the EU level. Against this background, the JURI Committee of the European Parliament made available its Draft Report with Recommendations to the Commission on a Civil Liability Regime for AI. This draft set of harmonised rules could serve as the basis for a future legislative initiative by the Commission. As with the Parliament’s previous Recommendations on Civil Law Rules on Robotics of 2017, we are convinced that this draft will serve as a starting point for discussion on further developments in liability and AI.

The key takeaways of the draft report can be summarised as follows:

  • a twofold liability regime would be created depending upon the risk of the AI-system. High-risk systems are subject to a strict liability regime in which the deployer of the system is liable without fault (article 4.1). Low-risk systems remain subject to fault-based liability, again only targeting the deployer (article 8.1). The deployer is the person “who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation” (article 3(d));
  • only damage to life, health, physical integrity or property is covered by the proposed framework (article 2.1);
  • the Annex lists AI-systems that pose a high risk as well as critical sectors where they are being deployed (e.g. transportation). The Commission can amend the list, for instance by including new sectors;
  • a deployer of a high-risk AI-system cannot exonerate him/herself except in cases of force majeure (article 4.3). Liability insurance covering compensation is required for such deployers (article 4.4);
  • the liability of the deployer of high-risk AI-systems would be capped at a maximum of €10 million in the event of death or harm to a person’s health or physical integrity and €2 million for damage to property (article 5.1);
  • limitation periods are provided for high-risk systems depending upon the type of damage (article 7);
  • when it comes to low-risk systems, the deployer is subject to a fault-based liability regime. The deployer cannot escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI-system. However, the deployer can rebut liability by proving that the harm or damage was caused without his/her fault, relying on one of the following grounds: (a) the AI-system was activated without his/her knowledge and all reasonable and necessary measures to avoid such activation were taken, or (b) due diligence was observed by selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates. The deployer would also not be liable in cases of force majeure (article 8.2);
  • the deployer of a low-risk system will be liable for the payment of compensation if the damage results from a third party that interfered with the AI-system and is untraceable or impecunious (article 8.3), which seems to refer to cybersecurity incidents. Interestingly, the deployer may request the producer to collaborate in proving that he/she acted without fault (article 8.4). National provisions on compensation and limitation periods with regard to fault-based liability remain applicable (article 9);
  • there are also several rules on the apportionment of liability for damages caused by AI-systems as well as recourse actions. For instance, deployers can be held jointly and severally liable (article 11).

Although we welcome this initiative, the draft report also suffers from shortcomings that may require further attention.

First, it proposes a horizontal European liability framework for AI-systems based on their risk level. However, it does not seem to take into account existing (supra)national sectoral liability regimes. Each sector has its own specificities, which could warrant a more granular approach instead of a one-size-fits-all framework.

Second, the report continues to refer to national law for several aspects as well as for the interpretation of concepts such as ‘force majeure’, ‘reasonable measures’ or ‘due diligence’. This seems to undermine the aim of creating a harmonised framework.

Third, the draft report seems to omit questions in relation to the PLD and does not really tackle that regime’s implementation difficulties. Several amendments to the current PLD are necessary for claims to be effective. However, the draft report seems to suggest that the current regime is more or less adequate and effective for AI-systems.

Fourth, several problems remain with some provisions of the report itself. For instance, the caps may in some cases be low considering the potential severity and magnitude of damage that could result from some AI-systems and the number of parties involved in the operation of the same high-risk AI-system. Moreover, the Annex does not mention healthcare even though the White Paper identifies it as a sector where significant risks can occur. The notion of deployer is also rather extensive and at the same time unclear as to who is and is not covered. This could create legal uncertainty and an overexposure to liability.

Conclusion

This blog gave an overview of the proposed adjustments and innovations to supranational frameworks on liability and AI. Whereas the Commission especially identified (potential) shortcomings in the PLD, the Parliament’s draft report focused on the liability of the deployer of AI. These initiatives show that things are moving ahead at the European level. Nevertheless, some shortcomings remain and several key issues still need to be examined in the future, such as whether software can be seen as a ‘product’ and when exactly an AI-system will be qualified as ‘defective’.

Published under licence CC BY-NC-ND.

Authors

  • Dr. Jan De Bruyne

    Dr. Jan De Bruyne works as a senior academic researcher on legal and ethical aspects of AI at the Knowledge Centre for Data & Society. He is a lecturer in e-contracts and a postdoctoral researcher at CiTiP. He also works as a postdoctoral researcher on AI, liability and certification at the Ghent University Faculty of Law and Criminology.

  • Orian Dheu

    Orian Dheu works as a doctoral researcher on legal aspects of AI and autonomous systems at the KUL-Centre for IT & IP Law (CiTiP). He is part of the Marie Skłodowska-Curie Actions ETN project Safer Autonomous Systems (SAS). This research has received funding from the EU’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement n° 812.788 (MSCA-ETN-SAS). The publication reflects only the authors’ view, exempting the EU from any liability.