DEFECTIVENESS OF AUTONOMOUS SYSTEMS AND DEVELOPMENT RISK DEFENCE

Modern artificial intelligence (AI) based systems are autonomous and, to an extent, unpredictable. They are meant to learn and to develop by themselves according to the adopted machine learning (ML) technique. Such systems, especially those developed with deep learning techniques, pose problems of opacity and complexity. Unlike conventional products, autonomous systems neither operate under the direct control of a human being nor merely perform automatic behaviours previously set by the producer. Due to these unique features and the rapid advances in AI and ML, it has been questioned whether the product liability regime, among other branches of liability law, is adequate for compensating damage caused by autonomous systems. This article takes a deeper look at this discussion.

An article by Cemre Polat

Product liability is to a great extent harmonised among the member states of the European Union. For the sake of simplicity, we will follow the regime provided by the Product Liability Directive 85/374/EEC (PLD). The PLD provides for strict product liability, which allows the victim to receive compensation from the producer if the former incurs damage due to a defective product. The safeguarded legal rights are life, health and items of property other than the defective product itself.

The Notion of Defectiveness

Among several others, one aspect of product liability that has been found challenging in this regard is the notion of defectiveness. A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including the presentation of the product; the use to which it could reasonably be expected that the product would be put; and the time when the product was put into circulation (Art. 6(1)). The PLD thus adopts the consumer expectations test to assess the defectiveness of a product, and this test is both objective and normative. It takes into account the expectations of the public at large, not those of the specific victim or user (objectivity). And instead of actual expectations, the legitimate safety expectations of the public at large (in other words, the safety expectations that the public at large is entitled to hold) are taken into consideration (normativity).

Autonomous systems do not perform previously set behaviours but instead learn by themselves from the dataset provided by the producer, other data collected by sensors from the environment, and previous experience. So even though producers have a degree of influence over the outputs of the system, they do not ultimately determine them; there can be harmful outputs that cannot be predicted even by the producers. Taking these unique features of autonomous systems into account, is it possible to prove that an autonomous system is defective in the sense of the PLD? If so, is it fair to hold producers liable given that these products are autonomous and unpredictable? I answer both questions affirmatively.

The first question has two aspects: establishing the level of safety that the public at large is entitled to expect, and proving the product’s failure to provide this level of safety. The PLD provides a strict liability regime, and the consumer expectations test is objective and normative. So if an autonomous system, say a self-driving car at Level 4 (high driving automation), cannot detect an object on the road despite all testing and control processes and causes an accident, it is defective, because the public at large is entitled to expect that such a car is safe and able to detect an object or person and avoid the accident, unless the producer has already warned against specific situations in which such a car must be supervised carefully. The product’s failure to achieve the required safety level can be considered self-evident in such a case (res ipsa loquitur); the victim then does not have to provide scientific evidence of how the defect occurred.

Yet it is not always possible to determine the legitimate safety expectations directly. It might be said that the public accepts that such a system is not 100% safe. But how will courts decide whether an autonomous system’s lack of safety in a given case is acceptable or not? Autonomous systems, while very safe, fast and accurate in many respects, may also be very brittle. They bear the risk of making mistakes that even a reckless human being would not make. Since such mistakes are by their nature unanticipated, it is difficult to have any safety expectation at all. In such cases, courts need supporting means to concretise the safety expectations test. Comparing the autonomous system with a reasonable human being or a reasonable alternative system does not yield promising results. I therefore suggest that the time, money and effort spent on the verification and validation of the autonomous system should be taken into account in assessing defectiveness in cases where consumer expectations regarding the safety of the autonomous system cannot be directly determined. Courts can also use their discretion with the help of expert evidence, product safety law and industry standards. Moreover, when an autonomous system is deployed for a task traditionally performed by professionals (such as medical software used for diagnosis or drug prescription), victims generally have a contractual claim to compensation, even if product liability falls short.

Given the complexity and opacity problems, proving the product’s failure to achieve the required level of safety is also challenging. Both the European Parliament and the Expert Group on Liability and New Technologies (Expert Group) have suggested reversing the burden of proof regarding defectiveness. This seems a viable solution, yet I doubt that it is vital. The European Court of Justice has concluded that certain and irrefutable evidence of a defect is not required in all cases; serious, specific and consistent evidence can also suffice. Besides, autonomous systems’ ability to record and store data will also ease the burden of proof. In this regard, importance should be given to the Expert Group’s suggestion of logging by design, which is “a duty on producers to equip technology with means of recording information about the operation of the technology (…)”.
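
To make the idea of logging by design more concrete, the sketch below shows, in Python, what a minimal operational log for an autonomous system could look like. It is only an illustration under my own assumptions; the event fields, file format and class name are hypothetical and are not prescribed by the Expert Group’s report or the PLD.

    # Illustrative sketch of "logging by design": an append-only record of the
    # operational events of an autonomous system, kept so that it can later be
    # used as evidence of how the system behaved. All names are hypothetical.
    import json
    import time

    class OperationLog:
        def __init__(self, path: str):
            self.path = path  # file that stores one JSON entry per line

        def record(self, event_type: str, details: dict) -> None:
            entry = {
                "timestamp": time.time(),  # when the event occurred
                "event": event_type,       # e.g. "obstacle_detected"
                "details": details,        # sensor readings, chosen action, etc.
            }
            # Appending (rather than overwriting) preserves the full history.
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")

    # Hypothetical use by a driving system:
    log = OperationLog("operation_log.jsonl")
    log.record("obstacle_detected", {"distance_m": 12.4, "object": "pedestrian"})
    log.record("action_taken", {"action": "emergency_brake"})

Such a record would help a victim reconstruct the sequence of events and thereby meet the evidentiary threshold of serious, specific and consistent evidence without reversing the burden of proof.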

I am of the view that it is fair to hold the producer liable even though (s)he does not have ultimate control over the output, because having control over a product is not the sole or most important reason for attributing liability. Producers are in a better position to manage product-related risks. Product liability does not protect only the actual user of the product but anyone who incurs damage due to a defective product. Innocent bystanders, for instance, cannot be expected to take all the necessary precautions to prevent damage or to insure themselves against it. Such precautions are, however, available to producers, at least to a greater extent than to the public at large. Besides, strict product liability is beneficial for producers as well, since it provides legal certainty and increases trust in the use of such products.

Development Risk Defence

The PLD does not provide a purely strict liability regime. Producers will not be liable in every case in which a defective autonomous system causes damage. They can be exempted from liability if they prove one of the defences set out in Article 7. The development risk defence (DRD), which is closely linked to the notion of defectiveness, is one of them: it allows a producer to escape liability if (s)he proves that the state of scientific and technical knowledge at the time when (s)he put the product into circulation was not such as to enable the existence of the defect to be discovered (Art. 7(e)).

The DRD is of special importance in the case of autonomous systems. This is because, unlike conventional products, autonomous systems are not complete at the time when they are put into circulation. Accordingly, the role of the producer does not end at that point. In particular, the testing phase needs to continue even while the autonomous system is in service, since it is not possible to formally prove the correctness of the system before the product is put into circulation. This is why continuous testing and updating are especially important.

In this vein, the Expert Group has suggested that the DRD should not apply as long as the producer is still in control of updates to, or upgrades on, the technology. The abolition of the DRD has also been suggested in the BEUC 2020 Position Paper.

Concluding remarks

This article defends the view that the product liability regime is largely sufficient to cope with damage caused by autonomous systems. The notion of defectiveness is still relevant and applicable in the field of autonomous systems, and the flexibility of the consumer expectations test allows the PLD regime to respond to technological novelties. However, the development risk defence, which is closely linked to the notion of defectiveness, may not be suitable in this field considering the producer’s changing role: producers are expected to control and update autonomous systems after putting them into circulation, given the incompleteness of such products. This, in turn, creates a clash between the expectations of the public at large and the producer’s development risk defence. In this regard, an amendment might be necessary, as suggested in several reports.

As a final point, it is worth noting that where one liability ground (e.g. product liability) fails to provide a remedy, another (e.g. insurance, fault liability) may come into play. That is why lawmakers should be cautious before enacting brand-new rules (such as a new strict liability rule or electronic personhood) or making extensive amendments to the existing liability rules.

Published under licence CC BY-NC-ND. 

This Blogpost was written by

Author

  • Cemre Polat

    Cemre Polat studied law at Koç University and subsequently gained an LL.M. degree from University College London. She is a Ph.D. candidate and research assistant in the Civil Law Department of Bilkent University and a registered lawyer at the Ankara Bar Association. She is currently conducting research for her Ph.D. thesis, “Autonomous Systems in the Law of Extra-Contractual Liability.” She is also a researcher in the “Artificial Intelligence and Law” project funded by the Turkish National Council of Scientific and Technological Research.