Product Liability in the Age of AI – New Technologies Are Calling for New Concepts

On September 28, 2022, the European Commission published a Proposal for a new Directive on liability for defective products (PLD Proposal) and a Proposal for a Directive on AI liability. While discussions on the latter have been temporarily halted, awaiting progress in the trilogue negotiations on the closely related AI Act, the legislative process on the PLD Proposal has surged ahead. On June 14, 2023, the Council adopted its mandate for negotiations with the European Parliament. On October 12, 2023, IMCO and JURI, acting as joint committees, presented a Report, which the European Parliament adopted at its plenary session on October 16, 2023 as its negotiating position for the upcoming trilogue.

An article by Ann-Kristin Mayrhofer

Comprehensive analyses of the two Proposals of September 28, 2022 have been conducted, for example, by Orian Dheu, Jan De Bruyne and Charlotte Ducuing, by Philipp Hacker and by Gerhard Wagner. This article focuses on the overarching concepts of product liability. It will show that the currently favoured defect-based approach has some significant limitations when it comes to emerging digital technologies such as AI (I.). Therefore, it seems reasonable to complement this system with “truly” strict liability for particularly dangerous products (II.). In addition, a “supplementary liability” of the commercial user should be introduced (III.). A broader and more in-depth analysis of this issue can be found in the author’s dissertation on non-contractual liability for autonomous systems.

I. The currently favoured defect-based concept of product liability and its limitations

Under the 1985 PLD, manufacturers are not liable for all damage caused by their products, but only for damage caused by a “defect” in a product (Art. 1, Art. 6 of the PLD). The PLD Proposal adheres to this concept (Art. 1, Art. 6 of the PLD Proposal). The Explanatory Memorandum to the Commission’s Proposal characterizes the regime as a “no-fault (strict) liability regime”. However, as Philipp Hacker has pointed out, liability for defective products is not truly strict. Truly strict liability, by contrast, is triggered by the mere fact that a particular risk has materialised in damage. German law, for example, imposes such strict liability on the holders of motor vehicles (Sec. 7 of the German Road Traffic Act). Unlike fault-based liability, truly strict liability does not require the defendant to have done anything wrong. Courts do not need to assess whether a safety standard has been breached. Defect-based liability, however, does require such an assessment. In this respect, defect-based liability and fault-based liability share common features. In general, a product is not defective merely because it is dangerous, but only if it was wrong to put it into circulation without additional safety measures. In order to assess the defectiveness of a product, courts must determine the safety standards “which the public at large is entitled to expect” (Art. 6 of the PLD Proposal) and decide whether the product fails to meet these standards.

One difficulty with this defect-based concept is that assessing defectiveness becomes exceedingly complex when it comes to emerging digital technologies such as AI. The PLD Proposal contains more guidelines than the existing PLD (cf. Art. 6), but they are mainly declaratory and rather vague. In general, whether a product is defective can be determined in several ways: Firstly, courts may consider product safety regulations and technical standards (cf. Art. 6(1)(f) of the PLD Proposal). However, for AI-based products, such rules are still rare. Technologies are developing rapidly, making it difficult for standard-setting bodies to keep up. Secondly, courts can compare a product’s risks with other comparable risks. An autonomous car, for example, may be considered defective if it causes more accidents on average than human drivers or than autonomous cars produced by competing manufacturers. However, it may not always be possible to identify such comparable risks, for the following reasons: For one thing, AI-based products tend to work very differently from humans. While humans often act intuitively, AI systems usually make decisions by computing probabilities. Both approaches have their strengths and weaknesses.

For example, an autonomous car may cause fewer accidents than human drivers in regular traffic scenarios. Nevertheless, it might encounter challenges when presented with novel situations where human intuition could offer an advantage over computational power. Moreover, individual AI systems show considerable variability and can therefore be difficult to compare – just think of the numerous new Large Language Models, each with its own set of advantages and disadvantages. In many cases, courts can only assess defectiveness by weighing the costs and benefits of additional safety measures. However, when it comes to AI systems, this third method is equally challenging: The behaviour of such systems depends on many factors. Whether a particular measure, such as using different training data for a Machine Learning (ML) algorithm, would have provided a safety benefit that outweighed its costs is difficult to determine. The difficulties are further compounded when the complexity of the underlying technology makes it impossible for humans to explain the system’s behaviour (“Black Box”), or when the product is interconnected with other systems.

Furthermore, even when courts can distinguish between defective and non-defective products, the defect-based approach may lead to unsatisfactory results. For example, an autonomous vehicle may be considered non-defective because it complies with all written standards, causes fewer accidents than human drivers and competing autonomous vehicles, and could not have been improved by any reasonable safety measure. As a result, the manufacturer would not be liable if, exceptionally, the car injured a pedestrian. This outcome seems neither fair nor conducive to the efficient prevention and distribution of damages: While many consumers and especially the manufacturer benefit from the new technology, the pedestrian becomes its victim. And while the costs of the damage fall heavily on the individual pedestrian, the manufacturer could either have avoided these costs by improving the manufacturing process or offset them through insurance or higher car prices. In this particular case, under German law, the pedestrian could have a claim against the holder of the car (Sec. 7 of the German Road Traffic Act). However, in other cases where the product – such as an autonomous machine on a smart construction site – is not subject to a specific (national) liability regime, the victim would be left without compensation.

II. Truly strict product liability as future complement to defect-based product liability

In view of these limitations, the existing liability framework for defective products should be complemented by truly strict liability. Yet this second track of product liability should be restricted to products that are particularly dangerous: While truly strict liability could relieve the courts of the difficult task of assessing a product’s defectiveness, it also creates additional liability costs for manufacturers and could therefore discourage them from bringing new, potentially beneficial products to the market.

Whether truly strict liability is justified will depend on the circumstances of the case. For example, in the cases of autonomous cars and of autonomous machines used on smart construction sites, the advantages of truly strict liability seem to outweigh the disadvantages: The product risks are significant, with a notable disparity between the losses experienced by individual victims (potentially high damages) and the benefits derived by manufacturers and the majority of consumers (useful products available for sale and consumption). However, strict liability may not be justified in other cases, where these conditions are not met. For example, a smart home greenhouse may occasionally make mistakes and damage some plants without being considered defective. Extending liability here seems less appropriate and could unduly hinder product diversity and innovation. The – comparatively small – damage suffered by the victims is simply part of their general risk of life. As a result, truly strict liability should be limited to particularly dangerous products. This differentiated concept aligns with the “risk-based approach” that is likely to underpin the future AI Act. Regarding liability for AI systems, risk-based solutions have been proposed, for example, by the European Parliament in a Recommendation of 2020 (“high-risk AI-systems”) and by Philipp Hacker (“Illegitimate-harm, high-risk (and prohibited) AI models”).

Of course, such a differentiated approach raises the question of how to distinguish between products that are particularly dangerous and those that pose fewer risks. One possibility is to draw up a list of high-risk products, as favoured by the AI Act. However, such a list may not be able to keep up with the rapid development of new technologies, which could lead to inadequate and inconsistent results for strict liability. It may be better to give the courts some discretion to determine whether a product is particularly dangerous or not. Strict product liability should take the form of a general clause, possibly accompanied by guidelines for judges. General clauses of truly strict liability offer flexibility. Moreover, it is arguably easier for courts to decide whether a product is particularly dangerous than to assess whether such a product is defective. Under a general clause of truly strict liability, judges are not required to ascertain which safety measures would have been possible and reasonable. Instead, they need only consider the residual risk of the product, which is primarily determined by its intended use.

General clauses have already been recommended outside the area of product and AI liability. For instance, Art. 5:101 of the Principles of European Tort Law includes a general clause covering all kinds of “abnormally dangerous activities”. Admittedly, general clauses also pose some challenges, particularly in terms of legal certainty and predictability. However, in the context of product liability, their advantages of flexibility and adaptability seem to outweigh these downsides. The defect-based liability regime already forces manufacturers to carefully assess the risks of their products in order to predict whether courts might find them defective. The additional uncertainty created by the proposed regime seems to be manageable.

Truly strict product liability would not need to be limited to AI-based products: Other products, for example in the medical field, may also be particularly dangerous. However, AI systems possess a unique characteristic that makes strict liability particularly appropriate: (technical) autonomy. ML allows AI systems to develop characteristics independently. As illustrated above, defectiveness usually implies that the manufacturer could have controlled the risk posed by the product. Autonomy, however, involves a loss of control over that risk: it increases utility but also introduces risks that manufacturers cannot fully control. Thus, in the context of AI, one may often encounter situations where products are not defective yet particularly dangerous – remember the autonomous vehicle that avoids many accidents but can, in exceptional cases, also cause serious damage.

III. Supplementary liability of commercial users as future way of dealing with enforcement risks

Product liability concerns (component) manufacturers and other stakeholders in the value chain (Art. 7 of the PLD Proposal). Unlike the Proposal for a Directive on AI liability, however, the PLD Proposal does not contain explicit provisions regarding the liability of users. The Proposal for a Directive on AI liability only addresses fault-based civil law claims for damages and does not create a new form of liability for manufacturers or users. However, in its 2020 Recommendation, the European Parliament suggested imposing strict liability for high-risk AI systems not only on the so-called “backend operator” – who often overlaps with the manufacturer – but also on the “frontend operator” – who frequently corresponds to the commercial user.

In principle, strict liability for particularly dangerous products should lie with the manufacturer and not with the (commercial) user. Firstly, as Gerhard Wagner pointed out, “it is necessary to target the party that still exercises some control over the digital system”. As seen above, ML leads to a loss of control for manufacturers. However, through the choice of algorithms and training data, they typically still hold more influence than users. The extent of user control varies according to the degree of autonomy of the product in question. Ideally, users should only need to activate the AI system and trust it to make sound decisions. Secondly, manufacturers can offset liability costs through insurance or by raising prices for users. Thus, users already pay their share of liability costs indirectly. As a result, in most cases, the manufacturer is the cheapest cost avoider and the cheapest insurer. Users who, in some cases, exercise greater control – for example, by providing training data to modify the product – may be considered manufacturers under the PLD (cf. Art. 7(4) of the PLD Proposal).

However, from the perspective of the victim, limiting (strict) liability to manufacturers may lead to difficulties of enforcement: A pedestrian may pass a smart construction site and be injured by an autonomous machine. It should be easy for the victim to identify and sue the machine’s user. The machine’s manufacturer, however, may be more difficult to identify and could be situated in a different country. The 1985 PLD addresses these issues by providing for “supplementary liability” – a concept explored in particular by Helmut Koziol – of importers and distributors. The PLD Proposal extends this idea to other operators, especially online platforms (Art. 7(6) of the PLD Proposal). Such supplementary liability generally seems appropriate for all commercial users. Within the limits of this article, the model I am proposing – which differs from the aforementioned rules – can only be briefly outlined: In general, commercial users should be required to take all reasonable measures to ensure that claims against their products’ manufacturers can be enforced. If they fail to comply with this duty of care, they should be liable for the manufacturers’ obligations towards the victim, similar to a surety. Users who pay for damage attributable to the manufacturer could then seek recourse against the manufacturer.

Conclusion

The incorporation of truly strict liability for manufacturers and supplementary liability for commercial users into the defect-based product liability system would create a fair and efficient liability framework. This framework would be tight, yet still leave room for innovation. By adding truly strict liability as a second track to the defect-based system, damage costs would be allocated to manufacturers as the cheapest cost avoiders and the cheapest insurers. The advantages and disadvantages of dangerous products would be distributed more evenly. Overdeterrence would be avoided by restricting strict liability to particularly dangerous products. A general clause could provide the flexibility and adaptability needed to deal with rapidly developing technologies. Furthermore, the risk that victims will be unable to enforce their claims against manufacturers should be addressed by introducing supplementary liability of the commercial users of products. Victims with a claim under product liability law would then be able to seek compensation not only from the manufacturer, but also from the commercial user, who is usually easier to identify and to sue.

Published under licence CC BY-NC-ND. 

This Blogpost was written by Ann-Kristin Mayrhofer.

Ann-Kristin is currently working as a postdoc researcher at the Chair of Civil Law, Civil Procedure, European Private Law and Procedure of Prof. Dr. Beate Gsell at Ludwig-Maximilians-Universität München. She completed her legal education in Munich and Paris. In October 2023, Ann-Kristin published her dissertation on Non-Contractual Liability for Different Autonomous Systems.
