Robotics and Civil Liability in the EU

The EU has shown much interest in regulating various aspects of robotics and Artificial Intelligence (AI). One of those aspects concerns the subject matter that I am interested in: civil liability for the use and handling of robots. This piece gives an overview of pressing regulatory challenges.

An article by Susana Navas Navarro

What is a robot?

First of all, it should be determined what a “robot” actually is. To be considered a “robot” by the EU institutions, an entity should meet the following conditions:

  • acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), as well as the processing and analysis of that data;
  • capacity to learn from experience and also through interaction with other robots;
  • a minimal physical medium to distinguish it from a “virtual” robot;
  • adaptation of its behaviour and actions to the environment;
  • absence of biological life.

This leads to three basic categories of “smart robots”:

  • cyber-physical systems;
  • autonomous systems;
  • smart autonomous robots.

Therefore, strictly speaking, a “robot” is a corporeal entity which may incorporate, as an essential part, an artificial intelligence system (embodied AI). This notion falls within the definition of AI suggested by the High-Level Expert Group on Artificial Intelligence (AI HLEG) (“A definition of AI: Main capabilities and disciplines”, 08.04.2019).

Some issues of civil liability

Concerning the robot as a corporeal entity, issues of civil liability arise from a twofold perspective: first, in relation to the owner/user/keeper of a robot where the robot causes damage to third parties, whatever the relationship between them; and, secondly, regarding the damage that the robot may cause to third parties due to its defects. From a legal perspective, it should be noted that in most cases the robot is considered a “movable good” that, furthermore, may be classified as a “product”. We shall address each of these perspectives separately.

Civil liability of the owner/user/keeper (“operator”) of a robot for damage caused to a third party

A robot can present different levels of autonomy. In the case of autonomous vehicles, for example, several levels have been proposed, ranging from no autonomy to an entirely autonomous vehicle. In the case of drones, the EU distinguishes between a drone piloted from the ground and a completely autonomous drone, with a grey area in between where, despite being piloted by a human, the drone has the autonomy to carry out specific actions beyond the pilot’s control. The same varying levels of autonomy can be found in assistive or therapeutic robots. A robot’s level of autonomy should therefore be taken into account when regulating civil liability for damage caused to a third party.

In my view, and in that of the Expert Group’s New Technologies Formation (NTF) (“Liability for AI and other emerging digital technologies”, 2019), if the robot has no autonomy, or its autonomy is so reduced that a human may exercise control over the machine, civil liability should be based on fault; whereas, if the robot is capable of learning from the environment and making its own decisions in such a way that the human lacks control over it, civil liability should be founded on the abnormally dangerous activity that creates a significant risk of damage to third parties (strict liability). In this respect, the rule would be akin to the civil liability rules contained in the civil codes regarding damage caused by animals. Where there are two operators (“frontend” and “backend” operators), legislators should define which operator is liable under which circumstances, as well as the other requirements that need to be regulated.

On the other hand, the idea of attributing “electronic personality” to robots was not admitted by the NTF. In my opinion, if this attribution makes any sense, it is that the robot could be the “subject” to which the behaviour that causes damage is “attributed”, whilst the “subject” held “liable” is a “human” (e.g. the “principal”). It would thus be considered a (new) case of civil liability for someone else’s conduct.

Liability for robot defects that cause damage to a third party

The fact that a “robot” qualifies as a “product” enables the application of the EU rules on product liability. As is well known, the NTF has reviewed the existing rules to see to what extent they can continue to apply in light of emerging digital technologies. In addition, the Machinery Directive and the General Product Safety Directive are currently under review.
From the various documents published so far, it appears that the producer’s strict liability will remain the main liability rule, combined, in the case of a breach of a duty of care by the producer, with a fault-based liability rule. This approach leaves open the question of how to properly combine both grounds of liability in the field of products that cause damage. In my opinion, this necessarily requires distinguishing between types of defect. As corporeal robots become increasingly sophisticated, it is not unreasonable to think that the emphasis will have to be placed above all on their design, so that the defects rendering a robot “defective” will more frequently be “design defects” rather than “manufacturing defects”. Similarly, their level of sophistication demands greater precision in the warnings, information and instructions that the producer must provide to the customer. As this information becomes more complex, defective information may, along with design defects, become a more frequent type of defect than manufacturing defects where robots and smart machines are concerned. On this basis, if the producer is to be held liable for all types of defect, as the current legal provisions state, investment in high technology by manufacturers may decline, if not stop altogether.

In seeking a balance between investment in technological research and liability towards third parties, the solution should not be to grant the producer immunity in the case of certain defects; instead, the option may be to distinguish the grounds of liability according to the type of defect. In the case of design and information defects, liability should be based on fault, with a possible reversal of the burden of proof to alleviate the victim’s evidentiary burden; whereas, in the case of a manufacturing defect, strict liability would be the most appropriate rule. Only in the case of manufacturing and information defects may the so-called “development risks defence” be put forward as a ground for exempting producers from liability. It would not apply to “design defects”, for which the “reasonable alternative design” rule would become significant. Two tests would thus be combined: the “consumer expectation test” (manufacturing and information defects) and the “reasonable alternative design test” (design defects).

On the other hand, the legal concept of “producer” should be reviewed. According to Art. 3 of Directive 85/374/EEC, the producer is liable for damage caused to third parties by a defective product. A model that places liability solely on the producer, even where the defect is not strictly a manufacturing defect and individually identifiable persons (e.g. the algorithm creator, programmer, designer, manufacturer of a component product, etc.) or a research team have been involved in the design, may disincentivise investment. If, in the case of robots and smart machines, one considers that a large proportion of defects may be due to design, the concept of “producer” should be broadened to encompass the “engineer-designer”, rather than treating the latter merely as an “operator”. However, to the extent that the software is viewed as a fundamental component part of the product, the producer of the finished product may escape liability if the defect is due to the design of that component. The designer could then be held liable directly, as “manufacturer of a component part” of the robot, for the damage caused.

Further, it is increasingly common to use open-source software to create robots (open robots) and, in these cases, anybody can make modifications, introduce innovations, add certain aspects to public protocols, etc. This subjective uncertainty affects the existence and proof of causation between the defect and the damage. Therefore, even though the idea may draw criticism, I do not think that the “market share liability” rule, applicable when the damage is attributable to two or more persons (multiple tortfeasors), should be absent from this debate.

Challenges for civil liability in the age of IoT

The Internet of Things (IoT), as well as robots and other smart machines, poses a challenge for the rules on civil liability because it highlights the need for a coherent liability system capable of responding to new scenarios. Issues concerning “factual causation” (overdetermination, pre-emption and subjective uncertainty), the “scope of liability” (in particular, foreseeability), the grounds of liability (fault-based or strict liability), and the burden of proving causation, together with the mechanisms that could alleviate that burden for the victim rather than establishing a reversal of the burden of proof as the general rule, should all be (re)considered by national legislators. Furthermore, the notion of “operator”, which brings together categories that traditionally have had different legal meanings in the civil codes (i.e. user/owner/keeper), leaves open questions regarding, among others, how the relationship between the frontend and the backend operator could affect the victim’s claim, for instance when considering contributory negligence, or vicarious liability where the principal’s auxiliary is an expert system.

On the other hand, the behaviour of an expert system or robot is unpredictable, and its algorithm is constantly being reviewed and updated. Its computational logic is therefore usually elusive and opaque, so querying a “black box” or “logging by design” will not be a panacea for the victim. It will, however, help to establish clues.

Looking into the future

The legal system should be flexible enough to adapt to fast-paced technological change, establishing “innovation” as a principle or legal rule.

In my opinion, the future regulation of expert systems in the field discussed here, namely civil liability, should proceed by areas, which is ultimately what has happened, for instance, with liability for damage caused by traffic accidents or by drones. In addition, civil codes could regulate, on the one hand, damage caused by an expert system used by a principal and, on the other hand, the civil liability of the owner/keeper/user of a robot in accordance with its level of autonomy.

Moreover, the rule of “proportional liability” regarding causation and uncertainty in an environment operated by expert systems that interact with humans must be taken into account when considering prospective regulation of civil liability for the damage caused.

Notwithstanding the NTF’s opinion, the product liability rules should undergo a thorough review. The fact that the NTF contemplates a breach of a duty of care alongside strict liability where the producer acts as “operator” poses new challenges for civil liability regulation as well as for national procedural laws.

Published under licence CC BY-NC-ND.

This Blogpost was written by

Author

  • Susana Navas Navarro

    Susana Navas Navarro is Full Professor of Private Law at the Autonomous University of Barcelona. In recent years she has devoted herself to the study of Digital Law.
