DEMYSTIFYING THE EU AI LIABILITY DIRECTIVE

Artificial Intelligence (“AI”) is progressing quickly and permeating ever more aspects of daily life. Along with its benefits, the rise of AI is accompanied by risks to fundamental rights, which raise questions of liability. In September 2022, the European Union (“EU”) introduced the AI Liability Directive, which lays down principles for imputing liability for harm caused by AI systems and addresses the legal uncertainties associated with this technology. The Directive’s crucial provisions concern the disclosure of evidence and a rebuttable presumption of a causal link between fault and damage. However, the Directive falls short of addressing the specific challenges involved in proving that an AI system caused harm. This article highlights the shortcomings of the AI Liability Directive and suggests improvements to better regulate AI and hold it accountable.

An article by Pritha Lahiri and Khushi Kabra

AI is the buzzword of our time. Described in layman’s terms, AI mimics human intelligence to perform tasks. It is becoming increasingly prevalent in everyday life, with the potential to replace humans in fields such as law and medicine. Yet despite its advantages, AI can violate consumer confidentiality, entrench bias, and harm individuals through unintended consequences, among other things.

In order to mitigate such risks, the EU has recently introduced a proposal for an AI Liability Directive (“AILD”), which will complement the EU AI framework by covering liability for damage caused by AI systems and is meant to operate hand in hand with the EU AI Act and the revised EU Product Liability Directive (“PLD”). The AI Act (“AIA”) was introduced by the EU Commission in 2021. While the PLD and the AILD centre on compensating damage caused by AI, the AIA prioritises safety and the prevention of harm. The three instruments are meant to function as a tripartite framework in which safety and liability are intertwined. Once adopted, the AIA would apply directly, whereas the directives must first be transposed into national law by the Member States.

Although the provisions of the AILD may well strengthen the position of aggrieved parties in court and are therefore a step in the right direction, there is still plenty of room for improvement. This article analyses the Liability Directive proposal and offers suggestions for addressing the current problems of imputing liability for AI.

The New Directive

  1. Rebuttable presumption of the causal link between fault and damage

The proposed Directive includes a rebuttable presumption of the causal link between fault and damage. Under Article 4, the AILD intends to make it easier for victims to prove that a fault caused damage by easing the burden of proof. If the victim can show that someone was at fault for not complying with an obligation relevant to the harm caused by the AI, and that there is a “reasonable likelihood” of a connection between that fault and the AI system’s performance, national courts may presume that the non-compliance caused the damage. This grants victims a “presumption of causality.” Nevertheless, the responsible party can still rebut this presumption, for instance by showing that the damage was due to another cause.
  2. Disclosure Requirement

Under Article 3, the Directive requires providers and users of high-risk AI systems (as defined in Article 6 of the EU AI Act) to disclose evidence about their systems to national courts where those systems are suspected of having caused damage. This evidence includes information about datasets, technical documentation, logs, and quality management systems. To comply with Article 50 of the AIA, providers of high-risk AI systems must keep all information related to their systems for ten years after they are placed on the market.

  3. Relation to National Laws

The regulation of the principles of the burden of proof, as well as the degree of certainty required to meet the standard of proof, is in general a matter falling within the regulatory competence of the Member States. The Directive therefore only sets out guidelines for the presumption of causality, following a minimal harmonisation approach so that claimants can take maximum advantage of national law provisions. Moreover, the AILD leaves Member States some leeway in interpreting and applying their own laws.

  4. Review clause

Article 5 of the proposed Directive contains a review clause that leaves room for future legislative reform. Under this clause, the Commission shall, five years after the Directive takes effect, submit an assessment report to the European Parliament and the European Economic and Social Committee. This review is meant to determine whether “no-fault liability rules” for claims against operators of certain AI systems, along with compulsory insurance, are needed.

Shortcomings in the New Directive

This new Directive is a welcome initiative for regulating AI, but it also suffers from several shortcomings:

  1. Option of choosing compensation over compliance

According to Article 3 of the proposal, a defendant who refuses to produce evidence containing trade secrets about an AI system will be presumed to be in breach of the relevant duty of care. A defendant may therefore decide that it is tactically more prudent simply to pay compensation in exchange for keeping that information confidential, since trade secrets are crucial to these companies’ ability to do business.

The EU AIA pushes for transparency in high-risk AI systems under Article 13. Under the AILD, however, a defendant can effectively forgo transparency by paying compensation instead. This could incentivise companies to prioritise their own interests over transparency and accountability, and compensation may not fully restore the harm caused by opaque AI systems. A better approach would have been to carry the AIA’s transparency requirement over into the AILD and to appoint independent third-party auditors to assess AI systems without requiring full disclosure of proprietary information. These auditors could report their findings to regulators and other stakeholders, providing a level of accountability without compromising the secrecy interests of AI developers. This would have encouraged ethical AI development while upholding accountability and transparency.

  2. Rules on Liability

Article 4(1)(a) of the Directive places the burden on consumers to demonstrate the fault of the defendant. However, the opacity and complexity of AI systems can make it difficult or almost impossible for consumers to claim their right to compensation for the damage caused. AI systems often involve multiple parties, making it hard to identify who is responsible for damage caused by the system. Articles 4(2) and 4(3) of the AILD are limited to cases where the provider or user of a high-risk AI system fails to comply with specific requirements. This means liability depends on the claimant’s ability to prove non-compliance, which can be difficult where an AI system is complex and its workings are not easily understood. Liability for AI systems not covered by these provisions will have to be determined on a case-by-case basis.

Furthermore, once machine learning comes into the picture, the question of liability becomes even more complex, because such systems are capable of learning from their surroundings or environment autonomously, without any human intervention.

Systems should thus be built to be explainable rather than interpretable in their functionality and features, in order to balance the interests of the user and the provider. This can increase transparency and accountability while remaining inclusive of complex, non-transparent AI systems, since even such systems may still be explainable if they are designed so that their functioning and features can be easily understood.

If an AI system’s functionality and features are clear to users, they may be better able to understand how it works and how decisions are made, which can lessen the likelihood of misunderstandings or disputes and increase confidence in the system.

  3. Ambiguity in defining the level of risk in AI systems

The proposed AILD adopts the risk-based approach of the AIA. However, the crucial delimitation criteria, especially those under Article 6 of the AIA, are not specific enough, as they rely on a notion of “high-risk” AI systems that is not clearly defined. The conditions set out in Article 6 of the AIA for an AI system to qualify as high-risk leave room for interpretation and uncertainty. Moreover, Article 7 of the AIA allows the Commission to adopt delegated acts amending Annexes II and III of the AI Act, which list the areas in which AI systems are presumed or automatically classified as high-risk. This means that the scope of high-risk AI systems can change over time without clear criteria or consultation. Article 6 of the AIA should therefore provide more specific criteria for delimiting high-risk AI systems. It is also recommended to adopt a precautionary approach and, in cases of doubt, to classify AI systems as low-risk rather than minimal-risk. This would result in only one regulatory requirement: informing users that they are interacting with an AI system and giving them the chance to terminate the interaction.

Epilogue

The proposed AILD addresses legal uncertainties surrounding AI systems and provides a comprehensive policy framework. However, it is crucial to address its shortcomings and to create a harmonious relationship between the AIA and the AILD. This would allow responsible AI technologies to develop, ensure consistent accountability and transparency rules across the EU, and inspire other jurisdictions to work towards creating a safe AI environment.

From the Indian perspective, the AILD is an interesting regulatory model. India’s National Strategy for AI outlines a broad framework for AI development and emphasises transparency, ethics, and accountability. The Strategy resembles the EU’s AIA in some respects: it stresses that AI should be developed and deployed in a transparent, ethical, and accountable manner, while recognising the need to address issues such as bias, discrimination, and privacy. However, the specific requirements and approaches of Indian and European legislation may differ, and it remains to be seen whether India will adopt similar rules or take a different path in regulating AI.

Published under licence CC BY-NC-ND.

Authors

  • Pritha Lahiri

    Pritha Lahiri is a 4th-year law student pursuing B.Com LL.B at the Institute of Law, Nirma University, Ahmedabad, India.

  • Khushi Kabra

    Khushi Kabra is a 4th-year law student pursuing B.Com LL.B at the Institute of Law, Nirma University, Ahmedabad, India.
