AI contracting: an overview of outstanding doctrinal questions

Most of the time, lawyers are concerned with the technology that exists now and what it will look like in the foreseeable future: its impact on the order of society, and the legal implications and uncertainties brought about by its use. Time and again, legal scholars realize that a law meant to be technology-neutral at the time of its enactment is in fact tailored to a particular technology that might be (and most probably will be) outdated in the future. This brings them to the point where, before going further, it is necessary to address the issues that new technologies create for legislation in a way that lets future generations remain flexible and adjust the rules easily to their needs. Given this, and the extensive use of AI today and even more so in the future, there are some outstanding legal issues that need to be clarified when it comes to autonomous AI contracting.

An article by Turkhan Ismayilzada

Artificial Intelligence (AI) is present in almost every aspect of our lives, from the medical treatment of patients to the conclusion of small consumer contracts on behalf of users. However, as with every technology at any point in time, AI has its limitations, and researchers from all over the world are working to overcome them in the hope of creating Artificial General Intelligence. Astrophysicists and theologians express their concerns about the technological singularity, a hypothetical future point at which technological advancement becomes uncontrollable and radically changes human civilization, and arguably the nature of human life as such. Whether it will happen is a matter of time and human competence, but one thing is apparent: no regulation will be able to stop technology from evolving into a human-like intelligence, and that evolution will eventually bring plenty of issues to address, especially in the legal realm.

  1. AI’s capacity to contract

 “A person is he whose words or actions are considered, either as his own, or as representing the words or actions of another man, or of any other thing to whom they are attributed, whether truly or by fiction. When they are considered as his own, then is he called a natural person: and when they are considered as representing the words and actions of another, then is he a feigned or artificial person.” (Thomas Hobbes, Leviathan)

First, the legal norms regulating the capacity of parties to contract need to be revised, as autonomous AI contracting is an inevitable part of the future. More to the point: should legal systems allocate legal personality to AI?

Some researchers suggest that AI should be given limited legal personality, as in the case of companies. The emergence of AI today is often compared to that of companies in the late 19th and early 20th centuries, when conferring legal personality on a legal fiction seemed as absurd as driving autonomous vehicles. But times have changed, and the legal system was eventually forced to recognize the legal personality of companies because of their rapidly growing role in the world economy. The same can be said of AI and its role (arguably an even greater one) in the 21st-century economy. It is therefore unclear why the European Commission’s High-Level Expert Group on AI (AI-HLEG), in its Policy and Investment Recommendations, advises governments to refrain from conferring legal personality on AI systems.

By conferring limited legal personality, the legal system would create the legislative scope necessary for AI to enjoy only the relevant rights and bear only the relevant obligations, so that it can transact autonomously on behalf of its user (or on its own behalf, when necessary). However, this approach falters, mainly because AI does not possess any property with which to be held liable. As a workaround, some scholars propose a centralized registry system for AI agents, as well as the creation of insurance funds that would act as security for AI liability.

Among the arguments against legal personality for AI, the main one is that AI systems are not cognitively sophisticated enough to be placed on an equal legal footing with humans. In my view, lawyers must move away from such anthropomorphic limitations and try to solve the problem facing us rather than pretend it is not a problem at all. And as of now, conferring limited legal personality seems to be the only rational choice.

  2. AI’s declaration of intent

It goes without saying that AI systems will soon participate in contract formation far more autonomously than they do today. Examples will include personal assistants deployed to relieve their users of small day-to-day transactions, procurement AI systems that oversee the stock and raw material needs of large companies, and so on.

Assuming that AI has some form of legal personality, for a contract to come into existence each party must freely express his/her/its will (declare an intention) to enter into the contract. This means we need to rethink the concept of contract formation in order to regulate the declaration of intent by autonomous AI. This is the issue least discussed in interdisciplinary research on AI and law.

It is important to note that such a complex topic as the declaration of intent in AI-concluded contracts is left out because the intention to contract is a process that today is attributed exclusively to human cognition (this is also the main factor distinguishing AI from companies, since there is always a human behind contracts made on behalf of companies). The main questions facing legal systems are the explainability of AI and the reasonableness criteria to apply to AI.

Closely related is the mistake doctrine, which allows a party to avoid a contract if there is a divergence between his/her inner will and its outer expression. In other words, if what a party got as a result of a contract differs from what he/she actually intended to get, legal systems (to varying extents and under varying conditions) consider the mistake operative and allow avoidance. To what extent AI can be mistaken, and which of those mistakes legal systems will treat as operative, is a big question.

  3. AI’s responsibility

“It is likely that AI may reduce human errors, but one might also argue it simply shifts the source of the error from the human to the AI system.”  (Stephen Russell)

The issue that seems to interest legal scholars, practitioners and regulatory bodies the most is responsibility for contracts concluded by AI. There are, obviously, three players in the game of AI deployment and utilization: programmers/developers, users/principals and the AI itself. Currently, legal systems have two forms of liability attributable to this Bermuda Triangle:

  • The doctrine of product liability, as it stands today, suggests that responsibility for faulty AI should lie with the programmers, as they are the ones with the most control over the algorithms of such systems.
  • The doctrine of strict liability suggests that the user will always be responsible for the acts of AI, because he/she assumes the risk that comes with its utilization.

Even though there are three players in the game, only two of them are held responsible in one form or another. Thus, an AI that possesses almost all of the human cognitive abilities needed to conclude a contract, and that acts on its own behalf without human intervention, is not liable for a faulty transaction. An obvious solution would be to hold AI responsible for its own actions, but that brings us back to the first topic of this post: conferring some form of legal personality on AI.

  4. Conclusion

As we can see, all of these topics are closely connected. Let us go backward through them to make the connection more visible.

  • To effectively hold AI responsible, we need to give AI the right to have assets.
  • To possess this right, the legal systems will need to confer legal personality on AI.
  • If AI is a legal person, it should be able to make declarations of intent to enter into a contract.
  • Considering that AI will eventually make mistakes when entering into contracts, which of those mistakes are we going to regard as operative under any given legal system?
  • If a mistake is inoperative, who is going to be responsible?

In short, it is a cat chasing its own tail, and the task before legislators, as Mark Twain once said, is to “carry a cat by the tail, and learn something that can be learned in no other way”.

Published under licence CC BY-NC-ND.

This Blogpost was written by

Author

  • Turkhan Ismayilzada

    Turkhan Ismayilzada is a Ph.D. Researcher at the Martin Luther University of Halle-Wittenberg. His research focuses on the mistakes made by AI in contract formation and the doctrinal questions they raise in various legal systems. Mr. Ismayilzada also holds an LL.M. in International Transactions and Comparative Law from the University of San Francisco School of Law.
