Attributing Legal Consequences to and from AI Systems

With AI systems permeating more and more domains of (traditionally) human activity, many increasingly commonplace situations raise important legal questions. For instance: an automated vehicle that A is riding in crashes into B. X buys shares from Y under a contract concluded using machine learning software. T tries to patent something generated by an AI system she programmed. F asks a chatbot to “write a scathing critique of G without any regard for the truth” and posts the output, unedited, on social media. While these examples implicate several distinct legal areas such as torts, contracts, intellectual property, and data protection, they are underpinned by one broader question: how, and to whom, should the law attribute the actions of an AI system and their potentially harmful consequences?

An article by Jerrold Soh

In an article forthcoming in Legal Studies, I approach the attribution question through the lens of attribution theory, a field of psychology which studies how we attribute causes and implications to other agents’ behaviours. The theory is animated by a central tension between two complementary perspectives. The first, ‘dispositionist’ view attributes behaviour primarily to the actor’s internal disposition: her preferences, wants, and moral traits. The second, ‘situationist’ view attributes behaviour primarily to the actor’s external circumstances, that is, her situation. If someone fails a test, we may ascribe this dispositionally to their lack of intelligence, diligence, or competence. Or we may point situationally to the difficulty of the test itself, to deficiencies in what the test examines, or to adversities in the test-taker’s living environment.

There is as yet no consensus on whether disposition or situation plays the larger role in determining human behaviour. But psychologists generally agree that we tend to over-attribute others’ behaviour to disposition while overlooking situation. Or, in Hanson’s words, we tend to “see the actors and miss the stage”. Once something appears to have a disposition, we are quite ready to infer that it actually does, even when it is an entirely inanimate object. To see how easily this happens, find your nearest volleyball, draw a face on it, and give it a name.

How the Law Dispositionises AI Systems

This article details how legal attributions are likewise often premised on dispositional constructs such as intention, control, and the actor’s ‘will’. In this way, attribution theory provides a powerful lens for understanding contemporary legal debates around AI systems. The article illustrates this with reference to four controversies in AI and law.

Firstly, in attributing liability for automated vehicle accidents, it is often argued that no human can be faulted when ‘the AI is driving’, because it does so autonomously, independently of any human control. On this view, the law must adapt to hold someone other than the ‘driver’ liable. Notice that this framing already assumes that the act of driving is properly attributed to the AI system rather than to its developers or users. But the autonomy of self-driving vehicles must be seriously scrutinised: the Society of Automotive Engineers has, since 2018, maintained that ‘autonomous’ is not the appropriate adjective for such vehicles. Moreover, as I have argued elsewhere, it is also inaccurate to claim that no human has legal control over an automated vehicle.

Secondly, whether AI systems should have legal personality is shaped by whether we view them dispositionally or situationally. Consider the European Parliament’s 2017 proposal of a “limited electronic personality” for AI systems “so that at least the most sophisticated autonomous robots could be … responsible for making good any damage they may cause”. This proposal adopted a strongly dispositionist framing of AI that was, according to the trenchant expert critique published shortly after, based less on fact than on science fiction. The European Parliament’s 2020 resolution would later hold instead that AI personality was misguided because “all physical or virtual activities… driven by AI systems … are nearly always the result of someone building, deploying, or interfering with the systems”. This is a situational frame.

Thirdly, case law on when an algorithm’s developers may be liable for defamatory content the algorithm generates has been broadly split along dispositional versus situational lines. Courts holding that developers are not liable typically emphasise how the material “has all been done by the web-crawling ‘robots’” (see Metropolitan International Schools Ltd v Designtechnica Corp & Others [2011] 1 WLR 1743 at [50]–[53]). Those holding that developers could be liable stress that the algorithm was merely operating in the way its developers intentionally designed and developed it to.

Finally, the multinational litigation launched by the Artificial Inventor Project seeking to recognise an AI system named DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) as a patent inventor may partly be understood as a contest between dispositional and situational frames of AI. The petitioners, and the courts which agreed with them, tended to emphasise DABUS’s autonomous capacities. DABUS’s creator goes so far as to claim, in an ostensibly scientific paper, that DABUS “perceives like a person, thinks like a person, and subjectively feels like a person”. The courts which disagreed, however, were generally unconvinced of DABUS’s sentience and preferred to focus on the otherwise straightforward questions of patent statutory interpretation before them (see generally Part 3(e) of the full article).

Why the Law Should Properly Situate AI Systems

Legal dispositionism is particularly difficult to avoid with AI systems which, by Russell and Norvig’s well-known definition, are built precisely to think and/or act as if they were human. AI systems are thus naturally perceived as autonomous agents with inherent qualities, wants, and capacities. Such ‘AI autonomy’ is often raised as the crux of the legal issue: if an AI system acts independently of its developers or operators, it hardly seems fair to attribute the system’s actions to the latter. But if we view AI systems less as dispositional actors than as situational characters whose actions are determined by their training data, programming, and deployment environment, this problem largely falls away.

The key question, therefore, is whether contemporary AI systems are more accurately viewed through a dispositional or a situational lens. Reviewing the terminology and technology surrounding today’s AI systems, the article argues strongly for the latter. It is true that today’s AI systems, especially large language models like GPT and LLaMA (Large Language Model Meta AI), are remarkably performant and human-like in their outputs. But they are fundamentally mathematical systems. However sophisticated the mathematics, and however many billions of parameters and terabytes of data go into training, an AI system’s actions are dictated by its situation: its training data, source code, and deployment environment. These are in turn determined by the human actors who develop, maintain, and operate the system. In this way, the article explains and supports the emerging consensus that AI regulation must consciously address the larger ecosystem of providers, operators, and users surrounding AI systems. Failing to regulate these stakeholders would be missing the forest for a few imaginary trees.

Published under licence CC BY-NC-ND. 

This Blogpost was written by Jerrold Soh.

Jerrold Soh is an Assistant Professor at the Yong Pung How School of Law, Singapore Management University and the Deputy Director of its Centre for Computational Law. His research revolves around how emerging technologies, especially artificial intelligence, should be regulated by and also used in the legal system. He has published both doctrinal work on AI and automated vehicle liability as well as scientific work developing machine learning and network models for legal use cases. He holds degrees in Law and Economics from the National University of Singapore and an LLM from Harvard Law School.
