What do we talk about when we talk about AI regulation?

Once upon a time, artificial intelligence (AI) was something of a niche subject in the law. There were, of course, some lawyers and legal scholars dealing with AI-related matters, but their work was either highly theoretical or something of a specialty. This is no longer the case. As the use of AI becomes widespread, public and private lawyers are forced to engage with a wide range of questions about these technologies and their legal implications. The challenges posed by AI have, in turn, prompted an explosion in the number of academic papers, monographs, and other sources on AI regulation. However, these works approach AI from a variety of perspectives, using different methodologies to pursue a broad range of goals. This blogpost examines what, if anything, these diverse approaches have in common.

At first glance, the answer to this question—what is AI regulation about?—is straightforward: AI regulation refers to the regulation of AI and related technologies. This definition is broad enough to encompass various kinds of work, ranging from those dealing with AI-specific laws, such as the draft EU AI Act, to those focusing on the implications of AI technologies for traditional branches of the law such as contracts, the laws of armed conflict, or taxation. At the same time, it excludes the large body of work on regulation through AI, which is often referred to as algorithmic regulation. Therefore, one could simply define “AI regulation” in terms of the laws dealing directly or indirectly with AI.

The lack of cohesiveness in AI regulation

Such a definition describes well the kinds of work usually called “AI regulation”. However, it offers no normative guidance about how one should approach the subject. In established branches of law, such guidance can take various forms, such as the stipulation of the values prized within a particular branch: works of law and economics, for example, tend to put a premium on identifying efficiencies and inefficiencies in legal arrangements. Normative guidance can also come from the definition of which sources are studied as forms of regulation: all branches of law devote considerable energy to the study of State-made law, but disciplines such as contract law or (more recently) international law also take into account the roles of other actors in the production of regulation. Finally, each branch of the law is interested in solving specific research problems, such as how to frame new situations or resolve apparent normative conflicts. Through these three mechanisms, well-defined branches of the law allow current and future practitioners to evaluate what counts as good work within the discipline.

Despite its popularity—or maybe because of it—AI regulation (or, it has been argued, technology law in general) is not yet seen as an autonomous branch of law. It is, of course, a popular topic among early career researchers, and universities increasingly hire specialists in AI regulation at all levels. Furthermore, institutions such as RAILS provide valuable spaces for the exchange of views and knowledge between people interested in topics related to AI and law. But these spaces and opportunities tend to have an open-ended scope, usually covering all sorts of topics related to AI. Such an approach accounts for the diversity of contexts in which AI is used in society. But it also means that prospective AI regulation scholars, such as myself, seldom share a common background. For example, data protection scholars focusing on AI will have different skills, reading canons, bodies of law, and priorities than scholars versed in labour law or intellectual property. And the differences become even more salient when we move across jurisdictions. So, one might wonder whether these scholars share anything beyond a general concern with AI technologies.

As time goes by, some factors may mitigate the lack of normative guidance outlined above. The diffusion of AI technologies is likely to give scholars more experience with AI technologies and their societal impacts, after an initial period of disruption. Likewise, the fact that early career scholars are now explicitly trained as AI regulation scholars might lead to the emergence of well-defined schools of thought on the subject, or at least within specific niches of AI regulation. Yet, it remains to be seen whether such consolidation will, in practice, turn AI regulation into an established field of law.

But maybe AI regulation should not be a separate branch of law at all. After all, a research area is built largely upon shared commitments of the kind described above. But, as AI becomes more widespread in society, research output on AI is likely to cover more—not fewer—issues and to rely on more approaches. Building a discipline out of this would therefore require the arbitrary exclusion of some of the work currently covered by the label “AI regulation”. To make things worse, such an exclusion might eliminate techniques and approaches that would help AI regulation scholars face future challenges.

The menu of AI regulation

AI regulation scholars are not forced to choose between “anything goes” and a focus so narrow that it ignores important problems and tools. Indeed, some scholars in technology law—a broader field plagued by similar identity concerns—have suggested alternative ways to build a coherent field of study. For example, Michael Guihot proposed a definition of technology law built around five dimensions:

  1. the technology it looks at;
  2. the applications of that technology that it deems relevant;
  3. the threats, risks, or benefits of that technology;
  4. the critical lens used to analyse that technology; and
  5. the regulatory approach used to resolve issues.

Since technology law is not a unified discipline, it admits various responses to each of these criteria. But the label “technology law” requires the presence of all of these elements. Not only that: within each dimension, some responses are considered acceptable and others are not, according to the practices of scholars in the field. For example, current scholarship on technology law dedicates its efforts to AI, genetic engineering, and other present and near-future concerns, while mostly consigning issues such as teleportation and outer space travel to science fiction. By identifying what is acceptable in each dimension, we can build a “menu” of what counts as an acceptable contribution to technology law scholarship at a given moment.

Is it possible to create such a menu for AI regulation? Given the sheer volume of AI regulation works published in the last few years, some have tried to approach this issue through quantitative methods. My paper, instead, follows a qualitative approach: I started from high-profile works on AI regulation and used them to identify other sources to read, until reaching conceptual saturation (that is, the point at which additional readings no longer brought new concepts to the table). From that reading, I identified some trends along the five dimensions highlighted by Guihot.

For the most part, these trends reflect a broadening of what is acceptable in each dimension. On the technology dimension, the precise definition of “AI” remains a heated problem, on which consensus is unlikely to emerge even after AI-specific legal instruments come into force. The target applications are multiplying, as AI techniques are increasingly used in new domains. Likewise, new critical lenses have appeared: whereas normative and applied ethics were salient in earlier work on AI regulation, the last few years have seen a proliferation of works drawing from critical theory, science and technology studies, and other frames that were less present before 2018 or so. Last but not least, AI regulation scholarship attends to a very broad range of regulatory approaches, as various jurisdictions develop their own regulatory strategies and private and international actors also set norms in this field.

Still, the last few years have seen some convergence on methodological issues. In spite of the AI demarcation problem, current regulatory scholarship has focused on machine learning and related technologies, with logic-based and other approaches playing a secondary role. There has been a shift towards the AI system as the adequate target for regulation, which is reflected in legislative proposals around the world. And, increasingly, AI systems are framed through a sociotechnical lens, which takes into account not just the legal impact of the technical properties of AI systems, but also the social context in which they are developed and used. Accordingly, AI regulation is conceived not as the regulation of technical objects, but of the broader societal changes prompted by AI.

By mapping convergences and divergences in current scholarship, my goal was to provide an initial list of the elements on the “menu” of AI regulation. Such a map does not answer the “should” question presented above: should AI regulation become a discipline? Yet, the divergences mapped above reinforce the impression that shoehorning AI regulation into a well-defined discipline would entail losing important parts of what is currently covered by the term. If that is the case, a menu approach would still allow newcomers to the field to know what is expected of them, while also supporting a more general evaluation of what counts as good AI regulation scholarship. In doing so, it would allow scholars to take seriously the diversity and complexity of AI as a subject matter of law.

The author would like to thank Renata Vaz Shimbo, David Hadwick, Anca Radu, and the RAILS blog editorial team for their feedback.

Published under licence CC BY-NC-ND.

This blogpost was written by Marco Almada.

Marco Almada is a doctoral researcher at the European University Institute in Florence, Italy. Marco has a Master in Comparative, European and International Laws from the European University Institute, a Bachelor of Laws from the University of São Paulo, and Bachelor's and Master's degrees in computing from the University of Campinas. His research deals with the role of technical knowledge in legal reasoning, with a specific interest in how AI regulation models the technical properties of AI technologies.
