How to regulate AI: To code or not to code?

The emerging debates about LegalTech, RegTech and other current applications of technology in the legal profession increasingly raise questions about the form of communication and the language of legal norms. Until now, this communication has been considered at the human-to-human level and has concerned the transmission of the content of legal acts to a certain, designated group of recipients (natural or legal persons) who should comply with those norms. Increasingly, however, an interesting question arises that no one has asked before: what if we want to pass on certain standards of behaviour to a non-human being?

An article by Dariusz Szostek and Rafał Prabucki

Since artificial intelligence (AI) is built on algorithmic code, it seems correct to regulate AI not only through traditional law, established in natural human language, but also through its implementation into algorithms.

The modern practice of law increasingly often uses the tools of LegalTech and RegTech to support the processes of legal analysis or even the application and enforcement of law. Regulating AI solely through the text of a legal act is so complicated that it may be very difficult, if not impossible, to achieve through traditional law-making. It is possible, and even necessary, to regulate the actions associated with the creation and operation of AI, but what about further developments in AI and problems where it will be necessary to address directly not humans but AI/autonomous systems?

Therefore, it is necessary to formulate the following hypothesis: Since AI is an algorithm, then the method of its regulation should be the use of an algorithm embedding legal standards.

In the article “Is the traditional method of regulation (legislative act) sufficient to regulate AI. Or should AI be regulated not only by the text of a legislative act but also by an algorithmic code?” published in Bialystok Legal Studies (Białostockie Studia Prawnicze), this hypothesis is subjected to a preliminary analysis. The topic seems futuristic and far away in time, but perhaps we should recognise that now, when so many people are interested in it, is the most opportune moment to start the discussion.

This kind of discussion is needed. The more people who are involved in it, given the diversity of views, i.e. concerns and benefits, the better. Especially as such discussions also arise in the context of Internet governance. At the last Internet Governance Forum (IGF), young people signaled that the Internet is “outdated” in its own way. They prepared 10 thematic groups as part of their ‘Youth’ session, one of which was dedicated to AI. In their conclusions, the young people, mentored by experts, stated that further development of AI under current technical and organizational conditions is a threat to their future. Young internet users do not see relevant organizations taking due steps to control and supervise AI. This is a good time to highlight the impact of AI on the economy, especially since we are discussing the future of the Internet and building trust in it. Young people from all over the world already know this and expect us scientists to act in advance.

The questions that may arise are many, but a few are the most relevant:

  1. How can law be successfully incorporated into code?
  2. How should an algorithm work as a tool for AI regulation?
  3. Who should be the developer of the algorithm(s) regulating AI? The legislator or a private entity?

The answers to these questions, even if only general, are intended to consolidate the discussion for the future, which may accelerate certain actions and result in benefits:

  • Transparency;
  • Accountability;
  • Appropriate dynamism.

Certainly, this is also an opportunity to build a strategy. Regulatory discussions about technological innovations are also a way of opening up to new challenges in terms of one of the goals the EU wants to pursue: digital sovereignty.

The European Union’s new legislative ambitions are also a new platform for discussion, to which the current assumptions seem to fit perfectly. In the article cited above we find a passage that outlines the initial framework:

Cyberspace is an artificial creation operating through software. AI is an algorithmic code constituting an element of cyberspace, and therefore it could be regulated through the same technique, that is through the codes with the legal regulations implemented into them. Cyberspace is dynamic and undergoes continuous changes. AI is also dynamic and undergoes continuous changes, and therefore the method of regulation should be also subject to dynamism to appropriately adapt to changing social relations and take into account different spaces and legal systems.

So why not try to outline the general answers?

Turning a law into code is quite a challenge, so the most cost-effective approach at present seems to be a hybrid one. What does this mean? On the first point, it seems quite clear that the EU will legislate in a traditional form. Moreover, the current draft legislation specifically addresses persons who are involved in the creation or use of AI. However, there is nothing to hinder the emergence of code within the regulatory framework. Among the countries working intensively on the issue of law and code is New Zealand, where a think-tank dedicated to studying the translation of law into programming code indicated in 2021 that “Legislation should not be code, but law as code is the future”. Similar efforts are being made with increasing frequency in the area of contracts, where the financial element is codified in the form of a ‘smart contract’ system.
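As a toy illustration of what “law as code” can mean in practice, consider a hypothetical data-retention rule (“personal data must be erased within 30 days of consent being withdrawn”) expressed as an executable check. The rule, the 30-day threshold and the function names are invented for illustration; they are not drawn from any actual legislation:

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical norm: personal data must be erased within 30 days
# of the data subject withdrawing consent.
RETENTION_LIMIT = timedelta(days=30)

def is_compliant(consent_withdrawn: date,
                 data_erased: Optional[date],
                 today: date) -> bool:
    """Return True if the record satisfies the encoded retention rule."""
    deadline = consent_withdrawn + RETENTION_LIMIT
    if data_erased is not None:
        return data_erased <= deadline   # erased in time
    return today <= deadline             # not yet erased, still within the period

# Consent withdrawn 1 March, data erased 20 March -> compliant
print(is_compliant(date(2021, 3, 1), date(2021, 3, 20), date(2021, 4, 1)))  # True
# Consent withdrawn 1 March, still not erased by 15 April -> non-compliant
print(is_compliant(date(2021, 3, 1), None, date(2021, 4, 15)))              # False
```

The point of such a sketch is not that legislation should be replaced by scripts, but that a norm encoded this way can be checked automatically and continuously, which is precisely the hybrid model discussed above.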

Obviously, these examples raise two questions: what is the use of code for AI control and AI supervision, and what should this look like? When it comes to the latter, there are unfortunately many contentious issues. Resolving them requires collaborative conversations between AI developers, regulators and other stakeholders. Ultimately, it seems rather unlikely that the EU itself would undertake to set up an institution to develop novel regulatory tools. Nevertheless, it is necessary not to lose sight of the fact that the world is constantly changing. In the context of recent and future developments, there is one other venue where experts would like an opportunity to speak on and contribute to this issue: the Internet Governance Forum (IGF).

This year’s IGF was held in Katowice, Poland. The theme was: “Internet United”. However, looking at cyberspace as an international space, the question arises as to whether such a union is possible. At present, this seems unlikely. Unfortunately, there was no one at the summit who could directly indicate how the biggest AI players – China and the US – would approach these issues. Therefore, regulation based on an algorithmic code seems to be the most feasible solution at the moment, especially as it can operate at different levels of legislation and can be created by state bodies, government bodies and private entities alike. It also has the advantage that it can be linked to a specific territory. This is quite important for the EU and for the security of citizens and businesses.
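The idea that algorithmic regulation can operate at different levels of legislation and be linked to a specific territory can be sketched as a layered rule lookup: the most specific (e.g. national) rule applies if present, otherwise the system falls back to a more general (e.g. union-level) rule. The jurisdiction codes and the “risk score” limits below are entirely hypothetical:

```python
# Hypothetical sketch: layered, territory-scoped rules for deploying an AI system.
# Jurisdiction codes and thresholds are invented for illustration.
RULES = {
    "EU":    {"max_risk_score": 0.3},   # union-level rule
    "EU/PL": {"max_risk_score": 0.2},   # stricter national rule
}

def applicable_limit(jurisdiction: str) -> float:
    """Walk from the most specific territory up to the most general,
    returning the first (most specific) limit found."""
    parts = jurisdiction.split("/")
    for i in range(len(parts), 0, -1):
        key = "/".join(parts[:i])
        if key in RULES:
            return RULES[key]["max_risk_score"]
    raise LookupError(f"no rule registered for {jurisdiction}")

def may_deploy(jurisdiction: str, risk_score: float) -> bool:
    """True if the system's risk score is within the territory's limit."""
    return risk_score <= applicable_limit(jurisdiction)

print(may_deploy("EU/PL", 0.25))  # False: the stricter national rule (0.2) applies
print(may_deploy("EU/DE", 0.25))  # True: falls back to the EU-level rule (0.3)
```

The same mechanism would let state bodies and private entities register rules at their own level without rewriting the layers above them.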

It is worth emphasizing here that it is important not only to have a standpoint on this issue, as the EU does, but also to observe on the international stage what others want to say. The two most powerful participants in artificial intelligence governance, China and the US, may already be looking for solutions in this area, and these may come not from lawyers but from technologists.

To sum up the multitude and diversity of threads, it should be noted that:

  • The future of the next generations, who want to benefit from AI, rests on us as scientists;
  • Young people are concerned that uncontrolled AI will cause negative consequences for them and for the entire future Internet community;
  • When debating the future of the world with AI, we need to include different voices; we cannot lock ourselves into our narrow areas of interest;
  • When looking for solutions for AI, we need to take into account modern ways of regulating various social phenomena – programming code is one of the forms that must be considered when it comes to supervising and controlling AI;

And last, but not least: We need more debate on this issue within the European Union!

Published under licence CC BY-NC-ND. 

Professor of the Faculty of Law and Administration at the University of Silesia, Partner and founder of the law firm Szostek-Bar and Partners. Expert in the European Parliament's AI Observatory, European Law Institute (Vienna), expert in new technologies, co-author of the e-court concept, originator of the electronic confirmation of receipt (electronic stanza currently introduced in MS), author of the concept of the electronic administration office and changes in the code of civil procedure, parliamentary expert, lecturer, author of several dozen publications.