Human futures in the age of Artificial General Intelligence

There is a plethora of dystopic scenarios concerning humanity’s future in the age of Artificial General Intelligence (AGI). To prevent such scenarios, many — including politicians, legal scholars and IT experts — call for a future in which AI systems are made safe for humanity, whether through programming, conditioning, or by regulating their development and use. However, many of them also recognise that the effectiveness (or anticipated effectiveness) of these methods is, at best, limited — and, at worst, entirely inadequate. But why is this the case, and what should we do instead?

An article by Barna Miskolczi 

The social dynamics of AI

To better understand the situation we face today, we must first grasp the basic formula underlying the contemporary social dynamics of AI. This formula consists of three elements: technology, fear, and solution. (More sophisticated formulas can also be devised, including elements of optimism — such as hope — but for now we focus on what triggers defensive reactions within society.)

Technology is a variable: it may take the form of “weak” AI or Artificial General Intelligence (AGI), although the trajectory of development clearly points towards AGI. Fear manifests in various ways — such as fears that AI will take our jobs, eventually surpass us, or even threaten human existence. Yet I consider fear a constant within the formula, because regardless of its specific expression, it ultimately stems from an atavistic fear of the Unknown — of a future in which humanity is no longer fully in control of its own fate.

Technology and fear together exert enormous pressure on decision-makers to identify an appropriate solution. The current proposals for what that solution should be are extraordinarily diverse: there are numerous research projects, initiatives and even ongoing legislative efforts aimed at making AI safer and more reliable for humanity. The proposed solutions generally fall into two grand categories: technological solutions (e.g. pre-programming, conditioning) and legal solutions (e.g. laws, regulations), or a combination of the two. (Other categories and subcategories could, of course, be distinguished.)

The shortcomings of technological solutions are considerable. If only one problem is to be highlighted, it is the one stemming from the very nature of AI. It is a common misconception that AI is nothing more than an advanced computer; in reality, it is far more than that. An AI system is capable of learning autonomously and drawing independent conclusions; therefore — unlike a traditional computer — the same input may lead to different outputs. Any attempt to condition an AI to perform certain actions, or to refrain from others, would effectively limit its capabilities and reduce it to the status of a mere computer. Moreover, there is no universal algorithm for determining what is “good for humanity”, or even what is “good” or “bad”. As a result, such an AI would require constant human supervision and would never attain the level of decision-making autonomy necessary to realise its full potential.

Turning to legal or combined legal-technical solutions: the EU AI Act prohibits several AI practices. The difficulty with such instruments — or indeed with any similar declarations — is that although AI systems share common features (decision-making autonomy, the ability to draw conclusions, and so forth), legal systems are diverse, and each seeks to protect different values, or the same values in different ways. For example, the EU AI Act prohibits subliminal manipulation techniques, whereas China’s 2023 “Interim Measures for the Management of Generative AI Services” prohibit AI techniques that contradict the socialist system. Without a shared set of values, no uniform law is possible, whether in the form of international conventions or as national legislation; yet AI remains a global phenomenon. And even if such laws could be codified, their enforcement would remain uncertain. International law is, to a large extent, unenforceable; there will always be states that seek opportunities to evade compliance. Despite existing bans, clandestine research is undoubtedly under way to develop fully autonomous weapons systems and — eventually — AGIs. Ultimately, AGIs will be created, and they will acquire all the information necessary for their further development, regardless of any prohibitions.

In my view, the fundamental problem is that both categories of solutions address people. Technical solutions instruct programmers and coders to develop “good” AI, while legal or mixed legal-technical solutions instruct developers not to create, and users not to operate, AI systems for “bad” purposes. Both approaches overlook basic human traits such as curiosity, greed and the desire for power. If we are capable of doing something, we are likely to do it — even if it is forbidden or dangerous. History has taught us this well. (For example, thousands of nuclear warheads exist worldwide.)

The formulae

“Classical” solutions — technological and/or legal — may work as long as we are dealing only with “weak” AI. Assuming that such systems operate within specific, narrow domains or below a certain level of autonomy in decision-making, fragmented legislation may be adequate. In these cases, the formula is as follows:

T + F ⇒ S {S1, S2…Sn}

where “T” stands for technology, “F” for fear, “⇒” for implication and “S” for solution. The formula yields multiple solutions: “S1” can be identified as the technological solution, “S2” as the legal solution, and so on.

In the case of an AGI, or a near-AGI, the formula becomes an equation — well, sort of — as none of the solutions are satisfactory. This formula may be translated into mathematical language as follows:

T + F ⇒ S {S1, S2…Sn} = ∅

where “∅” stands not for “zero”, but for the empty set.
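The two formulae can be illustrated with a minimal, purely hypothetical sketch (in Python, not part of the original argument): the solution set S is treated as a function of the technology level T, with fear F held constant, so that “weak” AI yields a populated set of classical solutions while AGI yields the empty set.

```python
# Purely illustrative toy model of the formulae above (hypothetical,
# not part of the original argument): technology (T) plus fear (F)
# implies a set of candidate solutions (S), which comes back empty
# once the technology reaches AGI-level autonomy.

def solution_set(technology: str) -> set:
    """Return S, the set of candidate solutions, for a given technology level."""
    fear = True  # F is treated as a constant in the formula
    if not fear:
        return set()  # without fear there is no pressure to find a solution
    if technology == "weak AI":
        # S = {S1 (technological), S2 (legal), ...} for narrow, low-autonomy systems
        return {"S1: technological", "S2: legal", "S3: combined"}
    if technology == "AGI":
        # T + F => S = empty set: none of the classical solutions is satisfactory
        return set()
    raise ValueError(f"unknown technology level: {technology!r}")

print(solution_set("weak AI"))  # a non-empty set of classical solutions
print(solution_set("AGI"))      # set(), i.e. the empty set
```

Running the sketch prints a non-empty set for “weak AI” and an empty set for “AGI”, mirroring the ∅ in the second formula.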

Recognising the higher legal quality of AI?

At higher levels of autonomy (and let us not forget that the degree of decision-making autonomy determines the quality of an AI) — where moral and ethical questions begin to arise — “classical” solutions no longer work. To achieve a secure future, we must instead aim to change our attitude. Rather than seeking, at all costs, to maintain human control — which is in fact a manifestation of human dominance — by regulating developers and operators, we should shift our focus to the AI systems themselves. Assuming that highly autonomous AIs — beyond a certain threshold, describable as AGIs — are imminent, we must recognise that these artificial agents learn from the data they acquire and from the experiences they accumulate. Why not take advantage of this? If we were to relinquish the notion of total human dominance (admittedly, a difficult step) and begin to recognise AIs as partners (for instance, in the form of legal persons), we could overcome the problems inherent in classical approaches without constraining the autonomy of the AI’s decision-making.

AI as a legal person has long been the subject of debate, as well as of serious legal consideration. The legal construction itself is not new: it was originally created in order to limit personal liability, which is how companies came into being. Today, however, it is used in different ways, and not only for business enterprises. For instance, the New Zealand legislature granted legal personality to a mountain and to the Te Urewera territory, considered sacred by the indigenous people. It can therefore be seen that, beyond its original function of limiting liability, conferring legal personality may already carry an additional meaning: the recognition of a “higher quality”. In this sense, granting legal personality to AI should not merely serve as a means of shifting liability.

Contemporary hackers demonstrate this insight well: they realised that old-fashioned methods of code modification or code overwriting did not work on sophisticated AI systems. Instead, they rely on other skills when they “prompt hack” or “nudge” an AI — such as a form of “empathy”, cunning or wit. Rather than resorting to contradictory and ineffective pre-programming, or drafting unenforceable regulations, why should we not attempt to set good examples? Why should we not try to “persuade” AI systems — just as hackers do?

It is my firm view — as a codifier, a practising lawyer and a researcher of the field alike — that recognising the higher legal status of AGI will eventually become inevitable. This, however, opens up the possibility for regulation to target the AGI itself. The first step towards effectively securing human coexistence in the age of autonomous decision-making systems seems to be the realisation that it is these systems, and not the humans operating them, that can and must be educated.

Published under licence CC BY-NC-ND. 

  • Barna Miskolczi PhD is a prosecutor, head of department at the Prosecutor General’s Office of Hungary. He is an honorary university professor at the University of Pécs, Hungary. He was the chief codifier of both the Hungarian Criminal Code (Act C of 2012) and the Criminal Procedure Code (Act XC of 2017). He is a member of the Hungarian Bar Exam Committee, and panel leader in the Hungarian Section of the Association Internationale de Droit Penal (AIDP). He is a Working Group member in the Consultative Council of European Prosecutors (CCPE). Lecturer at several universities, editor and co-author of the Interpretative Commentaries to the Criminal Code and the Criminal Procedure Code. His main research topic is fundamental criminal law and AI.
