ARTIFICIAL INTELLIGENCE AND FUNDAMENTAL RIGHTS IN THE JUDICIAL PROCESS

Artificial intelligence has not yet been introduced into the judicial process in most countries. There are several reasons for this, not all of them justifiable, but one of the most important is that a trial cannot be considered fair if due process, i.e. the procedural rights of the parties, has not been respected. And it is not easy to reconcile these procedural rights with artificial intelligence, given that they were formulated long before AI existed.

An article by Jordi Nieva-Fenoll

Even for human courts, there is no doctrinal unity, or even a certain uniformity, in the way the right to a fair trial is set out. First formulated in a somewhat more systematic way in Amendments IV, V and VI of the US Constitution, later supplemented by Amendment XIV, Section 1, these rights have passed, with greater or lesser influence, into various catalogues of rights around the world, displaying a wide range of aspects to be considered. The result is a scattered catalogue of rights or guarantees that should be respected in proceedings.

On analysis, this extensive list can be reduced to three main categories:

  • the right to an independent judge,
  • the right to defence and
  • the right to res judicata (i.e. right to non-repetition of a properly completed process).

The first right covers the independence and impartiality of the judge with respect to the powers of the State, which includes the right to a lawful judge, but it also embraces certain aspects affecting the autonomy of the judicial function. The judge must, overall, be insulated both from external influences and from his or her own emotions, when these go so far as to prevent the formulation of a fair judgment.

As for the right to defence, it includes all the guarantees that make it possible for a person to present his or her point of view with a reasonable expectation of winning the case. This obliges the authorities to provide free access to the courts, including at least the publicity of proceedings, the fundamental rectifiability of the parties’ errors – thus excluding formalism, which is nothing but a form of authoritarianism – and judicial action without undue delays, since protection that arrives late renders the defence useless. It also includes the right to formulate allegations, to propose evidence and to contradict the opposing party’s positions (see Nieva-Fenoll (2022), pp. 123-151, https://editorial.tirant.com/es/ebook/derecho-procesal-i-introduccion-2-edicion-jordi-nieva-fenoll-9788411308076). To these are added the right to reasoned judicial decisions and to their enforcement, the right to appeal, and the rights to equality of the parties and to legal assistance. If any of these is missing, the defence ceases to be effective or meaningful.

Finally, there remains the right to res judicata, i.e. the prohibition of retrial.

Given all these rights, how can the introduction of AI in proceedings affect them?

Right to an independent judge

At the heart of a judge’s lack of independence lie his or her emotions, as influenced by the powers of the State, the powers that be, journalists or social networks themselves. But in an AI context, does it make sense to speak of a concept such as the independent judge, which at its core is based on the judge’s emotions, when a machine does not feel emotions?

The AI that has existed up to now assists the judge rather than replacing him or her, since it is the judge who makes the final judgment. In this sense, the judge can use very weak AI programmes, such as case law search engines, or more advanced programmes that even predict risk, as with COMPAS in the US or HART in the UK. In both cases, the machine simply makes a proposal.

In the first case, the judge searches for case law on a given topic and the programme selects the decisions that its algorithms classify as relevant, relegating others to second place. The programme usually works by keywords, but it also offers thematic searches that can further steer the judge’s decision. The programme’s selection may thus lead the judge to a certain decision to the exclusion of other alternatives. A minority line of case law may be pushed into the background by the algorithm, or artificially amplified in an attempt to consolidate it. And all this with something as seemingly innocent as a simple case law search.
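The mechanism described above can be illustrated with a minimal sketch. The rulings, their texts and the query are all invented, and the scoring is a deliberately naive term-frequency count, far simpler than any real legal search engine; the point is only to show how a ranking rule can quietly relegate a minority line of case law that does not share the majority’s vocabulary.

```python
from collections import Counter

# Hypothetical mini-corpus of rulings; names and texts are invented.
rulings = {
    "Ruling A (majority line)": "lease termination notice period landlord tenant",
    "Ruling B (majority line)": "lease termination notice landlord damages",
    "Ruling C (minority line)": "lease termination good faith tenant protection",
}

def score(query_terms, text):
    """Naive term-frequency relevance: count occurrences of query terms."""
    words = Counter(text.split())
    return sum(words[t] for t in query_terms)

query = ["lease", "termination", "notice"]
ranked = sorted(rulings, key=lambda r: score(query, rulings[r]), reverse=True)

# The minority ruling, which happens not to use the word "notice",
# sinks to the bottom of the results the judge sees first.
print(ranked)
```

The defect is structural, not malicious: whoever chooses the scoring function (or the keywords it privileges) shapes which precedents the judge is more likely to read.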

Let us imagine what happens if we move to stronger AI, such as the prediction of recidivism risk by COMPAS and similar tools. COMPAS draws on 137 items about an offender, attempting to characterise a person through such sensitive variables as place of origin, social environment or, to some extent, even political opinions. Be that as it may, COMPAS, like any other tool making a risk prognosis, can classify human beings in a Manichean way, and thus incline the judge to see danger in people who, on an evaluation of the facts actually being investigated or judged, are not dangerous. In other words, it is perfectly possible for an AI tool to make social aversions official. The obvious danger is that a judge, however much he or she wants to be independent, may be assisted by a very powerful tool capable of conditioning the judgment, which would mean that the judgment is not actually made by the judge but by whoever configured the tool’s algorithm. Judicial independence would thus become a mere theory.
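How a scoring instrument can encode its designer’s choices can be shown with a toy model. This is not COMPAS’s actual model, which is proprietary; the feature names, weights and defendants below are invented. It is simply a linear weighted sum, the basic shape of many actuarial instruments, in which giving weight to a sensitive proxy variable (here, the offender’s neighbourhood) shifts the score regardless of the facts of the case.

```python
# Toy actuarial risk score: a weighted sum of yes/no answers.
# All features, weights and cases are hypothetical.

def risk_score(features, weights):
    """Linear score: each answer multiplied by the designer's weight."""
    return sum(weights[k] * features[k] for k in weights)

weights = {
    "prior_offences": 2.0,
    "age_under_25": 1.0,
    "high_crime_neighbourhood": 3.0,  # sensitive proxy chosen by the designer
}

defendant_a = {"prior_offences": 0, "age_under_25": 0, "high_crime_neighbourhood": 1}
defendant_b = {"prior_offences": 1, "age_under_25": 0, "high_crime_neighbourhood": 0}

# A, with no prior offences, is rated riskier than B solely because of
# where A lives: the aversion sits in the weights, not in the facts.
print(risk_score(defendant_a, weights))  # 3.0
print(risk_score(defendant_b, weights))  # 2.0
```

The judge sees only the two final numbers; the value judgment that produced them was made earlier, by whoever set the weights.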

If that is the situation, it is not so much the judge as the IT professional who developed the algorithm who must be closely monitored. This has many disadvantages but also some advantages. The disadvantage is that a non-democratic power such as a software company, whether private or public, can become the master of our justice system: despite the directives given to it by a government, or even by the judiciary itself, it can manipulate the configuration of the tool.

But therein also lies an advantage. AI can enable better application of the law. In an ideally imagined scenario, AI would free judicial decisions from the judge’s emotions, even if only to replace them with the emotions reflected by the algorithm’s programmer, with the disadvantage that these are not as controllable as the judge’s emotions theoretically are.

But let us suppose that, thanks to democratic mechanisms established for the programmer’s appointment, he or she is ultimately neutral and therefore does not project emotions into the elaboration of the algorithms. In that case, the algorithm will simply reflect what the legal system mandates. If it nevertheless reflects some emotions, they will only be those of the legislator, not focused on the concrete case, which could prevent, in an ideal scenario, the striking contrasts currently found between different judges’ decisions on the same matter.

However, the legal system cannot be static, as was in fact imagined by various religions and not a few dictators. It has to adapt to the times, and in this adaptation the work of the judge is essential. In the mission of analysing the specific situation in which the rule is applied, the judge must observe the nuances of that situation and provide for its best application. It is true that this makes the application of the law less predictable than a mathematician or an AI programmer might accept, but it is precisely the guarantee that the law is not introduced as a foreign body into people’s lives, but regulates their coexistence in a reasonable way.

In the future (today it is completely impossible), AI might be able to capture in real time, through the analysis of social networks and search engines, what people think at any given moment. The algorithms could also incorporate the content of the legal system. That system, being the fruit of a social consensus, would condense the concept of justice we pursue in every process – although with caution, since such an application would capture social sentiment but should only apply it in a judgment where neither the facts nor the will of the legal norm are distorted. We would finally have achieved the lawful judge, or that desired end which we so often mention but are not even able to define accurately: justice.

The problem is the danger such a tool would pose if the algorithm were configured incorrectly, as well as the complexity of that configuration. Initially, it would emerge as a support tool for the judge, pointing out how the rule could apply to the factual situation, itself assessed with the help of another artificial intelligence tool evaluating the evidence presented in the process. Would this be better than what we have so far called a judge’s common sense, moral certainty, Erfahrungssätze, sound criticism or intimate conviction?

Right to defence

Much of the content of this right only makes sense if the judge is considered as a person. Most of it is aimed at remedying the judge’s human weaknesses, not so much from the perspective of his or her emotions as with the aim of capturing his or her full attention, so that the judge puts his or her best into resolving a case after listening carefully to the information provided by the parties. Such a concern is inconceivable with a machine, which is attentive by default because it cannot be absent-minded.

In an AI scenario, therefore, all claims would be admitted without formalistic obstacles. They would need to comply with the requirements established by the application, which, as we already know from many flight booking websites, can be somewhat bureaucratic. But this bureaucracy is very human and would gradually disappear. The machine would locate us in a system in which we are already fully identified and process our request without further ado, transferring it to the other party, who would also be identified and located automatically. The whole process would be so public as to be accessible online, so that anyone could verify its regularity. And it would be executed immediately, making undue delays unimaginable.

The allegations would in turn be recorded, and the evidence – fundamentally documentary evidence, which includes multimedia documents, and expert evidence – would also be automatically evaluated: the system would check whether it confirms or denies what the parties affirm, and in the case of expert evidence it would automatically test the Daubert criteria. Interrogations would likely end up disappearing from civil proceedings, given the impossibility of truly assessing whether a person is lying, even taking into account the parameters of the psychology of testimony. Interrogations made sense at a time when reality had to be proven despite a lack of documentation or plausibility. Today they are mostly dispensable, even in criminal proceedings, since, given the avalanche of technological evidence that now exists – again, documentary and expert evidence – crimes are rarely established through the credibility of interrogations.
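An automated Daubert screen of the kind imagined above could, in its crudest form, reduce the five factors from Daubert v. Merrell Dow Pharmaceuticals (1993) to booleans. The sketch below is an assumption about how such a tool might be structured, not a description of any existing system; the report fields are invented, and a real screen would obviously need far richer inputs than yes/no answers.

```python
# The five Daubert factors for admitting expert evidence, as booleans.
# Field names and the sample report are hypothetical.

DAUBERT_FACTORS = [
    "testable",            # can the technique be (and has it been) tested?
    "peer_reviewed",       # has it been published and peer reviewed?
    "known_error_rate",    # is there a known or potential error rate?
    "standards_exist",     # do standards control its operation?
    "generally_accepted",  # is it generally accepted in the relevant field?
]

def daubert_screen(expert_report):
    """Return the factors the report fails; an empty list means it passes."""
    return [f for f in DAUBERT_FACTORS if not expert_report.get(f, False)]

report = {
    "testable": True,
    "peer_reviewed": True,
    "known_error_rate": False,
    "standards_exist": True,
    "generally_accepted": True,
}
print(daubert_screen(report))  # ['known_error_rate']
```

Even this caricature makes the earlier point concrete: whoever decides how each factor is operationalised into a boolean decides which expert evidence survives.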

The adversarial exchange of pleadings would hardly enrich the debates any longer. In other words, once the allegations have been made and the evidence presented, the machine would decide. If it is already useful today for pleadings to be as objective as possible, in the future this requirement could become truly imperative. The machine would not be impressed by the rhetoric of elegant pleadings: the colder and more verifiable the data, the better.

As for enforcement in civil proceedings, an activity that is already essentially administrative and sometimes almost bureaucratic would become automatic: the machine would seek out the debtor’s assets and liquidate them to satisfy the creditor. In criminal proceedings, the activity would consist of locating the defendant so that the sentence can be served. Particularly in the case of prison admission, AI would be essential: risk prediction programmes already play a central role here and will continue to do so. The challenge is to ensure that these programmes are not ideologically biased.

Finally, reasoned judicial decisions would remain conceivable, so that citizens receive an answer to what they raised, but the reasoning would be based on the application of the legal system in the way seen when dealing with independence. It would therefore quite likely be synthetic, plain, simple and understandable to the citizen, and would contain references difficult to refute, given the relentless application of decisions conditioned by algorithms.

The algorithms of justice would in any case have to be disclosed, or justice might become even more impenetrable than it already is. The only way to question a ruling would be to challenge the functioning of the algorithm, explaining why it has not decided a particular case correctly. There would be no other option in a context of AI dominance over judicial decisions. Appeals would continue to exist, but reconfigured to focus on possible errors of the algorithm. This might even lead to the disappearance of supreme courts: once the law is precisely established, with all its legislative background and the legislator’s intentions and objectives well defined, a supreme court would have absolutely nothing to add. The supreme court would thus return to its origins, so revealing in the English process: the House of Lords, of which the Cour de Cassation, always auprès du corps législatif, was only an imitation. It would definitively be reintegrated into the legislative power.

Right to res judicata?

Res judicata is nothing more than a prohibition on the repetition of processes, which has served us for millennia to prevent the legal uncertainty that arises when proceedings dealing totally or partially with the same object are repeated over and over again. Identifying such cases has not usually been easy, despite frequent attempts by legal doctrine to simplify the analysis – attempts which not infrequently made it even more complicated.

In a process entirely governed by AI, however, this problem disappears or is reduced to a minimum. The basis of AI is gigantic, cross-referenced databases, which will make it difficult for the machine not to detect automatically any definitive rulings that are incompatible with the parties’ current requests. In other words, duplications will be much easier to spot.
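The cross-referencing just described can be sketched minimally. The matching rule below (an exact match on the normalised parties and the object in dispute) is a deliberate oversimplification of the classic triple identity of res judicata, and all the case data are invented; real identification of the "same object" is precisely the hard doctrinal problem the text mentions.

```python
# Minimal sketch: flag a new claim as res judicata when a final ruling
# already covers the same parties and the same object in dispute.
# Cases, parties and objects are hypothetical.

def normalise(claim):
    """Canonical key: parties as an order-free set, object lowercased."""
    return (frozenset(p.lower() for p in claim["parties"]),
            claim["object"].lower())

final_rulings = [
    {"parties": ["Alice", "Bob"], "object": "Ownership of plot 12"},
]

# Same dispute, different capitalisation and party order.
new_claim = {"parties": ["bob", "alice"], "object": "ownership of plot 12"}

index = {normalise(r) for r in final_rulings}
is_duplicate = normalise(new_claim) in index
print(is_duplicate)  # True
```

The set lookup is constant-time per claim, which is what makes the article’s prediction plausible at scale; the fragile part is the normalisation, i.e. deciding when two differently worded claims are "the same".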

Reconciling the demands of due process with the potential of artificial intelligence is not easy. It is urgent that jurists recognise AI’s presence in judicial processes and legislate for its normal and appropriate use, but not at the price of sacrificing any fundamental right.

Published under licence CC BY-NC-ND. 

Written by:

Jordi Nieva-Fenoll is Full Professor of Procedural Law (Chair) at the University of Barcelona.