The fully automated court decision is an illusion – but there are reasons to think about it

Like all human decision-making, the administration of justice is not free from error. What appears to be a truism is impressively demonstrated by empirical data from recent (social-)psychological research: despite in-depth education and expert status, judges are susceptible to weaknesses in rationality. Various ways of reducing the influence of these weaknesses are conceivable. One of them is the use of new technologies. Although there are different types of rationality, the following question arises: can algorithms and “artificial intelligence” help to make judicial decisions more rational?

An article by David Nink.

Errare humanum est. Fallacies, biases, and other weaknesses in rationality can distort judicial decisions. This can lead to unequal treatment – or at least to irrelevant, non-legal aspects being incorporated into court decisions. Because, for example, the prosecutor’s pleading in criminal proceedings precedes that of the defendant or the defense attorney, and because the plaintiff in a civil lawsuit presents his views and facts first, the anchoring heuristic can affect the length of a prison sentence or the amount of compensation for immaterial damage: the judge’s decision-making process is already “anchored” on the first numerical value mentioned, or on its tendency (high/low). Studies have also shown significant regional differences in criminal sentencing practice in Germany – although a “regional tradition of sentencing” is not a legally recognized distinguishing feature. In view of Art. 3 (1) of the German constitution, courts shall judge comparable cases (whatever that means in detail) in a comparable manner, regardless of where a trial takes place. Other studies (though not always uncontested) indicate that hypoglycaemia or hunger, lack of sleep, the results of sporting events, or media reports can also influence the judicial process.

So, what requirements do legal systems impose on the process of judicial decision-making? Are “errors” or distortions perhaps consciously accepted? Judges are granted far-reaching freedom as well as factual and personal independence (cf., e.g., Art. 97 of the German constitution). However, they are subject to a duty of “internal independence” and neutrality: they must be impartial, free themselves from irrelevant influences, and decide “without distinction of person” (Sec. 38 (1) German Judiciary Act). In any case, the law does not require that judges completely “discard” their personality and empathy. At the same time, there are hardly any valid standards for measuring “wrong judgments”: a decision that is justifiable in terms of procedural and substantive law is not a wrong judgment, even if a cognitive distortion may have influenced the judicial decision-making. It is certain, however, that weaknesses in rationality can have a particularly serious impact in the judiciary, because the decisions directly affect the lives of those concerned. For example, the difference between five and seven months of imprisonment may seem small, but for the convicted person it is tremendous.

Against this background, let’s broaden the perspective. The buzzword Legal Tech shows that new digital applications and approaches have arrived in the legal system (examples are found in passenger rights, tenancy law, or compliance systems) and have already brought improvements, especially for consumers. For the judiciary, current debates and modernization processes revolve primarily around electronic file management or video technology (cf., for developments in Europe, this overview by the CEPEJ). Such developments have recently been driven by the upheavals of the Covid-19 pandemic. For the “core decision” in a legal case, support for or even substitution of the judge by software has so far seemed unthinkable, or at least a distant prospect. Admittedly, basic considerations on automation in law are not new. However, they can and should be rethought in light of new technical possibilities, such as Big Data and Deep Learning, and a better understanding of human decision-making and its weaknesses in rationality.

First of all, complete automation of court proceedings is ruled out. There will be no “robo-judge” that not only decides, but also conducts hearings and (interpersonal) procedural communication and investigates the facts. But even apart from these preliminary stages, complete automation of the “core decision” is inconceivable: in fully automated processes, individual justice cannot be ensured with sufficient certainty. Exceptional cases and unforeseen circumstances can overwhelm the machines’ binary mode of operation. Fuzzy logic, for example, makes it possible to model intermediate values and shades of grey, which can be relevant for legal decision-making and its focus on actual life; in recent years, some spectacular progress has also been made in the field of deep learning, and the same generally applies to natural language processing. The basic problem, however, remains: machines lack contextual knowledge and a basic understanding of the real world. Technical systems follow a quantitative logic, not qualitative individual evaluations. In addition, they cannot take (personal) responsibility.
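To illustrate the contrast with binary yes/no logic, here is a minimal sketch of a fuzzy membership function in Python. The graded legal concept, the thresholds, and the linear gradation are purely hypothetical and chosen for illustration only.

```python
def membership_significant_delay(minutes: float) -> float:
    """Fuzzy membership: the degree (0..1) to which a delay counts as 'significant'.

    Hypothetical thresholds for illustration only: below 10 minutes not
    significant at all, above 60 minutes fully significant, linear in between.
    """
    if minutes <= 10:
        return 0.0
    if minutes >= 60:
        return 1.0
    return (minutes - 10) / 50  # graded "shade of grey" between 0 and 1

# Classical binary logic would force a yes/no answer at some cut-off;
# fuzzy logic preserves the gradation.
for m in (5, 25, 45, 90):
    print(f"{m} min -> degree of significance {membership_significant_delay(m):.2f}")
```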

In addition to these technical barriers, in Germany it is the constitution that sets limits to the use of new technologies by vesting the judiciary (only) in judges. A machine cannot be a judge in the constitutional sense; the constitution expressly and exclusively places the final binding (judicial) decision on legal issues in the hands of certain natural persons. The judiciary cannot relinquish this personal element. A “dehumanized” as well as a dehumanizing jurisprudence is inadmissible. Judges must be able to provide justice in the individual case and to assess the social consequences of a decision. Only an entity with the “necessary social skills” (cf. Sec. 9 No. 4 German Judiciary Act) can be a judge – which requires more than purely technical-theoretical aptitude. In addition, only a human can give real effect to the right to be heard.

Does that mean that, besides electronic files, hearings via video chat, and deadline apps, no support for judicial decision-making (in the narrower sense) is possible? Well, a ban on algorithm-based decision support systems in the judiciary is not to be found in the European constitutions. However, judicial independence must be protected. There must not be any binding effect and no “takeover automatism” with regard to the algorithmic suggestions. The “automation bias” would have to be countered through training and other countermeasures (by design). In addition, the basic functionality as well as the decision criteria and their weighting must be comprehensible and explainable. If an algorithmically determined prognosis is included in a court decision, the criterion of comprehensibility extends to the basic mode of operation: the reasons (justification) must then rest on a legal and at least a broad technical review, explaining why the court came to precisely this decision. The way systems work and decide, particularly those based on deep learning, can be a black box; and even fully rule-based systems, e.g. decision trees, can be practically opaque to laypersons due to their complexity. However, the human brain is not transparent either: the sympathies and decision-making processes in a judge’s brain are likewise not precisely comprehensible for a defendant or the parties. But because of the broader (quantitative) impact compared to human decisions, there is a risk that errors in the database or discriminatory tendencies will be compounded and even reinforced in the long run, and that individuals will be reduced to a group membership or to their past. Machines decide based on logic and therefore often more consistently than humans – but also, if they have discriminatory effects, consistently discriminatorily. On top of that, there are various technically possible definitions and standards of fairness for the use of such technology. The democratically legitimized legislators must not leave the definition of these standards to private companies.
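To make the point about competing fairness definitions concrete, the following minimal Python sketch (with purely hypothetical toy data) computes two common, formally different criteria for a binary risk classifier – demographic parity and equal opportunity – and shows that satisfying one does not guarantee the other.

```python
# Hypothetical records: (group, actually_reoffended, predicted_high_risk)
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group: str) -> float:
    """Share of group members predicted high-risk (demographic parity compares these)."""
    g = [r for r in records if r[0] == group]
    return sum(r[2] for r in g) / len(g)

def true_positive_rate(group: str) -> float:
    """Share of actual reoffenders flagged high-risk (equal opportunity compares these)."""
    g = [r for r in records if r[0] == group and r[1] == 1]
    return sum(r[2] for r in g) / len(g)

for grp in ("A", "B"):
    print(grp, "positive rate:", positive_rate(grp), "TPR:", true_positive_rate(grp))

# Here both groups are flagged at the same rate (demographic parity holds),
# yet actual reoffenders in group B are flagged twice as often as in group A
# (equal opportunity is violated) - the two standards diverge.
```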

In criminal justice, biases and distortions can have particularly serious consequences. Against this background, risk assessment tools are conceivable as elements of decision-making support for decisions in connection with pre-trial custody (in Germany: Sections 112 et seq. Code of Criminal Procedure) and parole (Sections 56, 57 Criminal Code). Applications to support sentencing itself (Sec. 46 Criminal Code) are also conceivable in principle; for them, however, a basic consensus would first be required with regard to the normative requirements, in particular the purposes of punishment (deterrence; prevention). There are already (rule-based) drafts for this. A system that builds on the average of legally binding decisions in comparable cases, and also on sample cases assessed by experts, could propose a preliminary judgment to the judge. The judge would then be free to use the algorithmic suggestion or not; and possible rationality weaknesses of individual judges would have less of an impact because they average out. Systems such as COMPAS (in the US), however, are not acceptable under German law: they lack transparency, and their validity is also in question.
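As a rough illustration of the averaging idea just described, here is a minimal Python sketch. The case pool, the sentence values, and the simple unweighted mean are hypothetical assumptions for illustration, not a proposal for an actual system.

```python
from statistics import mean

# Hypothetical sentences (in months) from legally binding decisions in comparable cases.
comparable_decisions = [8, 10, 9, 12, 7]
# Hypothetical sample cases for the same case profile, assessed by experts.
expert_sample_cases = [9, 11]

def preliminary_sentence_proposal(decisions: list[int], samples: list[int]) -> float:
    """Return a non-binding suggestion; the judge remains free to deviate."""
    return mean(decisions + samples)

proposal = preliminary_sentence_proposal(comparable_decisions, expert_sample_cases)
print(f"Non-binding proposal: {proposal:.1f} months")

# Individual outliers - and individual judges' rationality weaknesses - average
# out over the pool; the final decision and its justification stay with the judge.
```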

Numerous unanswered questions remain – organisational ones, for example: Who should be responsible for the conception, maintenance, and training of the systems – the federal states (Länder), the Federal Ministry of Justice, the individual courts, independent and interdisciplinary expert commissions? On what data should the systems be trained? In addition, it is not only decisive which tasks we (technically) can and (legally) may hand over to machines, but also which path we as a society want to and should choose (ethically). All in all, the use of new technologies certainly has some potential to make court decisions more rational – but only if it is possible to instil the rule of law into the systems. This should come into focus less, or at least not primarily, for reasons of efficiency, but rather against the background of the weaknesses of rationality in human decision-making.

** This article is based on the monograph “Judiciary and Algorithms – On the Weaknesses of Human Decision-Making and the Possibilities of New Technologies in Jurisprudence” (forthcoming from Duncker & Humblot).

Published under licence CC BY-NC-ND.


Author

  • David Nink

    Dr David Nink works as a lawyer for IT and data protection law in Frankfurt and advises clients on projects involving digitalisation, outsourcing, and the development of digital businesses, as well as on software licensing and GDPR-related issues. Previously, he worked as a research fellow and coordinator of the digitisation programme area at the German Research Institute for Public Administration in Speyer, where he researched and published on public law, in particular data protection law, legal issues of digitisation, and the regulation of new technologies.
