Fostering Judicial Transparency Through Algorithmic Transparency: The US Challenges

The use of artificially intelligent algorithms in judicial systems is expected to profoundly impact judicial transparency in the United States and beyond. While such algorithms offer a way to counteract judges’ personal biases, they can also serve as a “black box” obscuring judicial processes. Algorithms used to predict recidivism, determine sentencing, and even help decide legal cases take in information and output results, but their reasoning is often opaque: reflecting biases in the data used to develop them, algorithms have been shown to base decisions on factors unknown to judges or to the parties to cases. A number of proposals have been put forth to address this, most focusing on making algorithms, and thereby the judiciary, more transparent. The importance of doing so, along with algorithms’ prevalence in judicial systems, continues to increase.

An article by Teresa Schuster

Artificial intelligence (AI) is transforming judicial systems worldwide. Currently used to streamline court proceedings, estimate defendants’ likelihood of recidivism, and determine sentencing, artificially intelligent algorithms promise increased efficiency and objectivity. But these benefits are accompanied by negative consequences: AI is particularly criticized for perpetuating bias in judicial systems and decreasing their accountability and transparency. Such effects are especially acute in the United States, where the use of algorithms like COMPAS and Public Safety Assessment (PSA) has been challenged on legal grounds and proposals to increase algorithmic and judicial transparency are vigorously deliberated.

Initially lauded for its capability to counteract judges’ personal biases, AI often reinforces them instead. Reflecting biases in the data used to develop and train them, algorithms have been shown to unfairly disadvantage certain defendants. Yet because they often function as “black boxes” with obscure decision-making processes, the extent of this disadvantage is unclear. Have these algorithms reduced bias or merely obfuscated its prevalence? Many argue the latter, pointing to controversial instances of algorithms’ use.

COMPAS is perhaps the best-known of these algorithms. A popular risk assessment tool in the US, it is at the center of many debates on algorithmic justice and judicial transparency. Several US states and counties use it, a decision increasingly criticized by journalists, activists, and academics alike. After analyzing over 10,000 criminal cases in Florida’s Broward County, journalists for ProPublica found the algorithm often predicted recidivism inaccurately (in some situations its success rate was only 20%). And in 2016, criminal defendant Eric Loomis challenged Wisconsin’s use of COMPAS in his sentencing, arguing that it violated his due process rights under the US Constitution. While Loomis’ legal challenge was ultimately unsuccessful, it rekindled debate on algorithms’ place in judicial systems and paved the way for future court cases and academic work.
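To give a concrete sense of the kind of audit ProPublica performed, the sketch below computes per-group error rates for a risk tool’s predictions. It is a minimal illustration, not ProPublica’s actual methodology or data: the field names and sample records are invented for this post, and a real audit would also account for criminal history and other covariates.

```python
# Minimal, illustrative sketch of a disparate-error-rate audit.
# All field names and sample records are hypothetical.

from collections import defaultdict

def error_rates_by_group(records):
    """Return {group: (false_positive_rate, false_negative_rate)}."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["reoffended"]:
            c["pos"] += 1
            if not r["flagged_high_risk"]:
                c["fn"] += 1  # labelled low risk but did reoffend
        else:
            c["neg"] += 1
            if r["flagged_high_risk"]:
                c["fp"] += 1  # labelled high risk but did not reoffend
    return {
        g: (c["fp"] / c["neg"] if c["neg"] else 0.0,
            c["fn"] / c["pos"] if c["pos"] else 0.0)
        for g, c in counts.items()
    }

# Hypothetical records, for illustration only.
sample = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": True},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]
print(error_rates_by_group(sample))
```

Comparisons of this kind, which look at how error rates differ across demographic groups rather than at overall accuracy alone, were central to ProPublica’s findings about COMPAS.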

More recently, the literature has focused on increasing algorithmic and consequently judicial transparency, with some scholars calling on judges to demand explainable AI. Although transparency itself does not rectify algorithmic injustices, it enables parties to identify them. And algorithmic transparency, if strengthened, could enhance court decisions’ transparency, improve their consistency, and possibly lessen the impact of human judges’ bias.

Several researchers have recommended frameworks for increasing transparency in judicial systems that incorporate AI. Their proposals largely fall into two general categories: those addressing how algorithms should be chosen and designed, and those addressing how their decisions should be interpreted and explained. Under the first approach, algorithms like COMPAS and PSA would be analyzed for structural flaws in their code or training data and corrected accordingly. The second would respond to algorithms’ inevitable bias by adjusting how we view and explain their predictions. Exemplifying the former attitude, Carissa Véliz writes that algorithms should be tested before use, much as medical treatments are, in light of their profound effects on people’s lives; the Wisconsin court’s decision in Loomis, which requires that judges be presented with a statement on the algorithm’s limitations, is an instance of the latter.

Others’ ideas are less conventional. Allowing case parties to review, challenge, and even replace algorithms prior to trial could reduce bias and foster more accurate judicial decision-making, argue Saul Levmore and Frank Fagan. Currently, some US states, such as California, that have rejected algorithms they consider biased rely instead on cash bail systems, reinforcing other inequalities. Under Levmore and Fagan’s proposal, if a party can show that an alternate algorithm would capture the circumstances of a case better than the existing one, the alternate would replace it. Although this carries its own set of challenges (how to ensure algorithms are not overly favorable to particular defendants, for one), it facilitates transparency in algorithmic design. It would also align well with other countries’ initiatives to familiarize people with algorithms and expand their agency in determining how algorithms are used.

Although helping people understand AI may be appealing in theory, it is much more difficult in practice. The algorithms some US courts use are simple, but interpreting them, or programming them to explain their own decisions, requires technical expertise beyond most defendants’, lawyers’, and judges’ abilities. Simplifying these explanations risks overlooking the structural issues that can make a world of difference in particular cases, as in Loomis. And as AI technology advances, its judgments may become nearly impossible to explain. Explaining AI algorithms can also compromise their security and their developers’ intellectual property rights, which is one reason US defendants and judges using the COMPAS algorithm do not currently have access to its system design.

This is a point Joshua New and Daniel Castro expound in a 2017 report for the Center for Data Innovation, a US think tank. In their view, regulating AI to further transparency can easily backfire, disincentivizing innovation and hindering progress. They conclude that US policymakers and jurists are better served by ensuring AI’s accountability. New and Castro are not alone: many others have explored the feasibility of holding flawed algorithms accountable and establishing vicarious liability for AI.

Accountability and transparency are closely connected and equally important for algorithms used in legal systems. But neither can be fully achieved, and AI’s place in US courts appears limited. Decisions informed by AI entail normative judgments few would comfortably delegate to algorithms: in Kentucky, the PSA algorithm’s implementation failed to noticeably improve judges’ decision-making, while judges in Chicago routinely ignored its recommendations. US citizens also tend to mistrust these algorithms; in one study, respondents ranked them 40% less procedurally fair than judges. Although judges, like algorithms, might not reveal their motives and implicit beliefs, they provide a sense of responsibility and accountability that algorithms do not.

Those seeking to further integrate AI into US judicial systems must adequately counter this perception by instituting clear protocols governing AI’s use and addressing its misuse. Transparency about algorithms’ design, benefits, and pitfalls is key here. While algorithms can perpetuate injustice in the US legal system, they also offer immense potential to prevent it; neither aspect should be overlooked. Reevaluating how we design and interpret data from algorithms used in courts can go a long way towards ensuring that AI’s positive features overshadow its negative ones. Though transparency cannot replace other forms of algorithmic accountability, maximizing algorithmic and judicial transparency is crucial to building public trust in AI as the US legal system and its algorithms continue to evolve.

Published under licence CC BY-NC-ND. 

This Blogpost was written by:

Teresa Schuster

Teresa Schuster is a political science student and global learning research fellow at Florida International University. Her research focuses on regulatory reform, artificial intelligence, and technology policy; she is particularly interested in how technology influences judicial and governmental transparency.
