Algorithmic Decision-Making in the Public Sector: The Challenges for Good Public Management

Fuelled by the remarkable advances in technology and digital transformation witnessed in the past five years [McKinsey (2022)], a new wave of interest and enthusiasm has been driving the growth in the use of Artificial Intelligence (AI) systems for – among other purposes – decision-making in the public sector [Wirtz et al. (2021)]. The concept of machines as “agents” [Russell & Norvig (2021), 1–60] which take decisions or support human decision-making has long been documented in the history of AI [Wilkins (1968)], but now, across governments of all regions and ideologies, the emergence of new AI Strategies [OECD.AI (2021)] is leading scholars, policy-makers and practitioners to rethink how to regulate such fast and structural changes [Reichman & Sartor (2021)]. All the while, a growing number of jurisdictions – such as China, the United States, and the European Union – have already explicitly laid down in their Strategies the means for implementing “algorithmic decision-making” (ADM) in the public sector [a.o. Molinari et al. (2021), 18–29; Ada Lovelace Institute et al. (2021); OECD (2021), 43–50]. Against this backdrop, this article summarizes the findings of the Author’s research into the governance challenges that public administrations must face when introducing ADM in their operations. Several trends, lessons and recommendations can be pinpointed from this study.

An article by Salvatore Rocco

Three intertwined strands of research are followed. First, a technical approach: to close the many gaps between law, policy and technology, it is necessary to understand what an AI system is and why it concerns decision-making in the first place. Second, a legal and “algor-ethical” [Benanti (2018)] approach. This stems from the consideration that a good use of AI in the public sector is called into question by the manifold risks that this technology – and the humans behind it – pose when interacting with the “social world” [Selbst et al. (2019)], and that it is therefore necessary to build a framework of solid principles and key practices capable of ensuring procedural and substantive justice. Third, as the core subject of this analysis, a governance approach stricto sensu. This traces the renowned issue of the “governance of AI” [Kuziemski & Misuraca (2020)] back to four sets of challenges which ADM raises in the public management chain [GAO (2021)]: 1) defining clear goals and responsibilities; 2) acquiring competency and knowledge; 3) managing and involving stakeholders; 4) managing and auditing risks.

Technical background

On the first strand of research, it has emerged that AI is best defined, for governance and policymaking purposes, as an “engineered system” [INCOSE (2019); ISO (2021)]: that is, as a socio-technological system which comprises not only artificial “Agents” [Russell & Norvig, supra], but also human “Actors” (data scientists, project managers, governments, end users, hackers) and material and immaterial “Assets” (data, hardware, environment).

Any change to any of these three building blocks can lead to great differences between AI systems. This explains the vast variety of applications going under the name of AI – from smart weapons to smart toothbrushes [Bertolini (2020)] – not all of which are even a cause for concern, at least from a legal and policy standpoint.

The way in which AI Agents are built, however, should be treated with particular caution as the use of Machine Learning (ML) increases, propelled by the surge in data availability, hardware and computational power, and technical expertise [Areiqat & Alheet (2021); OECD (2019), 19–34]. In this regard, there is often a lack of understanding about how ML works, which has negatively affected early policymaking [Lehr & Ohm (2017)]. ML is, essentially, a family of techniques and statistical methods aimed at automating the creation and refinement of AI models, i.e. of the rules which AI Agents use to process given inputs into the desired outputs. The disruptive element and common denominator of these techniques is thus that Agents built using ML can improve autonomously over time, as they do not need a human expert to manually write down the model’s rules (at least to an extent determined by the complexity of the model built and, accordingly, by the ML technique used).

Figure 1 – A simplified illustration of the differences between ML (in the form of Supervised & Unsupervised Learning) and Classic Programming
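
To make this contrast concrete, here is a minimal sketch in Python, using the scikit-learn library; the toy data, feature names and the 0.5 threshold are invented for illustration. In classic programming a human expert encodes the decision rule by hand, whereas in supervised ML the rule is inferred from labelled examples:

```python
# A minimal, illustrative sketch: classic programming vs. supervised ML.
# The toy data, feature names and the 0.5 threshold are invented for
# illustration only.
from sklearn.tree import DecisionTreeClassifier

# Classic programming: a human expert writes the decision rule by hand.
def classify_by_hand(income: float, debt: float) -> str:
    return "high risk" if debt / income > 0.5 else "low risk"

# Supervised ML: the rule is inferred from labelled examples instead.
X = [[30_000, 20_000], [80_000, 10_000], [25_000, 22_000], [60_000, 5_000]]
y = ["high risk", "low risk", "high risk", "low risk"]

model = DecisionTreeClassifier().fit(X, y)  # the model's rules are learned here
print(classify_by_hand(40_000, 30_000))     # hand-written rule
print(model.predict([[40_000, 30_000]]))    # learned rule applied to a new input
```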

In this way, ML overcomes classic programming’s impasse when the rules to be determined are either unknown, too many, or too expensive to write down manually; and in so doing, ML unlocks unprecedented economies of scale in terms of the data it can process and the insights it can give. This gave rise, among other things, to new operational domains such as Natural Language Processing (NLP), Computer Speech and Vision, but also, as anticipated above, to a new spring for Algorithmic Decision-Making (ADM) [Tencent Research Institute et al. (2021), 15–46]. Citizens’ sentiment analysis, crime risk scoring, digital welfare administration, judicial sentence prediction [Ballester (2021), 70–72; Misuraca & Van Noordt (2020), 41]: these are only some of the possibilities in ADM that ML has unlocked for the public sector (a hypothetical sketch of one such application follows below). But at what cost?
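
As an illustration only, the following is a hypothetical sketch in Python, using scikit-learn, of how an NLP-based citizens’ sentiment-analysis prototype might look; all feedback sentences and labels are invented and depict no real system:

```python
# A hypothetical sketch of NLP-based sentiment analysis on citizen feedback.
# All feedback sentences and labels are invented; no real system is depicted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

feedback = [
    "The new online permit service saved me hours",
    "Waiting times at the municipal office are unacceptable",
    "Helpful staff and a quick response to my request",
    "The welfare portal keeps crashing and nobody answers",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF turns text into numeric features; logistic regression then learns
# which word patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(feedback, labels)

print(model.predict(["The office resolved my request quickly"]))
```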

Securing a good State use of ADM

While the operational domain of AI defined as ADM has been widely empowered by ML – in both the private and the public sector – less thought has been given, until recently, to its implications for individuals and society [Reis et al. (2019)]. The main concern is that ML-backed ADM may become an untamed driver of the decision-making capabilities of a government – in its political, executive, administrative and judiciary capacity. It is therefore appropriate to open this second research approach by recalling that all AI is built by humans: humans must therefore retain the power to shape and use ADM systems in a way that is just and trustworthy.

Against this principle, ML-based ADM puts important questions on the tables of governments, courts and public administrations. How can algorithms make lawful, fair and ethical decisions for the well-being of the entire society? How can humans govern the resulting hybrid system of public administration and spur the social processes necessary for AI adoption? How can algorithmic decisions be made explainable and compliant with the Rule of Law? How can the system be kept robust, safe, accountable and trustworthy? Can the bias inherent in statistical methods be balanced out? Can technical complexity coexist with transparency? Can the self-learning feature preserve legal certainty? As ML techniques keep advancing, the list of implications and issues generally grows longer. Governments must thus be ready and capable of accounting not only for the technical layer of AI briefly described above, but also for the “social layer” of AI.

To identify the key values of this layer, we may start from the assumption that the use of AI for ADM in the public sector is essentially an exercise of public power, because it aims to modify or interfere with the legal and socio-economic sphere of individuals and societies. That is, indeed, the very essence of any administrative decision, and the fact that the latter is taken by (or with the support of) a machine does not change this consequence. The 2020 and 2021 Automating Society Reports by AlgorithmWatch, for instance, confirm this point: take the “ISA Fiscal Reliability Index”, an Italian support-ADM system which helps detect tax evasion and classify a citizen’s fiscal reliability, with a clear impact on that citizen’s socio-economic sphere of rights [Chiusi et al. (2020), 153]; or the “SyRI System Risk Indication” deployed in the Netherlands to determine citizens’ potential abuses of the social welfare state, once again with direct legal and socio-economic consequences for any citizen flagged as non-compliant [Spielkamp et al. (2019), 101].

ADM use in the public sector proves, in conclusion, to be intrinsically disruptive, in a way that is not necessarily fair, just, accurate or appropriate. Therefore, as an exercise of public power, it must be regulated, restrained, informed and legitimised by a principle of substantive and procedural justice. Looking for workable proxies of this rather abstract concept – justice – five principles may serve governments and practitioners as the basis for a comprehensive and justice-oriented use of ADM: 1) respect for common ethics, legality and fundamental rights; 2) digital welfare and well-being; 3) good governance; 4) explainability; 5) trust and accountability. These are the five nodes of our social layer of AI.

Figure 2 – The five keystones of a “good” State use of AI

The governance issues

In the third strand of research, we focus specifically on the third pillar of the framework introduced above: good governance. From the Author’s study, it has emerged that good governance is challenged on multiple fronts. Three crucial ones are:

  1. governing the data and the other assets of an AI system;
  2. governing the adoption and the implementation of an AI system;
  3. governing the res publica through the implemented AI system.

On a deeper examination of the challenges affecting the second front – also known as the “public management of AI” or, simply, the “governance of AI” [Kuziemski & Misuraca (2020)] – mixed results emerge. The research runs through four families of key governance practices [GAO (2021), 25–36]: 1) defining clear roles, rules and responsibilities; 2) building up knowledge, literacy and internal capacity; 3) stakeholders’ management and involvement; 4) risk assessment, auditing and monitoring.

As for the first key governance practice, three shortcomings have emerged: private companies’ resistance to anticipatory regulation [UNHRC (2019), 13]; States’ early failures at problem formalisation [Passi & Barocas (2019)]; and the need for a renewed public procurement framework for ADM systems [McBride et al. (2021)]. As for the second key practice, the public sector shows a lack of internal capacity, especially in the form of: a deficit of human resources and expertise [US NSCAI (2021), 119–130; Molinari et al. (2021), 35]; excessive dependency on external consultants [Berryhill et al. (2019), 121]; difficulty in attracting and retaining highly demanded AI experts within public administrations [Wirtz et al. (2018), 602]; difficulty in assembling a diverse and multi-disciplinary workforce [UNHRC (2019)]; and decision-automation bias and decision-distrust bias [Leslie (2019)]. As for the third key practice, although the conclusions are still partial and evolving [CAHAI (2021), 19], the collaborative nature of public-sector administrations and the well-functioning of democracy itself have been somewhat weakened by the use of ADM [Campion et al. (2020); Mikhaylov et al. (2018)]; in addition, the sector shows concerning displays of “AI nationalism” [Franke (2021)] and is not yet backed by a solid and well-informed involvement of political power [Campion et al. (2021)]. The scholarship on the fourth key practice, finally, brings positive results on the side of qualitative risk assessments, but still raises profound questions as to the possibility of conducting quantitative assessments in conformity with a human-rights- and justice-based vision of AI in our society [Loi et al. (2021); Mökander (2021); GAO (2021)]; a minimal sketch of what such a quantitative check might look like follows below.
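
By way of illustration only, the following is a minimal sketch in Python of one possible quantitative check – the demographic parity gap over logged decisions. The column names, data and metric choice are assumptions made for this example, not a prescribed audit standard; real assessments combine several metrics with qualitative, context-aware review.

```python
# A minimal sketch of one quantitative fairness check: the demographic
# parity gap. Column names and data are invented for illustration; real
# audits combine several metrics with qualitative, context-aware review.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Spread between the highest and lowest positive-decision rates across groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log of an ADM system (1 = citizen flagged "at risk").
decisions = pd.DataFrame({
    "region": ["north", "north", "north", "south", "south", "south"],
    "flagged": [1, 0, 0, 1, 1, 1],
})

gap = demographic_parity_gap(decisions, "region", "flagged")
print(f"Demographic parity gap across regions: {gap:.2f}")  # 0.67 in this toy log
```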

Conclusions

The use of ADM in the public sector is widely disruptive. First, because it affects a wide range of public functions and responsibilities: from justice and social security to democracy, local administration, environment, healthcare, taxation, economic affairs and foreign policy [Dwivedi et al. (2021)]. Second, because it is currently spreading fast and worldwide, mostly driven by the promise of delivering rational outcomes in complex environments. Third, and as a consequence of the first two points, because it is fraught with critical implications for the social layer of AI that must be urgently addressed. In the specific context of public management, significant barriers and constraints have proven hard to overcome for a just and trustworthy adoption of ADM instruments: for example, a lack of dedicated resources or knowledge, poor data, scepticism, technical illiteracy and insufficient diversity. As the outcome of this research, these and many other public management issues have been grouped under four categories of key governance practices that will need to be mastered in the context of public-sector AI: 1) defining clear roles, rules and responsibilities; 2) building up knowledge, literacy and internal capacity; 3) stakeholders’ management and involvement; 4) risk assessment, auditing and monitoring.

Published under licence CC BY-NC-ND. 

This Blogpost was written by Salvatore Rocco.

Salvatore Rocco is a legal engineer with an Integrated MA in Law from Bocconi University and a strong interest in Technology and Digital Transformation. He graduated with honors, defending a thesis on the use of AI in the public sector, and won the Bocconi Law School Merit Award. Currently based in Amsterdam at Loyens & Loeff N.V., he works to automate, redesign and innovate the firm's legal services. He is a former member and president of B.S. Advocacy & Litigation and an Author at MediaLaws.eu.
