Person-Based Predictive Policing Under the AI Act

Seventy years ago, American writer Philip K. Dick published his science fiction novella The Minority Report, which depicts a future society where three mutants (‘precogs’), connected to a vast all-seeing machine, foresee all crimes before they occur. Based on their predictions, the police can arrest suspects before they commit any actual crime. With the advent of the AI Act, would future ‘precogs’ be restrained, forcing John Anderton to find a new line of work instead of leading the Precrime Division?

An article by Vahur Verte

John Snow famously mapped the clusters of cholera cases in the London epidemic of 1854 (the Broad Street cholera outbreak) and tied the outbreak to a contaminated water pump. His map was one of the first of its kind, and it changed epidemiology as we know it. More than a century later, a similar paradigm shift took place in policing, where advancements in information technology in the 1960s and 1970s helped the police shift from reactive to preemptive and predictive policing.

It must be noted that the term “predictive policing” is not defined in law. However, three types of predictive policing can be distinguished, based on the type of predictions the underlying models are able to make:

(1) area-based policing, i.e. predicting the time and place in which crimes are more likely to occur,

(2) event-based policing, predicting the type of crime that is more likely to occur, and

(3) person-based policing, predicting the individual who is most likely to commit a criminal act.

The importance of AI technology in enabling law enforcement agencies (LEAs) to identify and counter criminal activity has also been acknowledged by the European Parliament in the AIDA report. The Parliament recognized that AI technology has inherent potential to empower LEAs to be more effective.

However, empowering LEAs is a double-edged sword. The past couple of years, with the implementation of strict Covid-19 restrictions and the democratization of narrow AI in the form of large language models, have reminded us that an Orwellian dystopia might not be so far off. The rise of AI-driven surveillance systems, e.g. Palantir’s Gotham platform, has raised questions about predictive policing systems potentially infringing on the privacy of individuals who are not suspected of any criminal activity. For example, last February, the German Federal Constitutional Court ruled that the use of Palantir platforms in Hesse and Hamburg was unconstitutional, as it violated the right to informational self-determination. This makes the recently agreed-upon AI Act all the more relevant.

Status quo in regulating automated decision-making by law enforcement

Person-based predictive policing rests on the insight that an individual’s negative social network, like environmental vulnerabilities, can encourage criminal activity. It uses big data capabilities to develop predictive profiles of individuals based on past criminal activity, current associations, and other factors that correlate with criminal propensity. It is already being implemented in cities like London, Amsterdam, Chicago, and New York.
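To make the mechanics concrete, the following is a deliberately simplified, purely hypothetical sketch (in Python) of how such a person-based risk profile could be computed from a handful of features. The features, weights, and cut-off logic are invented for illustration and do not reflect any real predictive policing system.

```python
# Purely illustrative toy sketch of a person-based risk profile.
# Features, weights, and thresholds are hypothetical and do not
# reflect any real predictive policing system.
from dataclasses import dataclass
from math import exp

@dataclass
class PersonRecord:
    prior_arrests: int         # past criminal activity
    known_associates: int      # size of a "negative" social network
    neighbourhood_rate: float  # local crime rate per 1,000 residents

def risk_score(p: PersonRecord) -> float:
    """Map a record to a 0-1 'propensity' score via a logistic function (weights are invented)."""
    z = 0.4 * p.prior_arrests + 0.3 * p.known_associates + 0.02 * p.neighbourhood_rate - 2.0
    return 1 / (1 + exp(-z))

person = PersonRecord(prior_arrests=2, known_associates=3, neighbourhood_rate=45.0)
print(f"risk score: {risk_score(person):.2f}")  # a system might flag anyone above some cut-off
```

Even this toy version illustrates the problem the rest of this post grapples with: the score is produced mechanically, and nothing in the computation itself tells the officer whether acting on it would be lawful or fair.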

Currently, the use of AI systems in law enforcement is already regulated to some degree by the GDPR and by the Law Enforcement Directive (LED). For example, Article 22(1) of the GDPR prohibits decisions based solely on automated processing, including profiling, which produce legal or similarly significant effects. The LED is more liberal: Article 11(1) of the Directive prohibits automated individual decision-making, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her, unless it is authorised by Union or Member State law.

It must be noted that both the GDPR and the LED leave the Union and the Member States flexibility in allowing automated individual decision-making, including profiling. However, the resulting regulatory differences between Member States might ultimately have negative implications. This has also been acknowledged by the European Parliament and the European Commission.

In spe: future regulation of person-based predictive policing

The latter recognized that, in the absence of a common European framework on AI, several Member States (e.g. Germany, Denmark, Malta) are already exploring options for national legislation to address the challenges created by AI. The Commission stated in 2020 that if the EU fails to provide an EU-wide approach, there is a real risk of fragmentation in the internal market, which would undermine the objectives of trust, legal certainty and market uptake.

The European Parliament, which also recognized that the use of AI in law enforcement must be regulated because the potential infringements are extensive, adopted a resolution on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2020/2016(INI)). In it, Parliament acknowledged that some Member States make more use of AI applications, or embedded AI systems, in law enforcement than others, partly because of a lack of regulation and regulatory differences which enable or prohibit AI use for certain purposes. It stressed that all AI solutions for law enforcement and the judiciary need to fully respect the principles of human dignity, non-discrimination, freedom of movement, the presumption of innocence and the rights of the defence, including the right to silence, freedom of expression and information, freedom of assembly and of association, equality before the law, the principle of equality of arms and the right to an effective remedy and a fair trial, in accordance with the Charter and the ECHR. It also stressed that the use of AI applications must be prohibited when incompatible with fundamental rights, and that no AI system used by LEAs should be enabled to harm the physical integrity of human beings, nor to distribute rights or impose legal obligations on individuals.

Ethical risks related to predictive policing addressed by the AI Act

The AI Act provides a comprehensive regulation governing the development and use of AI in the EU, and focuses on ensuring that AI is used in a safe, ethical, and beneficial way. Considering the ethical risks related to predictive policing is essential, as the development of AI is accelerating rapidly and unregulated AI carries various risks, e.g. data bias, cybersecurity threats, weaponization, privacy violations, social manipulation, increased inequality, and environmental damage.

An example from the US shows what can go wrong when autonomous systems are deployed hastily, without considering the broader ethical, legal, and social issues. The COMPAS algorithm, intended to predict recidivism, showed inherent bias against Black defendants, flagging them as future criminals at a higher rate than white defendants.
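For illustration, the kind of disparity reported for COMPAS can be expressed as a difference in false positive rates between groups, i.e. how often people who did not reoffend were nevertheless flagged as high risk. The following minimal sketch uses made-up records to show how such a check might be computed; it is not the actual COMPAS methodology or data.

```python
# Minimal sketch of the fairness check behind the COMPAS criticism:
# compare false positive rates (non-reoffenders wrongly flagged as
# high risk) across groups. The records below are made up for illustration.
from collections import defaultdict

# (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

def false_positive_rates(rows):
    flagged = defaultdict(int)    # non-reoffenders flagged as high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))  # markedly unequal rates signal the kind of bias found in COMPAS
```

Unequal false positive rates across groups are exactly the kind of signal that deployers and regulators would need to look for before, not after, such systems are put into use.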

The AI Act addresses these ethical questions and recognizes that law enforcement’s use of certain AI systems often involves a significant degree of power imbalance, potentially leading to surveillance, arrests, and other infringements of fundamental rights. Recital 38 acknowledges that these risks are intensified when AI systems lack high-quality data, accuracy, robustness, or thorough pre-market testing, potentially causing discriminatory or otherwise unjust actions. If such systems are opaque, unexplainable, or undocumented, they may also hinder procedural rights such as the right to a fair trial, the right to an effective remedy, and the presumption of innocence. Given these concerns, and in order to mitigate the risks, the Act considers it appropriate to classify a range of AI systems used in law enforcement contexts as high-risk, so as to ensure accuracy, reliability, transparency, public trust, accountability, and effective redress. High-risk AI systems notably include those used for individual risk assessments, emotional state detection, evidence reliability evaluation, criminal offence prediction, and personality or behaviour profiling.

Predictive policing as high-risk AI use case under the AI Act: the need for human input and oversight

Article 5(1)(da) of the AI Act prohibits predictive policing based solely on the profiling of a natural person or on assessing their personality traits and characteristics. Hence, what other ingredients, in addition to AI, are needed to make predictive policing palatable under the AI Act?

The AI Act sets certain boundaries for LEAs regarding the use of AI systems, including for predictive policing purposes, and classifies such systems as high-risk AI systems under Article 6(2). Annex III contains three entries in total that come into consideration when AI technology is used for predictive policing purposes.

First, A.III.6(a) covers AI systems intended to be used by LEAs, or on their behalf, to assess the risk of a natural person becoming a potential victim of criminal offences.

Second, A.III.6(e) covers AI systems intended to be used by LEAs, or on their behalf, to assess the risk of a natural person offending or re-offending, not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of the LED, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups.

Third, A.III.6(f) covers AI systems intended to be used by LEAs, or on their behalf, for the profiling of natural persons, as referred to in Article 3(4) of the LED, in the course of the detection, investigation, or prosecution of criminal offences.

Sunde argued that even though A.III.6(a) and A.III.6(e) use the term “predict” and A.III.6(f) uses the term “profile”, all three paragraphs describe tasks, and indeed objectives, for the AI system to perform, and therefore, despite the varied terminology, they all concern profiling. Moreover, as such profiling involves an individual risk assessment of natural persons and the assessment of personality traits and characteristics or past criminal behaviour, the processing of personal data is a necessity. Under Article 11 of the LED, however, such processing requires human assessment as a necessary component, as it is forbidden to make decisions exclusively on the basis of automated processing of personal data. In addition to reducing ethical risks, human assessment is necessary because the technology is still far from perfect: a recently published analysis revealed that less than 0.5% of the 23,631 crime predictions generated by Geolitica (formerly PredPol) for Plainfield, New Jersey, accurately matched reported crimes, highlighting significant discrepancies in the AI’s effectiveness.

Neither the AI Act nor the LED regulates how such human assessment should be carried out. However, the European Data Protection Board (EDPB) has adopted guidelines on the use of facial recognition technology (FRT) in the area of law enforcement. They recommend that LEA officers should always review the results of the FRT. In addition, they emphasize that continuous training and awareness raising for end users are important to prevent automation bias. They also stress the importance of access to an adequate level of human resources to perform effective oversight, ensuring that each agent has enough time to critically challenge the results of the technology. It is paramount to record, measure, and assess the extent to which human oversight changes the FRT system’s original decision. The guidelines further highlight the importance of monitoring and addressing FRT model drift (performance degradation) once the model is in production, as well as of establishing a process to regularly reassess the risks and security measures, especially when the technology or the use case changes. These recommendations are vital. However, they fail to address the ‘black box’ problem and place a lot of weight on the shoulders of LEA officers. When using AI for FRT, it might be feasible to review the results, but it would be far more problematic to ask LEA officers to fully and adequately assess the workings of person-based predictive policing tools.
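As a purely illustrative sketch of what “recording, measuring, and assessing” human oversight could look like in practice, the snippet below computes two hypothetical metrics: the rate at which reviewers override the system’s output, and a crude drift signal comparing recent accuracy against a baseline. The data structures, field names, and thresholds are assumptions made for this example, not anything prescribed by the EDPB guidelines or the AI Act.

```python
# Hedged sketch of two oversight metrics discussed above:
# (1) how often a human reviewer overrides the system's output, and
# (2) a crude drift signal comparing recent accuracy to a baseline.
# Field names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    system_output: bool      # e.g. the FRT reported a match
    human_decision: bool     # the reviewer's final call
    confirmed_correct: bool  # later ground truth, where available

def override_rate(decisions: list[ReviewedDecision]) -> float:
    """Share of cases where the reviewer changed the system's output."""
    if not decisions:
        return 0.0
    overridden = sum(d.system_output != d.human_decision for d in decisions)
    return overridden / len(decisions)

def drift_alert(decisions: list[ReviewedDecision], baseline_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag possible model drift if recent accuracy drops below the baseline."""
    if not decisions:
        return False
    accuracy = sum(d.confirmed_correct for d in decisions) / len(decisions)
    return accuracy < baseline_accuracy - tolerance

log = [
    ReviewedDecision(True, True, True),
    ReviewedDecision(True, False, False),   # the human overrides a false match
    ReviewedDecision(False, False, True),
]
print(f"override rate: {override_rate(log):.0%}")
print(f"drift alert:   {drift_alert(log, baseline_accuracy=0.9)}")
```

Simple metrics like these would at least make the extent of human involvement measurable, although, as noted above, they do not solve the underlying ‘black box’ problem.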

In conclusion, the AI Act seems to prohibit predictive policing that lacks a human assessment component. However, the question remains whether the human using the above-mentioned AI systems is genuinely able and willing to incorporate their human judgment into the process. Moreover, what role will the black-box effect and/or confirmation bias play in this delicate interplay between the human assessor and the AI system? Unfortunately, these are questions to which the AI Act does not provide any answers.

Published under licence CC BY-NC-ND. 

This blog post was written by Vahur Verte.

    Vahur Verte is a state prosecutor in Tallinn, Estonia, specializing in prosecuting cyber-dependent crimes. He is currently studying at the University of Tartu in the Information Technology Law programme.
