AI, according to a widespread narrative, knows no borders. But that is not true. The way AI is developed and used is increasingly shaped by diverging legal rules. And these rules, unlike the technologies they govern, are bound by jurisdictional borders. This post addresses one of these borders under the AI-Act.
An article by Benedikt Bartylla
Art. 2 para. 1 AI-Act governs the Regulation's international scope. There are three points of contact that trigger obligations under the AI-Act. Art. 2 para. 1(a) and (b) AI-Act mirror traditional rules of product safety regulation, covering all AI-systems and GPAI-models placed on the market or put into service in the EU and all deployers established or located in the EU. Art. 2 para. 1(c) AI-Act, however, covers AI-systems (but not GPAI-models) that are not placed on the market or put into service in the EU, where, instead, “the output produced by the AI system is used in the Union”. This is a novel rule, and a far-reaching one, too. Essentially, it is a kind of ‘long-arm statute’ of European AI-regulation. In this blogpost, I aim to answer the question: What exactly does it mean?
AI-outsourcing and AI-offshoring
Recital 22 gives some insight into the thinking behind the rule. The EU legislator is worried that an ‘AI system used in a third country […] could process data […] transferred from the Union and provide to the contracting operator in the Union the output of that AI system […]’. I call this ‘AI-outsourcing’. In this scenario, the long-arm statute requires a hypothetical: An output is used in the EU if supplying an AI-system to the person on the receiving end of the outsourcing process would constitute a ‘placing on the market’ of an AI-system ‘in the EU’ (Art. 2 para. 1(a) AI-Act).
The more intricate problem is what I call AI-offshoring. In this case, an AI-system is used directly by the person interested in the output, but in a third country. Think of a business based in a third country that uses an AI-system in its HQ abroad to evaluate applications for a position based in the EU. Or think of a third-country dating app with users in the EU that uses a prohibited biometric categorisation system (Art. 5 para. 1(g) AI-Act) in its matching algorithm.
An intuitive solution to the problem of AI-offshoring is to look at the risks: If the activity in question poses a risk for a person located in the EU, then the output is used in the EU. I argue, however, that this is not the right test in most cases. Instead, I propose the following three-pronged test:
AI-systems regulating a physical environment
The first prong is based on the language of the AI-Act. The output-rule refers to the place where the output is used. This requires a literal interpretation in cases where the AI-system is used to regulate the physical environment in a specific place. Think of AI-systems that regulate temperatures in server farms (which can be high-risk systems under Annex III No. 2). Such systems can create risks for EU persons even when deployed in third countries, e.g. if the third-country server farm qualifies as critical digital infrastructure for EU users. Yet, under the language of the AI-Act, outputs of such systems are used only where the physical environment is regulated.
Other sources of EU law
The second prong covers a broader set of cases. Many of the activities regulated under the AI-Act are already covered by other sources of EU law. Credit rating, for example, is part of an activity regulated under EU banking regulation. Employment decisions are regulated under EU anti-discrimination law. And, most importantly, many of the activities involve the processing of personal data regulated under the GDPR. These other sources of secondary law have their own international scope, e.g. in Art. 3 GDPR. These rules, I argue, also define when an output is used in the EU: If the activity in question is covered by a different source of EU law that addresses essentially similar risks, an output is (only) used in the EU if the activity falls within the international scope of that source of EU law.
There are three reasons why I think this should be the rule. First, this interpretation gives effect to the policy objectives underlying those other sources of EU law. The international scope of the GDPR, for example, does not focus strictly on the person whose data is being processed. Instead, it requires that the processor ‘envisages offering services to data subjects in one or more Member States in the Union’ (recital 23). The same is true in banking regulation, where relationships initiated solely by the customer are not covered by EU law (reverse solicitation, Art. 21c para. 2 sub. 1(a) CRD IV). In these cases, the EU legislator has provided nuanced rules that balance the risks for EU citizens on the one hand against the interest in facilitating cross-border activities on the other. There is no evidence that the AI-Act is meant to disturb this balance. The AI-Act should therefore be read in harmony with those rules.
Second, the AI-Act itself supports this reading, though this requires some context. Art. 2 para. 1(g) AI-Act provides that the Regulation applies to ‘affected persons that are located in the Union’. This rule was introduced by the EP (Amendment 149). In the EP’s text, the rule was expressly limited to cases in which an AI-system was placed on the market or put into service in the EU. In other words, the rule was expressly not an additional point of contact. In the final version, the language has changed. There is no evidence, however, that the policy has changed. On the contrary, the rule remains strikingly different from the other contact-rules: Art. 2 para. 1(a) and (c) AI-Act expressly cover providers based in third countries – Art. 2 para. 1(g) AI-Act does not. Art. 2 para. 1(g) AI-Act should therefore still be read in the way it was proposed by the EP. In that case, however, focusing on the person affected under Art. 2 para. 1(c) AI-Act would directly undermine the decision not to make Art. 2 para. 1(g) AI-Act an additional point of contact – so we need to look somewhere else.
Third, using other sources of EU law makes the output-rule more foreseeable. Under the AI-Act, one and the same phrase governs a wide variety of activities. Without further guidance, this rule could create major uncertainties for cross-border activities.
The affected person
Only as a third prong should we focus on the person affected. This prong applies only to the limited set of cases not covered by the first two, e.g. AI-systems under Art. 50 AI-Act or systems used in arbitration (Annex III No. 8(a)). In these cases, the output is used where the person affected is located. This will yield a (very) broad application of EU law.
The question of intent
Careful readers of the recitals might feel that this test is missing a fourth prong: intent. Indeed, recital 22 limits the output-rule to cases in which the output is ‘intended to be used in the EU’. Originally, this was part of an operative provision introduced by the EP, but it was, for whatever reason, relegated to a recital in the trilogue negotiations. The issue with intent is that, under the test proposed here, it does not make much of a practical difference. We cannot require intent on the part of the provider, because the provider generally does not care what happens with the output. So it must be the intent of the deployer or the non-professional user. In some cases, the second prong will already require the deployer to have some kind of intent (remember GDPR recital 23). In all other cases, the intent-requirement of recital 22 can be used. I doubt, however, that there will be many cases in which a deployer or user does not at least know that they are using an output in the EU (which, I think, should suffice).
Primary law limits
There is, however, one more restriction to be made. The principle of proportionality under Art. 52 para. 1 EUCFR requires, in my opinion, that providers be cut some slack: The output-rule is triggered not by the provider, but by the person using the output – the consequences, however, are borne primarily by the provider. In other words, the output-rule makes providers liable for actions they did not take. This, I believe, crosses the line of proportionality if the provider has done everything reasonable to prevent outputs produced by their systems from being used in the EU (e.g. by implementing system-warnings).
Outlook
The output-rule of the AI-Act is vague and potentially far-reaching. The three-pronged test and additional restrictions presented here bring some clarity and nuance to the rule – but it is hard interpretative work to get there. Providers will need reliable guidance and a harmonised understanding of the rule. Until then, the output-rule will produce regulatory risks all over the world.
This post is based on a recent book chapter covering (in more detail) the international scope of the AI-Act, published (in German) as ‘Der Internationale Anwendungsbereich des AI-Acts – Ein Schritt zu weit?’ in: Dregelies/Henke/Kumkar (eds.), Artificial Intelligence: Rechtsfragen und Regulierung künstlicher Intelligenz im Europäischen Binnenmarkt, 9. Tagung GRUR Junge Wissenschaft, Nomos (available here). The book chapter also addresses the interpretation of Art. 2 para. 1(a) AI-Act, the international scope of GPAI-regulation and possible primary law challenges to Art. 2 para. 1(c) AI-Act.
Published under licence CC BY-NC-ND.