Artificial Intelligence (AI) can collect Big Data on a user without being perceived. It can identify the user's cognitive profile and manipulate them into predetermined choices by exploiting their cognitive biases and decision-making vulnerabilities. A Large Generative Artificial Intelligence Model (LGAIM) can enhance the power of computational manipulation (CM). The EU has taken the first steps towards protecting individuals against this technology. However, some forms of CM still do not fall under the shield of EU law. Therefore, existing and future EU legislation needs to be redrafted to cover every aspect of CM.

An article by Stefano Faraoni

What if Artificial Intelligence (AI) were able to create the perfect persuasive environment? What if, like Cyrano de Bergerac, pretending to be Christian and protected by the shadows of the night, an AI could find the right words at the right time and in the right atmosphere to appeal to the heart of Roxane? Knowing Roxane and being in love with her, Cyrano pulled the strings of Roxane's emotions to induce new attitudes and behaviours in her: a feeling of love towards Christian. Cyrano misrepresented reality, pretending to be Christian, and won Roxane's trust and love.

What if Cyrano's persuasive abilities were embedded in AI technology, implemented and used for manipulation?

A Computational Manipulation Perspective on Generative AI

Recently, users have experienced the many possible applications of a Large Generative Artificial Intelligence Model (LGAIM), such as ChatGPT: the sensation of interacting with a human being while chatting with a chatbot; images in the style of Picasso alongside a piece of music that sounds as if Mozart wrote it; words of love, like those of Cyrano, in the perfect speech for the loved one. A generative AI is useful, interesting, helpful, often funny and creative.

However, an AI-led manipulative system is not Cyrano de Bergerac.

An AI can acquire Big Data on a user, Roxane, and attempt to manipulate her, as Cyrano did, but it has computational power and no good intentions. The data collected on the user will not simply be the basic preferences or similar information that Cyrano could acquire. The AI system will have access to an unprecedented amount of data on the target (Roxane): personal preferences such as favourite flowers, colours, places, music, books and political ideas; the number and cause of termination of previous relationships; and any other information that can be used to induce positive feelings in Roxane towards a potential partner.

A manipulative AI system might not use deception as Cyrano did. It can rely on a more subtle manipulative process.

The AI system can identify cognitive biases, routes to Roxane's instinctual brain (what Kahneman calls System 1), such as whether she conforms to the opinion of others, or whether she tends to do what her parents would like her to do, or to oppose them. A manipulative AI system can use what Sunstein calls a nudge: a choice architecture based on cognitive biases that can predictably modify people's behaviour without forbidding any options or significantly changing their economic incentives (such as putting the salad in front of the sweets so that, relying on the availability bias, individuals choose the salad because it is easier to reach). When nudges are applied and used by an AI, their power increases, giving rise to what Yeung calls the hypernudge: a technology able to collect a vast amount of data on single users and find links between data items not otherwise observable, channelling the user's responses and decisions in pre-chosen directions, according to and adapting to the user's profile.

A manipulative AI can use dark patterns, defined by the OECD as "(…) business practices employing elements of digital choice architecture, particularly in online user interfaces, that subvert or impair consumer autonomy, decision-making or choice." The power of an AI can transform a dark pattern into a second-generation one which, via the use of Big Data and machine learning, can target individual preferences, connecting a set of dark patterns to the hypernudge and to personalisation. Using generative AI, a manipulative system can generate the perfect messages with the perfect words, images, colours and music on the perfect website, according to the profile of the target. The unperceived AI system will constantly change and adapt to Roxane's cognitive profile until the target is achieved, using her vulnerabilities in decision-making against her.

The messages created by a manipulative generative AI are not used, as Cyrano's were, to win the beloved's love. A computational manipulative AI can change and shape any attitude and behaviour, from product preferences to political ideas. This technology can undermine individual autonomy and mental self-determination, the right to form one's own thoughts.

The EU Approach to Computational Manipulation (CM) and Its Limits

The EU has recognised the existence of CM and expressly stated, in various existing and ongoing pieces of legislation, that it can subvert or impair user autonomy, decision-making or choice. The Guidance on the interpretation of the Unfair Commercial Practices Directive (UCPD) refers to dark patterns. The EU mentioned manipulation and dark patterns in the Digital Services Act (DSA) and the Digital Markets Act (DMA). In the last version of the AI Act (AIA) approved by the European Parliament in June 2023, the EU directly addressed CM, treating it in the first normative article (Article 5) as an unacceptable risk.

The EU approach to CM, however, cannot protect individuals from every interference in their decision-making processes perpetrated by a manipulative AI-driven system, specifically where a generative AI is used in the manipulative process. The existing and future legislation has shortcomings, creates confusion and leaves blind spots. Some of these are significant.

The concept of subliminal manipulation under Art. 5(1)(a) is undefined. It is not clear whether it refers to the famous story of James Vicary, who, in 1957, held a press conference claiming to have flashed subliminal commercials ("Drink Coca-Cola" and "Eat Popcorn") during motion pictures, which is reported to have led to unusually high sales of Coca-Cola and popcorn. If so, the kind of CM able to affect System 1 is not subliminal.

Art. 5(1)(a) does not define manipulation in general. However, it links manipulation to information, referring to the need to impair the ability to make an informed decision. Therefore, it leaves outside its scope the kind of CM that does not rely on false or misleading information but still affects System 1, the primordial brain.

Nor does the Article apply to CM that causes non-significant harm, leaving every ordinary transaction outside its scope.

The most crucial observation is that Art. 5(1)(a) refers to ex-post requirements. Specifically, it refers to the "effect" of distorting a behaviour, the causation of a decision and the presence of significant harm. The Article arguably requires not merely the potential presence of these requirements but their actual presence. It is unclear, for example, how the EU intends to determine, ex ante, that a manipulative AI can cause not only general harm but also significant harm. If the technology is to be prohibited, it can be identified and scrutinised only after the technology has been used, the manipulation has occurred, and significant harm is already present. Therefore, the manipulative technology will likely be created and used, with the possibility of considering only ex post whether manipulation has already occurred. A prohibition should remove the possibility of using AI for manipulation ex ante, not require waiting for its existence to identify it ex post. The current formulation is equivalent to stating that, to understand whether a gun is prohibited, it is necessary to wait until it is used to shoot an individual, considering at that point whether the harm caused is significant.

Furthermore, the legislation referred to in the AIA, which, in the EU's opinion, should protect against different kinds of manipulation, has other shortcomings. The DSA and the DMA leave outside their scope websites, small platforms and small businesses that use CM. Art. 25 of the DSA does not apply to what is already protected by the GDPR and the UCPD. The GDPR might be overcome by formal consent: consent given to a privacy policy that is formally correct but that the user has not actually read.

The UCPD and its Guidance leave outside their scope non-consumers and, in any case, the kind of CM that relies on System 1 in the absence of false information.

The existing and future EU legislation should be reconsidered.

Generative AI is not Cyrano de Bergerac. CM does not use just false information, as Cyrano did. It relies on a more subtle process. Via an LGAIM, it can covertly identify and exploit routes to the target's (Roxane's) primordial brain that are not subliminal, do not need false information and do not cause relevant harm, but still impact the right to self-determine one's thoughts through second-generation dark patterns and the hypernudge.

Therefore, the existing and future EU legislation should be reconsidered so as to cover what is today left outside in a game of chess of different overlapping provisions against CM, a game otherwise destined to be lost.

Published under licence CC BY-NC-ND. 

Written by: Stefano Faraoni

Stefano Faraoni has been a practising lawyer (Avvocato) registered with the Turin Bar since 2006 and is admitted to represent his clients before the Corte di Cassazione (Supreme Court). He is a member of the Turin Bar Commission on Artificial Intelligence, an Adjunct Professor in Legal English at the University of Turin (European Legal Studies, Law Department) and, since 2021, a Postgraduate Researcher at the University of York (UK), Law Department. His research focuses on how contract law should regulate Manipulative Persuasive Technology led by an Artificial Intelligence system.