Commercial manipulation in the time of AI: the role of fair trading law?

Artificial Intelligence is fundamentally reshaping manipulative marketing and prompting a discussion on the future role of fair trading law. How should the latter tackle manipulation when it becomes a systematic operation of behavioural modification of consumers through machine-driven experimentation?

From “Know Your Customer!”

As we all know, the traditional mantra of marketing is “Know Your Customer” (KYC). If you know what your customers like, prefer and need, you can tailor your offer to their tastes so as to increase the likelihood that they will accept your deal. The idea behind KYC assumes that consumers’ preferences somehow pre-exist the marketing interaction. Once you get to know them, there you have it! Your returns on advertising will go up.

Big Data, machine-powered analytics and personalised marketing have taken the rhetoric of KYC to its most extreme consequences. The growing volume of consumer data, which is becoming ever more granular and contextual in the world of “ubiquitous computing”, is converging with ever more refined pattern-extraction techniques. This creates a previously unseen amount of actionable knowledge for marketers, who are now able to benefit from it and adapt their commercial communication to meet consumers’ specific preferences and needs in real time.

Are Big Data and personalisation changing the nature of marketing? I don’t think so. Personalisation has always existed to a certain extent and always will. Of course, we could make an argument of scale: the more businesses know about customers, the more personalised the offer, and thus the greater the ability to influence consumers. But preferences and behaviours remain under the control of consumers. The commercial logic underlying data-driven personalisation is still that of “Know Your Customer”, whereby knowledge is easier to obtain and personalisation more granular, hence marketing more efficient and lucrative.

To “Make Them Click!”

There is something else on the move that is fundamentally changing marketing, and which I would like to raise awareness about. It is what I call “machine-driven experimentation”, i.e. the capacity of tech companies to perform automated, instantaneous and continuous experiments on consumers’ behaviour and to adapt commercial decisions accordingly. Let me explain.

In a famous lecture at Berkeley on how computer-mediated transactions are revolutionising contracts and trade, the former Chief Economist at Google, Hal Varian, endorses and celebrates the novel ability of businesses to run continuous experiments on consumers’ behaviour to optimise conversion. For example, if you want your customer to buy a product, you just need to establish a dynamic of stimulus and response. Target the customer with an ad for that product. See the result. Did they click? No? Target the same customer with another ad for the same product. Nothing? Keep tuning the advertising you serve to the customer until they click.
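
To make this logic concrete, below is a minimal, purely illustrative Python sketch of such a stimulus-and-response loop. The functions show_ad and observe_click are hypothetical placeholders standing in for a real ad-serving and click-logging pipeline; the sketch does not reproduce any actual system Varian describes.

```python
import random

# Hypothetical ad variants for the same product (illustrative only).
AD_VARIANTS = ["discount_banner", "scarcity_message", "celebrity_endorsement"]


def show_ad(customer_id: str, variant: str) -> None:
    """Placeholder: deliver the chosen ad variant to the customer."""
    print(f"Showing '{variant}' to customer {customer_id}")


def observe_click(customer_id: str, variant: str) -> bool:
    """Placeholder: in a real system this would be the logged click event.
    Here the customer's response is simply simulated at random."""
    return random.random() < 0.2


def retarget_until_click(customer_id: str, max_attempts: int = 10) -> bool:
    """Keep tuning the ad shown to one customer until they click (or we give up)."""
    for attempt in range(max_attempts):
        variant = AD_VARIANTS[attempt % len(AD_VARIANTS)]
        show_ad(customer_id, variant)
        if observe_click(customer_id, variant):
            return True  # goal reached: the customer clicked
    return False


retarget_until_click("customer_42")
```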

Machine-driven experimentation is pretty much the idea underlying emerging marketing applications based on Reinforcement Learning (RL). RL is one of the most recent approaches in machine learning: it teaches algorithms to interact with a highly dynamic environment and to receive rewards for performing the right actions until they successfully meet their goal. Initially, the algorithm’s performance is generally sub-optimal, but as it sees more data it learns how to decide from the reward/punishment function, and its predictive power increases over time until its choices become close to certain. Imagine this feedback loop continuously running every single time you interact with an algorithm that is trying to sell you something. The algorithm makes a move, studies your response and adjusts its behaviour in relation to yours, keeping in mind its long-term goal: “Make Them Click” (MTC).
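
To give a flavour of how such a feedback loop learns, here is a toy epsilon-greedy bandit in Python: it repeatedly picks an ad variant, receives a reward (click or no click) and updates its estimate of which variant works best. This is a deliberately simplified sketch, not the proprietary RL systems deployed in practice, and simulate_user_response is a made-up stand-in for real consumer behaviour.

```python
import random

ARMS = ["variant_A", "variant_B", "variant_C"]  # candidate ads for one consumer


def simulate_user_response(arm: str) -> float:
    """Hypothetical environment: returns 1.0 on a click, 0.0 otherwise.
    In a deployed system this would be the consumer's real behaviour."""
    click_prob = {"variant_A": 0.05, "variant_B": 0.12, "variant_C": 0.30}[arm]
    return 1.0 if random.random() < click_prob else 0.0


def epsilon_greedy(rounds: int = 1000, epsilon: float = 0.1) -> dict:
    """Learn which ad earns the most clicks through trial, reward and adjustment."""
    counts = {arm: 0 for arm in ARMS}
    values = {arm: 0.0 for arm in ARMS}  # running mean reward per arm
    for _ in range(rounds):
        if random.random() < epsilon:      # explore: occasionally try something new
            arm = random.choice(ARMS)
        else:                              # exploit: reuse what has worked before
            arm = max(values, key=values.get)
        reward = simulate_user_response(arm)  # stimulus -> response -> reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values


print(epsilon_greedy())
```

Real systems, of course, condition these choices on rich behavioural profiles and run continuously, but the basic reward loop of trying, observing and adjusting is the same.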

In a scenario where automated and instant experimentation takes place anywhere and at any time, manipulation becomes a structural feature of the interaction between consumers and traders that is mediated by intelligent algorithms. Consumer preferences and behaviours no longer pre-exist the interaction; they are engineered by the very business through the algorithm. In her book The Age of Surveillance Capitalism, Zuboff calls this “the reality business”, a commercial logic centred on the “knowledge about real-time behaviour that creates opportunities to intervene in and modify behaviour for profit”.

What will the stance of fair trading law be?

Against this background, we need to discuss what the role of fair trading law should be, if there should be any.

Traditional fair trading policy strives for a market in which individuals are free to make what they believe is the best choice for them. That is why traders must not unduly influence consumers by giving them misleading information or by exerting any form of pressure. Doing so would distort consumer decision-making, with the risk of fostering unfair behaviour between competitors and unfair competition in the market. Yet this understanding of consumer manipulation rests on the assumption that consumers are in full control of their behaviours, their preferences and their needs.

How should fair trading law tackle manipulation when it becomes a systematic operation of behavioural modification of consumers through machine-driven experimentation? How can consumers be protected when they are no longer the “ends” around which marketers organise their commercial strategies, but become the “means” through which marketers achieve their own goals, even at the cost of bending consumers’ behaviour? How can we protect the autonomy and self-determination of consumers in such a scenario? These are all crucial questions that we must start asking ourselves, even if they might lead us to reconsider some of the key elements of market law.

I already see some steps forward. Last April, a bill called the “Deceptive Experiences To Online Users Reduction” (“DETOUR”) Act was introduced in the US Senate. If passed, this bill would require large online platforms to routinely disclose to their users and to the public any experiments the users were subjected to for the purpose of promoting engagement or product conversion. Each large online operator would also have to establish an independent review board to review and approve experiments in advance. The Federal Trade Commission, whose general task is to prevent unfair or deceptive acts that affect trade, would be competent to ensure compliance under the power and enforcement rules of the Federal Trade Commission Act.

Admittedly, such a proposal may help enhance consumers’ autonomy, not so much because of the new set of valuable information consumers would possess and possibly use, but because of the increased and deeper scrutiny by external authorities of the market behaviour of large tech companies. In any case, the initiative makes one point clear: institutional secrecy around algorithms and machine-driven experimentation can no longer be sustained if we want to ensure fair marketing relationships.

On the other hand, it remains to be seen whether the EU too will decide to stride along this path. In the meantime, we already have strong fair marketing legislation in force, above all Directive 2005/29/EC on unfair commercial practices, which prohibits business practices that are contrary to professional diligence and likely to distort the economic behaviour of consumers. Given its open-textured formulation and technology-neutral approach, the Directive might already represent a good point of departure for facing new forms of manipulation. However, some of its basic concepts might need to be organically reviewed and updated to be fit for the time of technology-driven behavioural modification.

Published under licence CC BY-NC-ND.

 

This Blogpost was written by

Author

  • Federico Galli

    Federico has studied law in Bologna. He is currently a Last-JD PhD student and researcher at the Interdepartmental Centre for Research in the History, Philosophy, and Sociology of Law and in Computer Science and Law (CIRSFID) at the University of Bologna and at the Computer Science Department at the University of Luxembourg. This article represents a brief summary of one of the sections of the author’s ongoing PhD research, titled “Algorithm-mediated commercial practices and the challenges to the duty to trade fairly”. The latter is being carried out within the interdisciplinary Doctoral Programme in Law, Science and Technology, under a cotutelle agreement between School of Law at the University of Bologna and Computer Science Department at the University of Luxembourg.
