This post looks at some of the lessons we’ve learned over the last year in our work as part of the ARTSY Project (ARTificial intelligence SYstems and consumer law & policy). The project was undertaken at the European University Institute in Florence, Italy, thanks to the support of the Ethics and Governance of AI Initiative at the Berkman Klein Center for Internet and Society.
Recent years have seen many successful implementations of AI-powered tools in the private sector. Advancements in machine translation and self-driving cars, behaviour prediction and targeted advertising, not to mention Google Assistant’s ability to make phone calls imitating humans, have continuously made the news. Various media outlets have reported on the growth of AI-related spending by key digital players. The arrival of AI in consumer markets promises considerable benefits: more relevant content, reduced transaction costs, and a greater variety and higher quality of goods and services. However, the risks are also present and merit close attention. In the ARTSY project we have studied the ways in which businesses employ AI across different sectors, and identified the related challenges for consumer law and policy.
The project report is available on SSRN and in the EUI’s publication repository Cadmus. The report is accompanied by an executive-style summary outlining the key takeaways. The ambition of the present post is to revisit and restate some of the lessons we learnt while working on the project.
Staying Nuanced in the Age of Twitter
In a time of ubiquitous information overload (coincidentally, a vital success factor for many AI-based applications), a piece hoping to attract readers’ attention should ideally start with several strong statements about the looming threats posed by businesses’ use of AI. Such statements could read as follows:
- Algorithmic price discrimination will dramatically change the conditions on which everyday goods and services can be accessed by individuals.
- Automated decision-making can lead to consumers being excluded from many essential goods and services without their even knowing about it, or being able to object.
- We will have to fundamentally rethink the premises and tools of anti-discrimination law since the discrimination-based concerns created by AI (data bias) differ radically from those known from the analogue world.
- Predictive tools and personalized commercial messages will allow traders to exert technological influence over consumer choices of a type and degree not yet accounted for.
- Personal digital assistants will deprive consumers of agency and distort markets.
None of these statements is fundamentally untrue or unrealistic. Indeed, the deployment of AI makes it possible for each of these concerns to materialize. As is usually the case, however, the assessment of the issues at hand and the respective policy recommendations will depend on the nature and degree of these developments as observed in specific, real-life contexts. This is where the picture becomes more nuanced.
Aware of this complexity, in the ARTSY project we sought, as a first step, to provide a map – a description and analysis – of market practice and of the scholarship. The task was not an easy one, as AI is without doubt one of the most topical issues of the day. There is a great variety of both factual reports about present-day AI applications and more speculative sources about potential AI uses and the challenges of tomorrow. However, if the purpose of one’s investigation is to enrich a policy debate, one must be able to distinguish these kinds of knowledge inputs from one another.
In ARTSY we decided to focus on the current stage of AI development and deployment, i.e. on applications which businesses are actually developing with practical implementation in mind. To do so, we established an AI Observatory tasked with documenting market practice in seven sectors: finance and insurance, information services, energy and “smart solutions”, retail, autonomous vehicles, healthcare and legal services. Further research is certainly needed on the scale at which the identified AI-based tools are currently deployed (or are likely to be deployed) in particular markets. We recognize this as a matter which forward-looking policy-makers should keep a close eye on.
Some of the cross-cutting uses we identified as part of the project include: contract-related risk assessment (e.g. credit scoring), personalization of contractual conditions, tailored digital advertising as well as automated customer support and advice (chat-bots, personal digital assistants).
At the second stage of the project we sought to extract the challenges that the identified uses can pose for consumers, and began to assess them normatively. As our threshold for evaluation we chose European Union consumer law, as reflected in four principles: protection of the weaker party, regulated autonomy, non-discrimination and privacy. We reflected on the developments which appear to affect these principles to the greatest degree, owing to their ability to impinge upon consumers’ self-determination and decision-making capacities, as well as upon their access to a variety of offers and even to goods and services as such.
In our assessment we did not aim to be exhaustive: we provide only a taste of what the respective analysis could look like. Even if our assessment were complete, it would still represent only a fraction of the overall reflection needed to agree on the normative goals and explore the spectrum of relevant means. Insights from jurisdictions other than the EU and from other fields of law remain crucial. Consequently, rather than assessing the aforementioned developments (e.g. personalized advertising, credit scoring, etc.) against specific legal provisions (e.g. of Directive 2005/29/EC on unfair business-to-consumer commercial practices or the General Data Protection Regulation 2016/679), we conclude with a list of questions which we consider most pertinent for consumers across the globe and which, in our view, should be explored as a matter of priority.
Navigating the policy minefield: an illustration
One of the developments enabled by AI, which emerged from our analysis of market practice and scholarship, is targeted and personalized, yet massive-scale, price discrimination in retail consumer markets. The term is generally used to describe the practice of offering identical products or services to different consumers at different prices, where such differences are not motivated by different cost structures, e.g. different costs of supply. The extent to which more advanced forms of price discrimination are an actual phenomenon is not readily apparent. Its use by one of the major online marketplaces is reported to have met with significant consumer backlash, resulting in a return to the previous pricing policy. The degree of the socio-technological transformation may therefore not, for now, be dramatic. Still, considering the gravity of the practice’s implications from the consumers’ perspective, particularly of so-called first-degree price discrimination, policy-makers would be well advised to have their normative assessment ready. In providing such an assessment, one should consider both the existing market practice and the potential (smaller-scale and structural) transformations, and analyse their respective implications against different normative thresholds (including, but not limited to, law and economics). One could start by inquiring whether first-degree price discrimination is problematic in itself. A further set of questions could follow: Can the analysed practice become less problematic if specific tools are applied (e.g. of consumer empowerment)? Can it become (more) problematic under certain conditions (e.g. when applied to specific individuals or to specific goods or services)? Is there a direction in which we would certainly not like our markets to evolve (e.g. a “digital bazaar” with total personalization and no alternatives)? If we do not wish to draw any such red lines, perhaps the time is right to explore novel responses (e.g. personalized price caps)?
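The mechanics of first-degree price discrimination, and of a possible “personalized price cap” response, can be illustrated with a deliberately simplified sketch. This is not drawn from any actual trader’s system: the profile features, the willingness-to-pay model and the cap parameter below are all hypothetical, chosen only to make the concepts concrete.

```python
# Toy illustration of first-degree price discrimination (all names hypothetical).
# A seller estimates each consumer's willingness to pay (WTP) from profile data
# and quotes an individualized price; an optional "personalized price cap"
# limits the markup over a uniform reference price.

BASE_PRICE = 10.0       # uniform reference price for the product
CAP_MULTIPLIER = 1.2    # hypothetical cap: at most 20% above the base price

def estimate_wtp(profile: dict) -> float:
    """Crude stand-in for an ML model predicting willingness to pay."""
    wtp = BASE_PRICE
    wtp += 5.0 * profile.get("past_purchases", 0) / 10   # loyal buyers quoted more
    wtp += 3.0 if profile.get("premium_device") else 0.0  # device type as a proxy signal
    return wtp

def personalized_price(profile: dict, cap: bool = False) -> float:
    # First-degree discrimination: the quote tracks the predicted WTP.
    price = estimate_wtp(profile)
    if cap:
        price = min(price, BASE_PRICE * CAP_MULTIPLIER)
    return round(price, 2)

alice = {"past_purchases": 20, "premium_device": True}
bob = {"past_purchases": 0, "premium_device": False}

print(personalized_price(alice))            # → 23.0 (high-WTP profile pays more)
print(personalized_price(alice, cap=True))  # → 12.0 (cap limits the markup)
print(personalized_price(bob))              # → 10.0
```

Even this toy version makes two of the questions above tangible: the same proxy signals that raise Alice’s quote could correlate with protected characteristics, and the cap parameter is itself a normative choice that someone would have to set and justify.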
No one-size-fits-all approach
What we have certainly learnt throughout our work on ARTSY is that there can be no one-size-fits-all, top-down solution to the “AI challenge”, because there is no single “AI challenge”. Different technologies, used by different types of actors for different purposes, pose different risks and may require very different responses. Some of the issues unveiled (and exacerbated) by AI have their roots in wider societal phenomena. We therefore see much more value in identifying as many concrete challenges as possible, and in asking specific questions about the severity of those challenges under particular circumstances and about the ways of addressing them. Finally, to create viable ways of achieving the identified goals, a wise interdisciplinary dialogue is needed. It is critical to ensure that debates on AI governance are not dominated by experts in one field, but that a robust exchange of normative and technical expertise takes place.
The article is published under a Creative Commons licence / licensed under CC BY-NC-ND.