Conversing with AI: Conference Report and Mental Property Ascription

On March 6th and 7th the closing conference of the IMPACT project on relationship building, transparency, and communication in everyday human-AI interactions took place in Berlin. The following provides an overview and a critical assessment.

A conference report by Moritz Cordes

Over the last four years, the Volkswagen Foundation sponsored a project on the “Implications of conversing with intelligent machines in everyday life for people’s beliefs about algorithms, their communication behavior and their relationship building” (IMPACT), conducted at Universität Duisburg-Essen, Universität Kassel, Universität Bielefeld, and Evangelische Hochschule Nürnberg. Investigators included social psychologists, legal scholars, ethicists, and computer scientists. The conference consisted of joint talks by project participants, external keynotes, two panel discussions, and a poster session.

XAI/transparency

Several talks centered on the topic of explainability and transparency in AI. André Artelt’s part of the project focused on a model of counterfactual explanations (“Why did you do A?” – “Had X been different, I would not have done A, but B.”). In the discussion, Burkhard Schafer stressed the importance of contrastive explanations, as when a person whose credit application was rejected wonders why somebody else with similar credentials was granted credit (instead of wondering in what respect their own application was found lacking). The problems of algorithmic bias and of data protection (with regard to the contrasting data sets) were also mentioned.
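To make this concrete for readers less familiar with the XAI literature, here is a minimal sketch of what a counterfactual explanation amounts to in code. The toy credit-scoring rule, the thresholds, and the function names are invented for illustration and are not taken from the project’s model.

```python
# Illustrative toy example of a counterfactual explanation for a credit decision.
# The decision rule and all numbers are invented for this sketch.

def approve_credit(income: float, debt: float) -> bool:
    """Toy decision rule: approve if income comfortably exceeds debt obligations."""
    return income - 2 * debt >= 30_000


def counterfactual_income(income: float, debt: float, step: float = 1_000):
    """Search (in fixed steps) for the smallest income that would flip a rejection."""
    if approve_credit(income, debt):
        return None  # nothing to explain: the application was approved
    candidate = income
    while candidate <= income + 500_000:  # bounded search to keep the sketch simple
        if approve_credit(candidate, debt):
            return candidate
        candidate += step
    return None


# "Why was my application (income 40,000, debt 10,000) rejected?"
# -> "Had your income been at least X, it would have been approved."
print(counterfactual_income(40_000, 10_000))  # prints 50000
```

A contrastive explanation, in Schafer’s sense, would instead compare the rejected application with a concrete other application that succeeded, which is where the data-protection issues mentioned above come in.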

Bruno Gransche pointed to situations in which humans are satisfied by rationally insufficient explanations, for instance, when the word ‘because’ is used vacuously: “The credit was rejected because the application was assessed not to provide sufficient grounds for creditworthiness.” Such locutions might encourage users to invest unwarranted trust in AI. In this vein, Aike Horstmann was careful to mark the distinction between perceived and actual trustworthiness; the socio-psychological studies in the IMPACT project focused on perceived trustworthiness. Several participants (and a poster) reiterated the need for institutionalized trust, i.e. the conferral of seals on systems by expert institutions. Christian Geminn drew attention to the well-known lack of legal regulation, citing, for example, the limited provisions in the draft AI Act. Schafer remarked that requirements of transparency can cause responsibility to be shifted away from the artificial agent (or its manufacturer) to the user – after all, the user must have been transparently instructed about all the risks.

Trust and tact

The field studies conducted within the IMPACT project examined, among other things, the dispositions of children toward voice assistants (VAs). Unsurprisingly, children were observed to be quicker to trust VAs (“Alexa, can you keep a secret?” – “I am as silent as a grave.” – …). This was unanimously perceived as a problem, and there was some discussion about whether communication-based appliances should always have to be explicit about not being a person or, if applicable, about storing users’ data less than securely. Schafer underlined the value of non-disclosure in certain situations, e.g. in entertainment contexts where users strive to immerse themselves in a simulation.

In the philosophical keynote, Gransche claimed that the lack of tact in AI systems lies at the heart of their other limitations: the inability to grasp real meaning, to have desires, or to have awareness. He argued that implementing tact in code represents an insurmountable challenge, since the norms of tact in human-human interaction are largely implicit.

Property ascription

The admonitions against the ascription of mental (and moral) properties ran throughout most of the conference, with the notable exceptions of Catholijn Jonker’s and Iyad Rahwan’s talks. For instance, the following are either rough or verbatim quotes from presenters:

  • artificial systems do not warrant polite or moral treatment,
  • artificial systems are not alive, have no consciousness, and have no emotions,
  • machines are not bearers of human dignity,
  • chatbots, per se, are no moral actors,
  • it is clear that [artificial systems] are not living,
  • machines do not think in concepts.

Maybe presenters intended these remarks to serve as disclaimers, but regardless, such claims carry a substantial burden of proof. This burden was not discharged by those who advanced these positions. Indeed, the data and arguments presented were usually preceded by these claims, as if they were uncontested premises. Several discussions during the breaks suggest that there are those who oppose this illusory consensus. One might be forgiven for wishing that this dissent had moved from conversations between sessions to the plenum.

Rahwan gave a keynote on the emerging field of machine behavior but, when asked, refrained from taking a position on whether artificial systems can have consciousness. This display of thoughtfulness suggests that, instead of attributing consciousness (or the lack thereof) to artificial systems, there is the option of abstaining from judgment. Such a position leaves room for further consideration of the concepts involved and can be taken to acknowledge the ambiguities associated with them. It is important to note that there is not just one theory of consciousness but several, and some of them are rather unassuming or deflationary, making it not especially difficult to show that certain artificial systems can have consciousness. Objecting to deflationary theories of consciousness is not the same as offering a viable alternative. The same goes for meaning, moral agency, dignity, thinking, emotions, etc. To refrain from judgment would, incidentally, accord with tradition: Turing, in his famous 1950 paper, did not answer the opening question of whether machines could think, but instead offered a test and a thesis about how people will speak about machines in the future. (Sidebar: The illusory consensus extends beyond the conference. An earlier entry at the RAILS blog includes the following claim, unsupported therein: “[T]oday’s AI systems cannot be attributed mental states in a factual sense”.)

The illusory consensus is frequently presented at the beginning of a talk and in a bundle. The wave of denials of mental and moral properties inevitably drives presenter and audience in a general direction – not by rational argumentation but by mantra. Often these assurances are bundled together with claims that are irrelevant to the issues at hand. For instance, at one point during the conference artificial systems were deemed not alive, although this had nothing to do with the topic of the talk. At another point it was said: “First of all, because the machine doesn’t have an aim.” This is a patent falsehood if one looks at the role of aims and goals in AI programming (see the sketch below).
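To make the point about aims and goals concrete: explicitly coded objectives are a routine ingredient of AI systems, from loss functions to goal states in search. The following hypothetical sketch shows a search agent whose every step is chosen relative to an explicitly represented goal; whether this amounts to the machine ‘having an aim’ in the contested sense is precisely the kind of question the bundled denials leave unexamined.

```python
# Minimal, invented sketch: an explicitly represented goal drives the agent's behaviour.
from heapq import heappush, heappop

def manhattan(a, b):
    """Heuristic: estimated distance to the goal, i.e. an explicitly coded aim."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_path(start, goal, obstacles=frozenset(), size=10):
    """Greedy best-first search on a small grid: each step is chosen for closeness to the goal."""
    frontier = [(manhattan(start, goal), start, [start])]
    visited = {start}
    while frontier:
        _, pos, path = heappop(frontier)
        if pos == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in obstacles and nxt not in visited:
                visited.add(nxt)
                heappush(frontier, (manhattan(nxt, goal), nxt, path + [nxt]))
    return None  # the goal is unreachable

print(greedy_path(start=(0, 0), goal=(3, 2)))
```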

The negative effects of the illusory consensus, and of how it is presented, are twofold: Firstly, it dilutes scientific methodology. In scientific presentations one should not reserve a slot (e.g. the first three minutes) for making unclear assertions whose warrant is extremely difficult to determine and whose relevance is questionable. Secondly, it leads to blind spots. If one does not investigate the claim that, say, artificial systems have no understanding, then the foundation of this perceived lack, even if it is real, remains opaque, and a change in that foundation is less likely to be registered. (It is, indeed, an interesting question whether something like this has already taken place.)

Legal and social future

The emphasis placed by the IMPACT project on how children interact with artificial systems is indirectly connected to these problems. For at least two reasons, children should be polite to, say, VAs: (i) Most people want their children raised so that they are not rude to other humans, and it is hard to see how this cause is helped by allowing them to be rude to a VA, even if VAs will never be moral subjects. (ii) More importantly, in our ignorance about the future, we should take into consideration a possible future scenario in which VAs or similar systems are generally taken to deserve politeness. Do we want our children to step into such a future with the baggage of an internalized illusory consensus (and without reasons to support it)?

There may be a legal dimension to such possible future tensions. For example, Schafer said during the conference: “I have not seen one case where a problem is solved by granting legal personhood to an AI system.” The author of this report would have a hard time contesting this claim. However, if it rests on whether such systems have understanding, moral status, consciousness, or any of the properties mentioned above, then, perhaps, some problems have already been overlooked.

Hybrid human-machine systems

Several contributions to the conference had to do with symmetrical or mutual relations between users and artificial systems, which might provide a perspective that is agnostic about the issues just discussed. Jonker’s talk was the one most directly concerned with this avenue of research. She advocated applying the term ‘system’ to the conjunction of user and machine (and not just to the machine) and assessing the system’s collective performance. Does the ability to reflect on oneself make more sense when asked about a hybrid system than when asked only about its artificial component?

The author acknowledges the contributions of Jared Millson to this blogpost.

Published under licence CC BY-NC-ND.

Author

  • Moritz Cordes

    Moritz Cordes is a research fellow at the Center for Advanced Internet Studies in Bochum. Before that, he taught and conducted research at the University of Greifswald, where he also earned his PhD in 2015, as well as at the Frankfurt School of Finance and Management and the University of Regensburg. A classically trained philosopher, he wrote his dissertation on the concept of pseudo-problems. He works in the fields of philosophy of language, philosophy of mind, epistemology, and logic, and has specialized in concept formation, erotetics, epistemic acts, and AI.
