What’s appropriate?! Rethinking robots’ behavior in public spaces

As robots enter public spaces to clean parks, deliver food, or guide us to the next train station, they face not only technical challenges but also complex social and ethical dilemmas. How should they interact with people? Which social norms should they follow? And, more importantly, should they conform to existing social norms at all – even if those norms are oppressive or discriminatory?

Public spaces present an exciting new application for robots. No longer confined to factories or hidden behind barriers, autonomous machines are now taking on the ultimate challenge of human-robot interaction: the public space.

Imagine a typical scene in a Berlin park: You are a pedestrian, strolling along the path, lost in thought. Joggers pass by, followed by cyclists navigating their way with large front-mounted carriers. Nearby, a group of dog owners stands chatting while their dogs bark energetically. On one bench, two elderly women chat about their daily lives, while on another, a group of teenagers drinks, smokes, and listens to loud music. In this park, you encounter people of different ages, genders, races, social classes, and abilities, all coexisting in the same public space.

Now, imagine robots are also part of this scene!

Robots in public spaces generally serve three main functions: maintenance, guidance, and delivery. Maintenance robots keep public areas clean and safe by picking up litter, mowing lawns, or patrolling buildings. Guidance robots help people navigate large spaces such as airports, shopping malls, or museums. Delivery robots transport goods and must traverse public spaces to reach their destinations.

In some cases, robots simply coexist with people – delivery robots, for example, do not engage with humans unless they need to maneuver through a crowd. In other cases, such as guidance robots, direct human interaction is necessary. Regardless of their function, however, all of these robots share one common challenge: navigating not only the physical environment of public spaces but also the social landscape. To be accepted in public spaces, robots must adhere to social norms and behave in a manner that is considered appropriate. This could be as simple as not honking at an elderly woman walking at a slow pace, or as complex as addressing people in a socially acceptable way in order to offer a service.

This raises important questions:

What does appropriate behavior mean? What social norms should robots follow?

One place to seek answers is the law. After all, if we are discussing norms, there should be regulations and standards that provide guidance. However, there is a significant lack of legal frameworks governing robots in public spaces. Since this is a relatively new application, very few standards restrict robotic behavior in public. Worse still, the regulations that do exist are ill-suited to social robots in public spaces. For example, autonomous mobile robots in traffic are only allowed to use specific acoustic devices, and sound signals are restricted to warning purposes. Voice output is therefore generally prohibited and only possible with a special permit – meaning a robot on the street is legally forbidden to say “hello” or “goodbye”.

This example highlights how current regulations fail to address the social challenges of human-robot interaction. However, even if such regulations existed, they would not fully cover the complexities of social interaction. While parameters such as speed, lighting, and noise are important, social behavior is far more nuanced and cannot be entirely captured by standards and regulations.

First, we need to acknowledge the complexity of social norms. Although we instinctively recognize when someone fails to follow them, it is difficult to articulate these norms precisely. Social interactions are like a dance – our behavior changes based on our partners, the setting, and the atmosphere. The elderly women in the park behave very differently from the group of teenagers, yet both groups share the same public space and could interact with the same robot.

How should a robot be designed to navigate these differences?

Before answering this question, let’s consider two flawed approaches:

  1. A universal approach: The robot adheres to a predetermined set of social norms chosen by its designers – speaking at a fixed volume, addressing everyone formally as “sir or madam,” and always speaking German, for example. Even with participatory design methods, this approach would still reflect the norms of a specific group while excluding others.
  2. A tailored approach: The robot adjusts its behavior based on its interaction partner – using a different tone for elderly women than for teenagers, for instance. But this approach risks reinforcing stereotypes. Imagine a robot that automatically addresses elderly women in German but assumes a person of color speaks English. Even if well-intended, such behavior could be racist. (The sketch after this list spells out why.)
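To see why the tailored approach fails, it helps to write it down as code. The following sketch is hypothetical and deliberately naive – the class, attributes, and thresholds are all invented for illustration, not taken from any real system. The point is structural: as soon as behavior branches on perceived demographic attributes, every branch hard-codes a stereotype.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "tailored" interaction policy.
# All names and attributes are invented for illustration.

@dataclass
class PerceivedPerson:
    estimated_age: int        # inferred from camera data, error-prone
    perceived_ethnicity: str  # inferred, unreliable, and ethically fraught

def choose_greeting(person: PerceivedPerson) -> str:
    # Stereotype 1: older people are assumed to be German speakers.
    if person.estimated_age > 65:
        return "Guten Tag!"
    # Stereotype 2: perceived ethnicity is treated as a proxy for
    # language, e.g. assuming a person of color speaks English.
    if person.perceived_ethnicity != "white":
        return "Hello!"
    return "Hallo!"
```

Written out this way, the problem is visible at a glance: the robot conditions its behavior on sensitive attributes it inferred without consent, and each rule generalizes over an entire group.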

Moreover, just because a social norm is widely accepted does not mean it is just. On the contrary, if we acknowledge that we live in a world where minorities are systematically oppressed, we must also recognize that many widely accepted social norms are, in fact, oppressive: societal norms often reinforce systemic discrimination. Should robots blindly follow such norms? Ethically speaking, the answer is no. Instead of merely reflecting existing social norms – some of which may be oppressive – robots should be designed to embody just and equitable behaviors.

This was the argument I presented at the WeRobot 2024 conference in Berlin. While I outlined the problem and emphasized the need for socially aware robots, my conclusion was unsatisfying because it left open the crucial question:

What constitutes a “just” norm?

At this point, I would like to go a little further and present two promising approaches for integrating justice into human-robot interaction. I should add, however, that this search for answers is an ongoing process to which there can be no final answer.

  1. The Value-Laden Approach by Brey and Dainow: Originally developed for artificial intelligence, this “Ethics by Design” framework integrates ethical values into the development process. Brey and Dainow identify six key values – respect for human agency, privacy, fairness, well-being, transparency, and accountability – while acknowledging that this list may be adapted. To ensure these principles are embedded in AI systems, they devised specific methods for integrating them throughout the design process.
    Although not originally designed for robots, this approach can be adapted to ensure that human-robot interaction is guided by principles of justice rather than arbitrary social norms. Instead of designing robots to follow predefined behaviors, roboticists should develop robots that actively uphold these values. For example: robots should acknowledge and accommodate the diversity of human beings and allow for autonomous decision-making; they should not collect or analyze personal data in order to modify their behavior toward individuals; they should treat every individual equally, without bias or discrimination; instead of applying a generalized, standardized approach, they should assist individuals in pursuing their unique needs; they should be transparent about their behavior; and their designers should be accountable for it. (A sketch of what this could look like in code follows this list.)
  2. Robots for Social Justice by Zhu et al.: This approach applies “Engineering for Social Justice” principles to robot design. It builds on philosopher Martha Nussbaum’s capabilities approach, which focuses on what people are actually able to do and to be. Instead of simply conforming to existing values, this approach considers how robots can positively impact individuals – helping them become healthier, more playful, or more affiliated with others, to name just three of the ten capabilities Nussbaum outlines. A particularly compelling extension of this approach comes from Zhu’s co-author, Tom Williams, who has explored how roboticists wield power in shaping human-robot interactions. His insights further underline the need for ethical considerations in robotic behavior.
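As a contrast to the tailored policy sketched earlier, here is a minimal, hypothetical illustration of how the value-laden approach could change the same interaction. Again, every name in the sketch is invented: the idea is simply that the robot offers an open choice instead of inferring anything about the person, which respects agency (the person decides), privacy (no perception data is collected), and fairness (everyone is offered the same options).

```python
# Hypothetical sketch: the same greeting interaction, redesigned
# around the Ethics-by-Design values. Names are invented for
# illustration; a real robot would use a touchscreen or speech UI.

SUPPORTED_LANGUAGES = {"1": "Deutsch", "2": "English", "3": "Türkçe"}

def choose_language_by_asking() -> str:
    # Agency and fairness: everyone gets the same open choice,
    # and nothing about the person is inferred or assumed.
    # Privacy: no data about the person is collected or stored.
    print("Sprache wählen / choose a language / dil seçin:")
    for key, name in SUPPORTED_LANGUAGES.items():
        print(f"  {key}: {name}")
    selection = input("> ").strip()
    # Transparency: unknown input falls back to a documented
    # default instead of a hidden guess about the person.
    return SUPPORTED_LANGUAGES.get(selection, "Deutsch")
```

The design choice worth noting is what the code does not contain: there is no perception pipeline and no demographic branching, and therefore no place for a stereotype to hide.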

However, these design approaches remain largely conceptual, and implementing them in real-world robotics is challenging. Still, I encourage engineers to design robots that defy established social norms – queer robots, counter-stereotypical robots, surprising robots. As roboticist Rodney Brooks once urged, we need to create “brave, creative, and happy HRI”.

Published under licence CC BY-NC-ND. 

  • Lena Fiedler is a research assistant at the Berlin Ethics Lab at the Technical University Berlin and part of the research project rokit. She investigates the ethical implications of robots in public spaces and is pursuing her doctorate in philosophy on the gender-stereotypical design of robots.
