The Robot’s Moral Compass? Exploring Citizen Perspectives on Moral Decisions by Robots in Public Spaces
Description
How do people judge moral dilemmas involving robots? And how should robots make moral decisions in everyday life? This thesis explores these questions by inviting citizens to evaluate and reflect on moral dilemmas in which a robot must choose between two or more possible actions. Through direct, face-to-face interactions in public urban spaces, a humanoid robot (Pepper) will present short moral scenarios via an interactive website displayed on its tablet, asking passers-by, “What should I do?”. These spontaneous encounters will encourage citizens to select one of several predefined options or to suggest their own decision for the robot, thereby expressing how they believe the robot should act in such situations and explaining the reasoning behind their choices. In addition, citizens are invited to reflect on the possible consequences of the robot’s decision, offering insights into how moral intuitions and social expectations emerge in everyday contexts. Rather than aiming to identify ‘correct’ answers, the study focuses on how people reason about delegation, responsibility, and the boundaries of machine agency. The format is designed as a low-threshold, visible, and engaging public intervention that makes key questions of robot ethics tangible: robots as social actors, moral delegation, and shifting boundaries of agency.
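To make the intended interaction format concrete, the following minimal sketch shows how a dilemma page on the robot's tablet might be served, with predefined options, a free-text suggestion, and a reasoning field. It is an illustrative assumption only: the route names, the example dilemma, and the plain HTML form are placeholders, not a specification of the thesis system.

```python
# Minimal sketch of a robot-hosted dilemma page (illustrative assumption,
# not the thesis specification). Requires Flask: pip install flask
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Hypothetical example dilemma with predefined options.
DILEMMAS = {
    "lift": {
        "question": "I am delivering medicine and the lift is full. What should I do?",
        "options": ["Wait for the next lift", "Ask someone to step out"],
    }
}

PAGE = """
<h1>{{ d.question }}</h1>
<form method="post" action="/respond/{{ dilemma_id }}">
  {% for o in d.options %}
    <button name="choice" value="{{ o }}">{{ o }}</button>
  {% endfor %}
  <p>Or suggest your own decision, and explain your reasoning:</p>
  <textarea name="free_text" placeholder="What should I do instead?"></textarea>
  <textarea name="reasoning" placeholder="Why should I act this way?"></textarea>
  <button type="submit">Send</button>
</form>
"""

@app.route("/dilemma/<dilemma_id>")
def show_dilemma(dilemma_id):
    # Render one scenario on the tablet's browser.
    return render_template_string(PAGE, d=DILEMMAS[dilemma_id], dilemma_id=dilemma_id)

@app.route("/respond/<dilemma_id>", methods=["POST"])
def record_response(dilemma_id):
    # Collect the participant's choice, free-text suggestion, and reasoning
    # for later qualitative and quantitative analysis.
    response = {
        "dilemma": dilemma_id,
        "choice": request.form.get("choice"),
        "free_text": request.form.get("free_text"),
        "reasoning": request.form.get("reasoning"),
    }
    print(response)  # placeholder for persisting responses to a file or database
    return "Thank you for helping me decide!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```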
Current approaches to ethical decision-making in robotics are largely shaped by expert communities such as ethicists, engineers, and system designers, who aim to encode normative principles into algorithmic frameworks [1]–[3]. While technically rigorous, these approaches often follow a top-down structure and tend to exclude citizens from participating in how robots make moral decisions. Large-scale studies such as the Moral Machine Experiment [4] presented abstract dilemmas to millions of online users but offered limited opportunities for contextual reflection or dialogue. By contrast, this thesis brings the question of robotic morality into public space, transforming abstract ethical scenarios into embodied encounters and shared moments of reflection. In doing so, the thesis builds on work arguing that moral agency in robotics should be seen not as a fixed capability of machines, but as a socially negotiated domain shaped by human expectations, norms, and interactions [2], [5], [6].
This thesis thus creates opportunities for situated ethical reflection and contributes to responsible robotics by connecting technical decision-making with the everyday moral imagination of citizens.
Objectives
- Design and implement a robot-hosted website presenting moral dilemmas
- Conduct in-person fieldwork in urban settings, gathering citizen responses
- Analyze how moral reasoning unfolds in situated interactions with the robot
- Identify patterns in public expectations and judgments about robot behavior
- Contribute to responsible robotics by foregrounding diverse, real-world moral perspectives
Requirements
- Interest in human-robot interaction, AI ethics, and participatory research
- Basic programming skills (e.g., Python, simple web development) or willingness to learn
- Familiarity with qualitative and quantitative research methods (e.g., surveys, content analysis)
- Ability and motivation to conduct fieldwork in public urban spaces
- Optional: experience with UX design, interaction design, or interactive systems
Thesis Type
M.Sc., M.A., or M.Ed.
Starting date
As soon as possible. Contact the supervisors if you are interested.
Supervisors
Nora Weinberger (nora.weinberger∂kit.edu), Institute for Technology Assessment and Systems Analysis (ITAS)
Barbara Bruno (barbara.bruno∂kit.edu)
References
[1] J. Rhim, J.-H. Lee, M. Chen, and A. Lim, “A deeper look at autonomous vehicle ethics: an integrative ethical decision-making framework to explain moral pluralism,” Frontiers in Robotics and AI, vol. 8, May 2021, doi: 10.3389/frobt.2021.632394.
[2] M. Scheutz, “The need for moral competency in autonomous agent architectures,” in Fundamental Issues of Artificial Intelligence, V. C. Müller, Ed., Cham: Springer International Publishing, 2016, pp. 517–527. doi: 10.1007/978-3-319-26485-1_30.
[3] J. van der Waa et al., “Moral decision making in human-agent teams: human control and the role of explanations,” Frontiers in Robotics and AI, vol. 8, May 2021, doi: 10.3389/frobt.2021.640647.
[4] E. Awad et al., “The Moral Machine experiment,” Nature, vol. 563, no. 7729, pp. 59–64, Nov. 2018, doi: 10.1038/s41586-018-0637-6.
[5] F. Alaieri and A. Vellino, “Ethical decision making in robots: autonomy, trust and responsibility,” in Social Robotics, ICSR 2016, A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He, Eds., Cham: Springer International Publishing, 2016, pp. 159–168. doi: 10.1007/978-3-319-47437-3_16.
[6] P. Reiter, U. Norman, N. Weinberger, and B. Bruno, “Artificial moral agents: should machines take ethical responsibility?,” in 2025 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), Osaka, Japan, July 2025, pp. 218–224. doi: 10.1109/ARSO64737.2025.11124921.