An alternative to framing moral dilemmas of robot vehicles through the trolley problem

In a previous post, I argued that the “trolley problem” is a misleading conceptual framework being pushed by the automated vehicle industry to shape policy debates in ways that prioritize its interests over public safety. As that post explained, the trolley-problem scenario is extremely unrealistic and fails to capture the everyday driving situations that pose the greatest risks.

A new paper published in AI & Society proposes an alternative model, the Agent-Deed-Consequences (ADC) framework, that aims to provide a more nuanced and realistic approach to studying how people make moral judgments in traffic situations. The model suggests that moral judgments are based on evaluating three factors: the character of the agent (the driver), the deed (their actions and compliance with traffic rules), and the consequences of those actions.

The authors propose testing this model through virtual reality experiments depicting mundane, low-risk traffic scenarios rather than unrealistic sacrificial dilemmas. Character evaluations would be based on discernible cues, such as driving style, to avoid biases. They hypothesize that a positive assessment of each factor increases perceived morality, with character and deed weighted more heavily than consequences in everyday situations.
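To make the hypothesized structure concrete, here is a minimal sketch in Python of how such a weighted three-factor evaluation could be scored. The weights, the rating scale, and the function name are my illustrative assumptions, not values or code from the paper.

```python
# Illustrative sketch of the ADC model's hypothesized structure.
# The weights and the [-1, 1] rating scale are assumptions for
# demonstration, not parameters from the paper.

def adc_moral_judgment(agent: float, deed: float, consequences: float) -> float:
    """Combine ratings of agent character, deed (rule compliance),
    and consequences into a single moral-acceptability score.

    Each input is a rating in [-1, 1], where positive means a
    favorable assessment. The weights reflect the paper's hypothesis
    that character and deed matter more than consequences in mundane,
    low-stakes traffic situations (the exact values are hypothetical).
    """
    w_agent, w_deed, w_consequences = 0.4, 0.4, 0.2  # hypothetical weights
    return w_agent * agent + w_deed * deed + w_consequences * consequences

# Example: a courteous driver (positive character cue) who bends a
# traffic rule slightly but causes no harm.
print(adc_moral_judgment(agent=0.8, deed=-0.2, consequences=0.5))  # ≈ 0.34
```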

By incorporating character considerations and focusing on common traffic interactions, this framework addresses some of the limitations of prior “trolley problem” approaches. It also has the potential to provide more ecologically valid insights into how autonomous vehicles should make decisions on the road. However, the study still relies on participants as observers rather than actors, and the consequences presented remain certain outcomes rather than the uncertain risks that characterize real traffic.

From a Vision Zero perspective, what does this revised conceptual framework for studying driving decisions imply about how autonomous vehicles should be programmed to behave? I would argue it points to the importance of prudence. Prudence means expecting that other actors may do unexpected things and therefore leaving them a margin of error, so that catastrophic consequences do not follow when an error occurs.

Autonomous vehicles should be programmed with a prudent mindset that treats all road users, whether compliant or not, as potential sources of risk. This involves limiting speeds, prioritizing defensive avoidance over rule compliance alone, and reducing the “freedom” of robots to move unimpeded through busy urban spaces with innocent citizens present. Only through such caution can we ensure these technologies do not undermine Vision Zero’s goal of preventing all traffic deaths.
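As a rough illustration of what prudence as a margin of error could mean in practice, the sketch below caps speed based on how close and how unpredictable nearby road users are. The thresholds, the simple linear rule, and all names are my assumptions for demonstration; they do not represent any real vehicle’s planner logic.

```python
# Hypothetical sketch of a prudence-first speed policy: the vehicle
# assumes any nearby road user, compliant or not, may act unexpectedly
# and slows enough to leave a margin of error. All thresholds are
# illustrative assumptions, not values from any real AV stack.

from dataclasses import dataclass

@dataclass
class RoadUser:
    distance_m: float      # current distance to the vehicle, in metres
    predictability: float  # 0.0 (erratic) to 1.0 (fully predictable)

def prudent_speed_kmh(road_users: list[RoadUser], legal_limit_kmh: float) -> float:
    """Return a speed at or below the legal limit, with extra margin
    for close or unpredictable road users."""
    speed = legal_limit_kmh
    for user in road_users:
        # The closer and less predictable the road user, the larger
        # the speed margin subtracted from the legal limit.
        margin = (1.0 - user.predictability) * max(0.0, 30.0 - user.distance_m)
        speed = min(speed, legal_limit_kmh - margin)
    # Never drop below a crawl, and never exceed the legal limit.
    return max(10.0, min(speed, legal_limit_kmh))

# Example: a pedestrian 8 m away who might step off the kerb.
print(prudent_speed_kmh([RoadUser(distance_m=8.0, predictability=0.3)], 50.0))
# -> 34.6 km/h instead of the posted 50 km/h
```

The design choice to state plainly: the policy never asks whether the other party is following the rules, only how much margin their behavior requires, which is the sense of prudence argued for above.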

In conclusion, while this alternative framework is an improvement, developers and policymakers must not lose sight of Vision Zero’s primary principle – that no loss of life is an acceptable trade-off for autonomous technology. Prioritizing prudence over convenience or profit is key.

Image by Dall-e 3; Text edited with help from Claude Instant
