
A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

13 January 2020

Autonomous cars will be required to make value judgments, and those judgments must be programmed in advance. What should we tell them to do? To understand the decisions human drivers would make before determining those that cars should make, researchers crowdsourced the question with a game called Moral Machine, in which players are presented with a version of the trolley problem: a driverless car can either stay on its course and hit what is in its path, or swerve and hit something else. Over two years, more than two million people around the world took part in the largest study of moral preferences for machine intelligence ever conducted, logging more than 40 million decisions.

The results identified three strong preferences that could provide a basis for a standardised machine-ethics framework: sparing human lives, sparing more lives, and sparing young lives. The details, however, are more alarming. Players showed little preference between action and inaction, but strong preferences about which kinds of people they hit. If billions of future driverless cars are all programmed to make the same judgement call, crossing the street may become far more dangerous for some people than for others.

Read the full article here.

The New Yorker, 24 January 2019
