What Moral Code Should Your Self-driving Car Follow?

The arrival of the self-driving car presents a challenging new dilemma: Whom should the vehicle save – and whom should it harm – when an accident is unavoidable?

By Pragati Verma, Contributor

In 2016, MIT Media Lab researchers developed a platform to ask people worldwide philosophical questions about whom a driverless car should kill in the case of an unavoidable accident. Should the car swerve to save five pedestrians crossing the road, even if it means sacrificing three passengers? Should it endanger animals to save humans? How about an elderly person versus a child?

Called Moral Machine, the data-gathering tool has caught public attention, generating conversation around autonomous driving. According to Jean-François Bonnefon, research director at the French Centre National de la Recherche Scientifique and co-creator of Moral Machine, the website went viral, gathering people’s decisions on 14 million scenarios by January 2018.

The platform presents accident scenarios and lets people pick their preferred outcome, then compares viewpoints and lets people debate the issues online. Bonnefon said it’s the “largest global AI ethics study ever conducted.”
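To make that mechanics concrete, here is a minimal, hypothetical sketch of how responses to pairwise dilemmas could be tallied into aggregate preference rates. The data fields, group labels, and numbers are illustrative assumptions, not Moral Machine’s actual data model or published code.

```python
# Hypothetical sketch: aggregating pairwise dilemma responses into a
# preference rate. Field names and groups are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Response:
    country: str       # respondent's country
    spared: str        # the group the respondent chose to spare
    sacrificed: str    # the group that was harmed by that choice

def preference_rate(responses, group_a, group_b):
    """Fraction of A-vs-B dilemmas in which group_a was spared."""
    relevant = [r for r in responses
                if {r.spared, r.sacrificed} == {group_a, group_b}]
    if not relevant:
        return None
    return sum(r.spared == group_a for r in relevant) / len(relevant)

# Toy example: three responses to a "children vs. adults" dilemma.
sample = [
    Response("US", spared="children", sacrificed="adults"),
    Response("FR", spared="children", sacrificed="adults"),
    Response("JP", spared="adults", sacrificed="children"),
]
print(preference_rate(sample, "children", "adults"))  # 0.67 (2 of 3 spared children)
```

Grouping the same statistic by the respondent’s country is what would surface the regional differences discussed below.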

And while detailed results are expected to be published later this year, preliminary, unpublished research based on the responses of people from nearly 200 countries shows a few clear preferences. For one, people around the world overwhelmingly favored protecting children over adults. However, the answers were not always so cut and dried. According to Bonnefon, “opinions get more divided as we dive into more complex scenarios,” such as when people had to choose between minimizing the loss of life and saving the car’s occupants.

The team at the MIT Media Lab also noticed regional trends when it came to developing a moral algorithm. Respondents from Western countries placed relatively higher emphasis on minimizing the overall number of casualties than respondents from Eastern countries, who seemed to prefer saving passengers regardless of the total number of lives lost.

“Our algorithm was not aware of the geography of countries, but we [the researchers] saw broad differences between the preferences of people in the Eastern, Western, and Southern part of the world,” he explained.

It’s these discrepancies that make it tough to code ethics into a single global algorithm.

Facing the Uncomfortable Realities of Ethics

Some researchers argue that the variability and complexity of these moral decisions is exactly why majority opinion alone shouldn’t determine a unified ethical standard. A team of researchers from the University of Bologna in Italy, for instance, proposed outfitting self-driving cars with an “ethical knob” that lets riders control how selfishly the vehicle will behave during an accident.

In the U.S., Nicholas Evans, philosophy professor at the University of Massachusetts, says the first question people should ask themselves is: How do we value, and how should we value, lives? Studying ways to make driverless vehicles capable of making ethical decisions, he said, forces us to confront uncomfortable realities about subjective versus objective ethics. In other words, what happens when you are the person in the car? “You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counter-intuitive happens. When there’s a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road,” Evans explains on the University of Massachusetts website. “People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose and be programmed to do so.”
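To illustrate both ideas, here is a minimal, hypothetical sketch of an “ethical knob” applied to an expected-harm calculation. The maneuver names, harm scores, and the 0-to-1 knob scale are assumptions made for illustration, not the Bologna team’s actual proposal or any deployed system.

```python
# Hypothetical sketch of an "ethical knob": a single parameter that weights
# harm to the car's occupants against harm to other road users. All names
# and numbers are illustrative assumptions, not a published specification.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # expected harm to people inside the car
    external_harm: float   # expected harm to pedestrians / other road users

def choose_maneuver(options, knob):
    """knob = 0.0 weighs only harm to others; knob = 1.0 weighs only occupant harm."""
    def cost(m):
        return knob * m.occupant_harm + (1 - knob) * m.external_harm
    return min(options, key=cost)

# Toy scenario: swerving spares the pedestrians but endangers the lone occupant.
options = [
    Maneuver("stay_course", occupant_harm=0.1, external_harm=0.9),
    Maneuver("swerve_off_road", occupant_harm=0.8, external_harm=0.0),
]
print(choose_maneuver(options, knob=0.2).name)  # altruistic setting  -> swerve_off_road
print(choose_maneuver(options, knob=0.8).name)  # self-protective     -> stay_course
```

At a low, altruistic setting the car swerves to protect the pedestrians at the occupant’s expense; at a high, self-protective setting it stays the course. That is exactly the counter-intuitive trade-off Evans describes.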

Bonnefon conceded that people don’t always do what they say in surveys. His previous research found that although most people approve of self-driving cars being programmed to sacrifice their occupants to save others, they don’t want to ride in such cars themselves. That, he said, is why the team “asked people what a car should do” rather than what they themselves would do.

“You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counter-intuitive happens. When there’s a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road,”

– Nicholas Evans, philosophy professor at the University of Massachusetts

Making Room for an Ethics Debate

Crowdsourcing platforms like Moral Machine might not be perfect, but they are more relevant than ever. As self-driving cars hit the road and start getting into accidents, the ethical exercises take on a dose of reality. Uber’s self-driving program, for instance, has been under close scrutiny since an autonomous Uber struck and killed a 49-year-old pedestrian.

Yet most of the debates following such accidents seem to turn into a blame game. Whenever we hear of an accident caused by a machine, Bonnefon said, people tend to look for a human to blame, because we are better equipped to detect mistakes in humans. While this may be logical, he acknowledged, it “hurts our trust in self-driving cars too.”

As it turns out, who is responsible—the driver, the car, or the pedestrian—is not the only question facing the future of autonomous cars. It is also not easy to determine the level of safety that self-driving cars need to demonstrate before they should be allowed on the road.

Research by McKinsey suggests that autonomous vehicles could reduce the number of overall accidents, and therefore fatalities on our roads, by 90 percent. “It would be significant progress if self-driving cars could eliminate [even] 10 percent of the accidents,” Bonnefon said.

According to the National Safety Council, there were 40,000 automotive fatalities in the U.S. in 2017—the second consecutive year with record numbers of automotive deaths. While any technology that helps reduce that number sounds good in theory, this safety comes with a huge trade-off.

“We will have to wait for a very long time if we want to reach that level,” Bonnefon pointed out. “It would mean getting used to accidents by self-driving cars every week.”

As people grapple with these life-and-death decisions, one thing is clear: Researchers are not likely to provide clear-cut moral or ethical guidelines—and that’s by design. “We are not trying to find out what is ethical and what is not,” said Bonnefon. “We are providing a tool for people to appreciate the complexity of algorithmic decision-making, and for governments and regulatory agencies to understand what people in their country expect from machines.”

As far as regulation goes, well, that’s a regulator’s job.

The Regulation Game

Despite the intrigue around public opinion when it comes to self-driving cars, policy decisions will not be made on the basis of public opinion alone. Take the ethical guidelines for self-driving cars issued by Germany: despite an overall public preference for saving children over adults, the German government mandated that algorithms must not discriminate based on age, gender, race, disability, or any other discernible factor.
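One way to read that mandate in engineering terms is that the decision logic simply never receives protected attributes as inputs. The sketch below is a hypothetical illustration of that constraint; the structure and numbers are assumptions, not Germany’s actual guidelines or any vendor’s code.

```python
# Hypothetical sketch: a decision function that only sees anonymous, aggregate
# harm estimates, so it cannot discriminate by age, gender, disability, or any
# other personal attribute. Names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    maneuver: str
    expected_casualties: float  # aggregate estimate only; no personal attributes

def least_harm(outcomes):
    """Pick the maneuver with the fewest expected casualties, regardless of who they are."""
    return min(outcomes, key=lambda o: o.expected_casualties)

print(least_harm([Outcome("brake", 0.4), Outcome("swerve_left", 1.2)]).maneuver)  # brake
```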

“It’s a slippery slope once you begin discriminating between human lives,” Bonnefon said. “Many people think it’s okay to save children over adults, but where do you stop this? What about men versus women or rich versus poor? Even if regulators don’t go by what the majority wants, it is important to foster a debate; weigh in on public opinion before making a decision and explaining the logic behind it, when you go against those expectations.”

According to Bonnefon, the current transportation and traffic system works because we are used to it and have come to trust it. On the other hand, self-driving car algorithms are new—and are trying to do what has never been done before.

While not perfect, the dialogue Moral Machine is creating around self-driving ethics is a strong starting point for stakeholders to better understand the dilemmas at hand. For developers and the governments that regulate them to build trust in the new autonomous world, Bonnefon explained, they “need to understand what [the public] expects and what they are likely to find offensive in algorithmic morality.”

“It’s a slippery slope once you begin discriminating between human lives. Many people think it’s okay to save children over adults, but where do you stop this? What about men versus women or rich versus poor? Even if regulators don’t go by what the majority wants, it is important to foster a debate; weigh in on public opinion before making a decision and explaining the logic behind it, when you go against those expectations.”

– Jean-François Bonnefon, research director at the French Centre National de la Recherche Scientifique and co-creator of Moral Machine