How to build an ethical self-driving car

What happens when lives are at stake and a split-second decision is necessary?

By Mark Buchanan

February 11, 2020 at 4:40PM
This undated product image provided by Volvo Cars shows the Volvo XC90 SUV. Uber is teaming with Volvo Cars to launch its newest self-driving vehicle. The ride-hailing company said Wednesday, June 12, 2019, that it can easily install its self-driving system in the Volvo XC90 SUV. (Volvo Cars via AP)

Not too long ago, tech enthusiasts were telling us that by 2020, we'd see self-driving cars hit the mainstream, with some 10 million on the roads. That turned out to be a wild overestimation. The actual number of vehicles in testing is thousands of times smaller, and they're still driving mostly in controlled conditions. Companies have also scaled back their ambitions, aiming more for driver support than full autonomy, just as sober-minded transport experts told us to expect.

But slower development is probably just as well. It should help improve vehicle safety and give engineers time to prepare for other threats, such as hackers turning cars into destructive weapons. A slower rollout also gives us a chance to form some social consensus on the built-in ethics of autonomous vehicles, which will inevitably face decisions with moral implications — being forced to choose, for example, between killing the car's passengers by hitting a tree and veering into a nearby group of pedestrians.

Programmers will have to prepare cars in advance to make such decisions, and they will need some justifiable basis for doing so. This need is creating a somewhat bizarre research alliance, as professional ethicists work alongside experts in artificial intelligence. We have a lot to learn — and many mistakes to make — before we find acceptable solutions.

So far, only one national government has laid out concrete guidelines for how autonomous vehicles should make such decisions. That nation is Germany, whose official guidelines take a strongly egalitarian view: "In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable."

This position tries to steer clear of any weighing of one person over another — male vs. female, old vs. young, skilled surgeon vs. well-known local drug dealer. All people, in this view, count equally. This seems natural enough, although such egalitarian notions could run up against local cultural variations in moral attitudes, as is clear from a large survey of people's moral intuitions.
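
To make the idea concrete, here is a minimal sketch of how such an egalitarian rule might look in software, assuming a hypothetical decision function that can see only how many people each maneuver would harm and nothing about who those people are. The names and structure below are invented for illustration and do not come from any real vehicle system.

```python
# A minimal sketch of an egalitarian crash-decision rule, in the spirit
# of the German guidelines. Everything here (Outcome, choose_outcome)
# is a hypothetical illustration, not a real autonomous-vehicle API.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is predicted to cause."""
    label: str
    people_harmed: int  # the only feature the rule is allowed to weigh
    # Deliberately absent: age, gender, health, social status.
    # The guidelines prohibit distinctions based on personal features.

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver predicted to injure the fewest people.

    This reflects the one comparison the guidelines call justifiable:
    'general programming to reduce the number of personal injuries.'
    """
    return min(outcomes, key=lambda o: o.people_harmed)

if __name__ == "__main__":
    choices = [
        Outcome("swerve toward the tree", people_harmed=2),
        Outcome("continue straight ahead", people_harmed=3),
    ]
    print(choose_outcome(choices).label)  # -> swerve toward the tree
```

Even a sketch this small makes the hard part visible: the rule is only as egalitarian as the inputs it is given, and someone still has to decide what counts as a "personal feature" to exclude.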

A couple of years ago, researchers used a website to collect some 40 million choices on hypothetical self-driving dilemmas from people in 233 countries and territories, spanning many cultures. They found that while people generally prioritize human lives over animal lives and prefer to save more lives rather than fewer, they also tend to favor saving the young over the old. People from countries in Central and South America tended to prioritize the lives of women and the physically fit. In many regions, people also expressed a preference for high-status individuals, valuing an executive, say, over a homeless person.

Studies of this kind offer a rough guide to real moral preferences and how they vary from place to place, and trying to align with them might be a good starting point for engineers. Even so, surveys can't be the only guide, because prevailing moral attitudes change with time. Historically, in many places, explicitly racist or sexist values held sway, values that most people today regard as plainly unethical.

A better way to identify reliable rules, some experts argue, would be to combine the survey-based approach with analysis based on prevailing ethical theories developed by moral philosophers. One might start with public views but then put these through the filter of ethical theory to see if a rule is, on closer scrutiny, truly defensible. Ethicists refer to views that survive this test as "laundered preferences." For example, all ethical theories would reject preferences for one gender over another, even though the survey found such preferences in some regions. In contrast, preferences to save the largest number of people would survive, as would a preference for the very young over the very old.
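
As a rough illustration of that laundering step, consider the following sketch, in which surveyed preferences are kept only if they pass a theory-based filter. The preference names, weights, and the filter rule are all invented for the example; they are not taken from the study or from any real ethics framework.

```python
# A toy sketch of "laundering" surveyed preferences: raw survey results
# pass through a theory-based filter, and only defensible rules survive.
# All names, weights, and rules below are invented for illustration.

# Preferences as a survey might report them; a positive weight means
# "favor the first group over the second".
surveyed_preferences = {
    ("more_lives", "fewer_lives"): 0.7,
    ("young", "old"): 0.5,
    ("female", "male"): 0.2,          # found in some regions
    ("high_status", "low_status"): 0.3,
}

# A stand-in for scrutiny by ethical theory: veto any rule that weighs
# protected personal features such as gender or social status.
PROTECTED = {"female", "male", "high_status", "low_status"}

def survives_scrutiny(groups: tuple[str, str]) -> bool:
    """A preference survives only if it weighs no protected feature."""
    return not any(g in PROTECTED for g in groups)

laundered_preferences = {
    groups: weight
    for groups, weight in surveyed_preferences.items()
    if survives_scrutiny(groups)
}

print(laundered_preferences)
# Only the "save more lives" and "young over old" preferences remain;
# the gender and status preferences are filtered out, matching the
# argument above.
```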

In this obviously messy area, policies will have to be guided by some mixture of the empirical and the theoretical. When a self-driving car makes a choice and kills some children, it won't be obvious how that decision was made. But people will want to know. And the rules at work had better survive systematic ethical scrutiny.

Mark Buchanan, a physicist and science writer, is the author of the book "Forecast: What Physics, Meteorology and the Natural Sciences Can Teach Us About Economics." He wrote this article for Bloomberg Opinion.
