Driverless Car Ethics: When “Gut Decisions” Are Made By Machines

With so many vehicles now offering at least partially automated features, many drivers are growing more comfortable with the idea of technology-assisted driving.

After all, it’s easy to see the benefits of technology you use every day.

Cruise control keeps us from speeding; smart sensors and camera parking-assist systems take the fear out of parallel parking; and nearly everyone you know uses some form of technology-assisted navigation when driving somewhere unfamiliar, even if it’s just through an app on their phone.

Safety in (Automated) Numbers

As a public, we’re not yet as trusting of semi-autonomous or fully autonomous vehicles, but history suggests this may shift as more drivers and commuters gain firsthand, personal experience with the new technology.

After all, society has adapted to elevators, trains, and planes, all inventions that can cause fatalities in rare, worst-case scenarios. Only daily experience and social norms allow us to accept that those scenarios are so low-probability that we feel safe using these machines every day.


Though we’re at least five years away from fully autonomous vehicles reaching the consumer market, safety experts project that if the commuting public accepts their widespread use, the benefit to public safety could be unprecedented.

We lose about 1.25 million people to road traffic crashes every year, according to the World Health Organization, and human error has been estimated to account for 94% of all motor vehicle crashes. Worldwide, road crashes are the leading cause of death for people aged 15-29. Were we not so accustomed to driving culture and vehicular accidents, this would surely be treated as a public health crisis.
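To put those figures in perspective, here’s a back-of-envelope sketch of the potential impact. Only the 1.25 million deaths and the 94% human-error share come from the sources cited above; the prevention rates are purely hypothetical illustrations, not projections from any study.

```python
# Back-of-envelope arithmetic using the figures cited above.
# The prevention rates are hypothetical illustrations, not projections.

ANNUAL_ROAD_DEATHS = 1_250_000  # WHO estimate, worldwide, per year
HUMAN_ERROR_SHARE = 0.94        # estimated share of crashes involving human error

deaths_tied_to_human_error = ANNUAL_ROAD_DEATHS * HUMAN_ERROR_SHARE

for prevention_rate in (0.25, 0.50, 0.90):  # hypothetical fractions prevented
    lives_saved = deaths_tied_to_human_error * prevention_rate
    print(f"Preventing {prevention_rate:.0%} of human-error deaths "
          f"would save roughly {lives_saved:,.0f} lives per year")
```

Even the most conservative of these hypothetical rates would mean hundreds of thousands of lives saved every year.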

These numbers are cited by the National Highway Traffic Safety Administration (NHTSA) to explain its commitment to working with the automotive industry and regulators on the safe deployment of autonomous vehicles for the public good.

Given everything we know at this stage of development, the evidence appears to overwhelmingly favor autonomous vehicles, even if we account for hard-to-predict worst-case scenarios.

But with new technology, it’s the worst-case scenarios that stick in people’s brains. Whatever problems come to light with the first Level 5 models of autonomous vehicles on the market, they will have an outsized effect on public trust and rates of adoption.

Worst-Case Scenarios: How Researchers Plan

Researchers and manufacturers developing driverless vehicles know how important proof of safety will be for public adoption of the nascent technology.

As a result, many teams of experts are devoted both to predicting the rare situations that may cause challenges and to refining the algorithms and machine-learning components that will be relied on when human drivers relinquish control of decision-making.

Level 5 autonomy means vehicles will have to be programmed to make split-second decisions in even the most unusual scenarios, the kind you may statistically never encounter as a driver, or encounter only very rarely.


Your Car’s Ethics: Do You Want a Model Citizen, or a Protector?

The A.I. decision-making components involved in creating truly driverless vehicles open up debates more familiar to ethics professors than to mechanics or tech savants.

A driverless car may sometimes have to choose the lesser of two evils. Imagine a scenario in which a vehicle’s only options are a head-on collision with a group of people blocking the road ahead, or swerving into a side barrier to avoid killing the pedestrians.

If the car’s program chooses the barrier collision, it risks injuring or killing its own passengers.

If it chooses to drive straight into the group of pedestrians blocking the road, casualties are certain, but there’s a high probability that the car’s own passengers would survive. Should your car choose the option that has some probability of saving your life, no matter how high the cost to others, or should it always choose to minimize total fatalities, even when that may result in passenger death?

Economists and ethicists refer to dilemmas like this as a Tragedy of the Commons: a situation in which individuals acting in their own self-interest collectively undermine the greater common good.

Safety experts who seek to reduce the number of casualties overall strongly recommend the “Utilitarian” approach, which prioritizes decision-making that results in the fewest casualties over the safety of one vehicle’s passengers. But in practice, humans tend to act out of self-interest, even if they support acting on behalf of the common good in theory.
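To make the contrast concrete, here’s a toy sketch of the two competing policies applied to the barrier-versus-pedestrians scenario described above. This is not any manufacturer’s actual decision logic, and the outcome probabilities are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_total_deaths: float    # passengers and pedestrians combined
    passenger_survival_prob: float  # chance the car's own occupants live

# Invented numbers for the scenario above: swerve into the barrier,
# or continue into the group of pedestrians blocking the road.
options = [
    Option("swerve into barrier", expected_total_deaths=0.8, passenger_survival_prob=0.4),
    Option("hit pedestrian group", expected_total_deaths=3.0, passenger_survival_prob=0.95),
]

# Utilitarian policy: minimize expected casualties overall.
utilitarian_choice = min(options, key=lambda o: o.expected_total_deaths)

# Passenger-protective policy: maximize the occupants' survival odds.
protective_choice = max(options, key=lambda o: o.passenger_survival_prob)

print("Utilitarian policy chooses:         ", utilitarian_choice.name)
print("Passenger-protective policy chooses:", protective_choice.name)
```

With these invented numbers, the utilitarian rule swerves into the barrier while the passenger-protective rule drives on; the entire debate reduces to which objective the car is told to optimize.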

Recent surveys seem to bear this out when it comes to our preferences for driverless cars. A survey of 2,000 U.S. residents found that though a majority strongly agrees that autonomous vehicles should be programmed to save as many lives as possible (the “Utilitarian” model), the same majority of respondents said they would be hesitant to buy such a car.

Instead, 59% indicated that, when buying for themselves, they would prefer a vehicle programmed to save its passengers at all costs. What’s more, participants expressed a strong preference against the government passing regulations that would require autonomous vehicles to adopt a Utilitarian algorithm.

If the goal is to reduce overall casualties for the greater good, presenting the public with an ethical model that makes them uncomfortable could leave the concept dead in the water, slowing or even halting public adoption.

A Reality-Based Model: Keeping Risks In Perspective


There are two things we should all keep in mind when reading about how manufacturers attempt to plan for these highly theoretical worst-case scenarios:

  1. They’re extremely rare, and they’re equally divisive as ethical dilemmas even when human drivers are calling the shots.
  2. They are not “new” problems. When these scenarios come up in relation to driverless cars, it’s easy to get the impression that driverless cars are more dangerous than the vehicles already on the road. In truth, we already accept that other human drivers may make these impossible decisions, in ways we agree or disagree with, at any time.

As much as possible, manufacturers and regulators should seek to make autonomous vehicles the safest ones on the road. At the same time, the vehicles’ decision-making processes should not stray too far outside what the consumer and commuting public are comfortable with, or little progress will be made on overall road safety.

Better to design cars that mimic humans’ self-preservation instinct in these very rare scenarios than to hold out for the ideal of a Utilitarian approach and risk scaring the consumer market away from adoption. Beyond that, it may be more helpful to focus on all the ways that properly tested autonomous vehicles can prevent the types of human driving errors that currently cause so many fatalities every year.

If you’d like to learn more about the kinds of ethical situations researchers are considering for development of driverless technology, check out M.I.T.’s interactive “Moral Machine,” and see where your own preferences lie.