When it comes to self-driving cars, one of the biggest questions on many minds is, “Would you let your car kill you for the greater good?”
A modern spin on the trolley problem (by now something of a joke), the dilemma goes like this: a vehicle loses control and two options arise. It can crash into a group of pedestrians crossing its path, saving the driver, or crash into a sandpit, saving the pedestrians and killing the driver.
Since this is a difficult topic to grapple with, researchers at the Massachusetts Institute of Technology (MIT) have created the Moral Machine. It’s a game (for lack of a better term) that puts the user in control of the situation.
The game has three modes: judge, design, and browse.
The first, as the name implies, presents users with random moral dilemmas revolving around a car whose brakes have failed. Design mode enables users to create their own scenarios.
Although the default outcome is death, users can set the fate of each character independently using a dropdown menu, and they can add legal implications by setting the pedestrian signal.
Finally, in browse mode, users can look through existing scenarios, use the like button to express appreciation, and share or link to scenarios using the corresponding buttons.
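As a rough illustration of the kind of information the design mode captures, here is a minimal Python sketch that models a user-built dilemma as a set of characters, each with an individually set fate (death by default), plus a pedestrian signal carrying the legal implication. The names and structure are assumptions for illustration only, not the Moral Machine’s actual code.

```python
from dataclasses import dataclass, field
from enum import Enum


class Fate(Enum):
    """Possible outcomes for a character; death is the default impact."""
    DEATH = "death"
    INJURY = "injury"
    UNHARMED = "unharmed"


class Signal(Enum):
    """State of the pedestrian signal, which adds the legal dimension."""
    WALK = "walk"            # pedestrians crossing legally
    DONT_WALK = "dont_walk"  # pedestrians crossing against the signal
    NONE = "none"            # no signal present


@dataclass
class Character:
    role: str                # e.g. "driver" or "pedestrian" (hypothetical labels)
    fate: Fate = Fate.DEATH  # default outcome is death, as in design mode


@dataclass
class Scenario:
    """One user-designed dilemma: who is involved, what happens to each,
    and whether the pedestrians had the right of way."""
    characters: list[Character] = field(default_factory=list)
    pedestrian_signal: Signal = Signal.NONE


# Example: the driver survives by hitting two pedestrians crossing legally.
scenario = Scenario(
    characters=[
        Character("driver", Fate.UNHARMED),
        Character("pedestrian"),
        Character("pedestrian"),
    ],
    pedestrian_signal=Signal.WALK,
)
print(scenario)
```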
Although the concept is novel, there are a few key limitations worth noting. The Verge spoke with Anuj K. Pradhan, an assistant research scientist in UMTRI’s Human Factors Group who specializes in human behavior systems.
Pradhan said that while these studies and tools are helpful, they shouldn’t be taken as a direct comparison with human drivers,
Because human drivers who face these situations may not even be aware that they are [facing a moral situation], and cannot make a reasoned decision in a split-second. Worse, they cannot even decide in advance what they would do, because human drivers, unlike driverless cars, cannot be programmed.
Although self-driving cars pose a moral dilemma to the average consumer, for many of the programmers responsible for these systems it’s less of a concern than you might expect. As The Guardian found in recent conversations with employees at X, the Google sibling company in charge of developing self-driving cars, the issue simply hasn’t come up.
Simply put, rather than worrying about programming logic to decide who lives and dies, the priority is on making sure the situation never arises. The article went on to note that even if things did reach that point, there would be no time to make a moral decision.
Andrew Chatham, a principal engineer on the self-driving car project, explained that “you’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.” Nathaniel Fairfield, another engineer on the project, joked with The Guardian that the real question is, “What would you …oh, it’s too late.”
Image Source: Becky Stern