'Moral Machine' Ponders Driverless Dilemma of Who Dies
By Matt Schmitz
October 18, 2016
CARS.COM — In the late 1960s, Kelsey D. Atherton noted in a recent Popular Science piece on the driverless-car dilemma, the so-called Trolley Problem asked which course of action was the moral one in the case of a runaway streetcar: Pull the track-switch lever and kill one person, or leave the car careening on its course to kill five?
Atherton recounted his experience with MIT Media Lab’s Moral Machine, a study that presents participants with 13 different scenarios, each calling for whichever of two choices reflects the lesser evil in the mind of the test subject. In other words: Who should be allowed to die?
“Driverless cars, the future people-carrying robots that promise great advances in automobile safety, will sometimes fail,” Atherton wrote. “Those failures will, hopefully, be less common than the deaths and injuries that come with human error, but it means the computers and algorithms driving a car may have to make very human choices when, say, the brakes give out: Should the car crash into five pedestrians, or instead adjust course to hit a cement barricade, killing the car’s sole occupant instead?”
Moreover, Atherton noted, the Moral Machine poses choices such as swerving or staying the course; striking pedestrians crossing legally versus jaywalkers; humans versus animals; a child versus an adult; a homeless person versus a pregnant woman. Atherton also discussed “emergent behavior,” in which artificial intelligence behaves in an unanticipated way, pointing to the recent failure of a Tesla Model S in Autopilot mode to distinguish a white truck against a pale sky, a mistake that resulted in the driver’s death.
Read Atherton’s full piece, titled “MIT Game Asks Who Driverless Cars Should Kill,” here.
Former Assistant Managing Editor-News Matt Schmitz is a veteran Chicago journalist indulging his curiosity for all things auto while helping to inform car shoppers.