What if your answer to an absurd hypothetical question had no bearing on how you behaved in real life?
“Scientists have been using a set of cheap-and-easy mental probes (Would you hit the railroad switch?) to capture moral judgment. But if the answers to those questions don’t connect to real behavior, then where, exactly, have these trolley problems taken us?”
The Trolley Problem takes a dark turn.
“Most of us would sacrifice one person to save five. It’s a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy.
“That’s the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is, until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can’t even figure out ourselves.”
Similar to the dilemma raised by the trolley car problem: is it right to sacrifice one person to save many?
“Here’s the story: Hawkeye has gone insane and is spending time at a hospital. Throughout the episode, he tells this story about how they were able to go out to a beach and have a great day, just playing at the beach. They all piled onto a bus to head home. Suddenly, they realised that the enemy was nearby, so they shut off the engine, turned out all the lights, and everybody got quiet. Except for a woman in the back with a chicken that wouldn’t get quiet. In this scene, BJ shows up to tell Hawkeye that he (BJ) is going home, but he can’t get through to him because Hawkeye is getting very upset. So BJ calls in the doctor.”
“It’s tempting to hope that someone else will come along and solve the trolley problem. After all, finding a solution requires confronting some uncomfortable truths about one’s moral sensibilities. Imagine, for instance, that driverless cars are governed by a simple rule: minimize casualties. Occasionally, this rule may lead to objectionable results — e.g., mowing down a mother and her two children on the sidewalk rather than hitting four adults who have illegally run into the street. So, the rule might be augmented with a proviso: minimize casualties, unless one party put itself in danger.”
“When people are dying and you can only save some, how do you choose? Maybe you save the youngest. Or the sickest. Maybe you even just put all the names in a hat and pick at random. Would your answer change if a sick person was standing right in front of you?
“In this episode, we follow New York Times reporter Sheri Fink as she searches for the answer. In a warzone, a hurricane, a church basement, and an earthquake, the question remains the same. What happens, what should happen, when humans are forced to play god?”
“When a surge of patients — from a disaster, disease outbreak or terrorist attack — overwhelms hospitals, how should you ration care? Whose lives should be saved first?”
A site that allows you to judge how you’d respond to various moral dilemmas created by “intelligent” machines like driverless cars.
“As Adam Elkus has argued in Slate, the trouble with imparting “human” values onto computers is that different humans value competing things under varied circumstances. In that sense, the true lesson of Moral Machine may be that there’s no such thing as a moral machine, at least not under the circumstance that the site invites its visitors to explore.”