Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

“As Adam Elkus has argued in Slate, the trouble with imparting ‘human’ values onto computers is that different humans value competing things under varied circumstances. In that sense, the true lesson of Moral Machine may be that there’s no such thing as a moral machine, at least not under the circumstance that the site invites its visitors to explore.”

http://www.slate.com/blogs/future_tense/2016/08/11/moral_machine_from_mit_poses_self_driving_car_thought_experiments.html

