More ethical than humans?
Many ethicists and artificial intelligence developers want to ensure people are kept in the loop when lethal force is applied. At the moment, that's a given, at least among nations adhering to the law of war. Robots struggle to differentiate between soldiers and civilians in complex battle settings.
But the day may come, some say, when robots are able to be more ethical than human troops, because their judgment wouldn't be clouded by emotions such as vengefulness or self-preservation.
“Unfortunately, humanity has a rather dismal record in ethical behavior in the battlefield,” Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, wrote in a guest blog for the IEEE, a technical professional organization. “Such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters.”