Mark D. White
Thanks to Orly Lobel at Prawfsblawg for pointing out this New York Times Magazine piece on new ideas. The one she highlights in particular involves "ethical robots" (scroll down in the piece a few items), which will be programmed with basic ethical tenets and will perform more reliably (according to this programming) on the battlefield than humans would.
The idea that robots can be programmed for ethical behavior is based on the false impression that morality boils down to rules, a view that Deirdre McCloskey lampoons so well with her 3×5 index card metaphor. (The fact that the writer of the article mentions Kant's categorical imperative, often mistakenly interpreted as generating easily applicable rules, serves to reinforce this.) Anyone who has read Isaac Asimov's R. Daneel Olivaw novels knows that even a handful of "simple" rules (such as his Three Laws of Robotics) creates endless conflicts and conundrums that require judgment to resolve – and even Asimov's robots, with their advanced positronic brains, struggled with judgment.
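To make that point concrete for readers who think in code, here is a minimal sketch of a rule-following agent in Python. The two rules and the battlefield scenario are my own hypothetical examples, not anything from the article or from Asimov; the point is only that once two rules both apply and demand incompatible actions, the rule set by itself has nothing further to say.

```python
# A toy, entirely hypothetical rule set: each rule is a name, a condition
# on the situation, and the action it demands.
RULES = [
    ("protect_civilian", lambda s: s["civilian_at_risk"], "hold_fire"),
    ("follow_orders",    lambda s: s["ordered_to_fire"],  "fire"),
]

def decide(situation):
    """Return the action the rules demand, or flag an unresolved conflict."""
    demands = {name: action for name, applies, action in RULES if applies(situation)}
    actions = set(demands.values())
    if not actions:
        return "no rule applies"
    if len(actions) == 1:
        return actions.pop()
    # The rules alone cannot settle this; something beyond the rules must judge.
    return "conflict: " + str(demands)

print(decide({"civilian_at_risk": True, "ordered_to_fire": False}))  # hold_fire
print(decide({"civilian_at_risk": True, "ordered_to_fire": True}))   # conflict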
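```

Adding more rules, or a priority ordering among them, just pushes the question back a level: something still has to decide which rule wins in a given case, and that something is judgment.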
The article does say that ethical robots would work "in limited situations," which suggests that the researchers have some idea of the minefield (pun intended) that they're getting into. But my concern is that people will read this piece, appreciate (as I do) what the researchers are trying to do to improve battlefield conditions (though I remain skeptical about the real-world prospects), and come away with the "morality-as-rules" idea of ethics reinforced, along with the notion that the only reason people fail to follow these "rules" is weakness of will, not that ethical dilemmas are complicated, contentious, and often irresolvable.
Even more curiously, the article claims that the robots are programmed to "feel" guilt, in order "to condemn specific behavior and generate constructive change." Certainly, guilt (like emotions in general) is essential to reinforcing moral behavior in imperfect humans (as well as being an integral part of the human experience), but why would robots need it – are they going to be tempted to resist their programming? One would think the point of developing robots was to guarantee "ethical" rule-based behavior – so where does the guilt come in?