By Michael Anderson and Susan Leigh Anderson.
Published in Scientific American.
Robots that make autonomous decisions, such as those being designed to assist the elderly, may face ethical dilemmas even in seemingly everyday situations. One way to ensure ethical behavior in robots that interact with humans is to program general ethical principles into them and let them apply those principles to make decisions on a case-by-case basis. Artificial-intelligence techniques can produce the principles themselves, using logic to abstract them from specific cases of ethically acceptable behavior. The authors have followed this approach and, for the first time, programmed a robot to act based on an ethical principle.
Autonomous machines will soon play a big role in our lives. It’s time they learned how to behave ethically.
In the classic nightmare scenario of dystopian science fiction, machines become smart enough to challenge humans—and they have no moral qualms about harming, or even destroying, us. Today’s robots, of course, are usually developed to help people. But it turns out that they face a host of ethical quandaries that push the boundaries of artificial intelligence, or AI, even in quite ordinary situations . . .