News  |  July 10, 2017

Asimov’s Laws won’t stop robots harming humans so we’ve developed a better solution

By Christoph Salge. 
Published in The Conversation. 

Excerpt:

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction has already envisioned this problem and suggested various potential solutions. The most famous is author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots from harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.

Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act, so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible. [ . . . ]