Report by Jan van Leeuwen and Jiří Wiedermann. Dept. of Information and Computing Sciences, Utrecht University, The Netherlands. 37 pages.
We consider scenarios in which autonomous robots are expected to interact in accordance with certain rules of law or ethics, or any other set of formally expressed constraints. Can an ‘observer’ actually tell, from inspecting a robot’s program and interactions, whether the robot will always follow the rules it is said to obey in its interactions with other robots? We argue that, under reasonable assumptions about robot programming and about an observer’s capabilities, no (deterministic) module, algorithmic or otherwise, will enable an observer to do so for all robots in any feasible way. This means that, under these assumptions, guarantees about the legal or ethical behaviour of autonomous robots are in general not verifiable at runtime by any means. The result holds for all ‘non-trivial’ robot interaction properties.