Conference Papers  |  January 3, 2018

From Machine Ethics To Machine Explainability and Back

Conference paper by Kevin Baum, Holger Hermanns, and Timo Speith. Presented at the 2018 International Symposium on Artificial Intelligence and Mathematics, Special Session on Formalising Robot Ethics.

Abstract:

We find ourselves surrounded by a rapidly increasing number of autonomous and semi-autonomous systems. Two grand challenges arise from this development: Machine Ethics and Machine Explainability. Machine Ethics, on the one hand, is concerned with behavioral constraints for such systems, set up in a formal, unambiguous, algorithmizable, and implementable way, so that morally acceptable, suitably restricted behavior results; Machine Explainability, on the other hand, enables systems to explain their actions and argue for their decisions, so that human users can understand and justifiably trust them.

In this paper, we stress the need to link and cross-fertilize these two areas. We point out how Machine Ethics calls for Machine Explainability, and how Machine Explainability involves Machine Ethics. We develop both facets using a toy example from the context of medical care robots. In this context, we argue that moral behavior, even if it were verifiable and verified, is not enough to establish justified trust in an autonomous system. Such behavior needs to be complemented by the ability to explain decisions, that is, by a Machine Explanation component.

Conversely, such explanations need to refer to the system’s model- and constraint-based Machine Ethics reasoning. We propose to apply a framework from formal argumentation theory to the task of generating useful explanations by the Machine Explanation component, and we sketch how the content of these arguments must draw on the moral reasoning of the Machine Ethics component.
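
As a rough illustration of the kind of formal argumentation machinery referred to here (a sketch for orientation, not the paper's own implementation), consider a Dung-style abstract argumentation framework: a set of arguments plus an attack relation, from which the accepted arguments can be computed. The argument names and the care-robot attack relation below are hypothetical.

    # Minimal sketch of a Dung-style abstract argumentation framework.
    # The scenario and argument names are hypothetical, chosen only to
    # echo the medical care robot setting.

    def grounded_extension(arguments, attacks):
        """Return the grounded extension of the framework (arguments, attacks).

        'attacks' is a set of (attacker, target) pairs. Starting from the
        empty set, repeatedly accept every argument all of whose attackers
        are themselves attacked by an already-accepted argument; stop at
        the fixed point.
        """
        accepted = set()
        while True:
            defended = {
                a for a in arguments
                if all(any((d, b) in attacks for d in accepted)
                       for (b, t) in attacks if t == a)
            }
            if defended == accepted:
                return accepted
            accepted = defended

    # Hypothetical scenario: fetching medication is attacked by the fact
    # that the patient is asleep, which is in turn overridden (attacked)
    # by the dose being time-critical.
    args = {"fetch_medication", "patient_asleep", "dose_is_critical"}
    atts = {("patient_asleep", "fetch_medication"),
            ("dose_is_critical", "patient_asleep")}
    print(sorted(grounded_extension(args, atts)))
    # ['dose_is_critical', 'fetch_medication']

The accepted arguments, together with the attacks that defend them, are the kind of material an explanation component could present to a user: not just the decision, but the considerations that support it and survive challenges to it.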