National Science Foundation Award #1849348.
Investigators: Thomas Williams, Hao Zhang and Neil Dantam. Sponsor: Colorado School of Mines.
Robots are increasingly being used across many sectors of society, including education, eldercare, search and rescue, and space robotics. In all of these domains, it is crucial that robots be able to accept commands through natural language, so as to enable natural, effective, and understandable interaction. When commanded through natural language, robots must be able to ensure that the way they achieve users’ commands aligns with human expectations, especially the social and moral rules humans agree to by community consensus, known as social and moral norms. Moreover, if a robot is commanded in a way that cannot be fully achieved while complying with these social and moral norms, that robot must be able to explain to the user why it must reject the given command, offering acceptable alternatives when available. While there has been some previous work on ethical planning and command rejection, it has largely not accounted for cases in which robots are uncertain about social or moral norms, or cases in which social and moral norms change between contexts. Research is needed to give robots the perceptual capabilities they need to identify such contexts, as well as the rich language understanding and generation abilities needed to communicate about social and moral norms.
This research will develop an Intelligent Physical System (IPS) capable of (1) performing ethical reasoning using a dynamic set of norms that changes along with the robot’s context, (2) using these reasoning capabilities to effectively reject or offer alternatives to inappropriate commands, and (3) learning rich representations of the contexts relevant to its set of moral and social norms. These capabilities are crucial as robots move into the real world, in which they (1) may be given unethical commands, either due to malfeasance or ignorance, (2) may be required to operate not in one context but in a variety of contexts, each of which may have its own relevant social and moral norms, and (3) may need to learn about new contexts from both human instruction and their own perception. In order to develop this IPS in consideration of these real-world challenges, this research will produce the first algorithms for identifying and rejecting inappropriate commands in uncertain, dynamic, and realistically perceived contexts, using techniques from (1) natural language understanding and generation in uncertain and open worlds and Dempster-Shafer Theory; (2) task and motion planning through constrained inference and constrained optimization; and (3) representation learning for long-term autonomy and simultaneous localization and mapping.
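To make the role of Dempster-Shafer Theory concrete, the sketch below shows Dempster's rule of combination, the standard evidence-fusion operation in that theory, fusing two uncertain assessments of whether a commanded action is norm-compliant. This is an illustrative sketch only: the frame of discernment ({permissible, forbidden}) and all mass values are invented for the example and do not come from the project.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozenset focal elements to masses summing to 1.
    Mass on non-intersecting focal elements is conflict, renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that would fall on the empty set
    # Normalize remaining mass by (1 - conflict), per Dempster's rule
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical frame of discernment for a commanded action
P = frozenset({"permissible"})
F = frozenset({"forbidden"})
PF = P | F  # total ignorance: could be either

# Source 1 (e.g. context perception): weak evidence the action is forbidden
m1 = {F: 0.5, PF: 0.5}
# Source 2 (e.g. a learned norm model): weak evidence it is permissible
m2 = {P: 0.4, PF: 0.6}

fused = combine(m1, m2)
# fused: {F: 0.375, P: 0.25, PF: 0.375} — substantial mass stays on
# ignorance, which is what lets the robot represent norm uncertainty
# rather than forcing a binary accept/reject decision.
```

One attraction of this representation for the command-rejection setting above is that, unlike a single probability, it distinguishes "evidence is conflicting" from "evidence is absent," so a robot can tell when it should ask for clarification instead of acting.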
- Start date: February 15, 2019
- End date: January 31, 2022 (Estimated)
- Amount: $570,000.00
Publications Produced as a Result of This Research
- Wen, Ruchen and Siddiqui, Mohammed Aun and Williams, Tom. “Dempster-Shafer Theoretic Learning of Indirect Speech Act Comprehension Norms,” Proceedings of the AAAI Conference on Artificial Intelligence, 2020
- Williams, Tom and Zhu, Qin. “An Experimental Ethics Approach to Robot Ethics Education,” Proceedings of the AAAI Conference on Artificial Intelligence, 2020
- Siva, Sriram and Wigness, Maggie and Rogers, John and Zhang, Hao. “Robot Adaptation to Unstructured Terrains by Joint Representation and Apprenticeship Learning,” Robotics: Science and Systems (RSS), 2019