Report by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Affect is a core aspect of intelligence. Drives and emotions, such as excitement and depression, coordinate action throughout intelligent life, even in species that lack a nervous system. We are coming to realize that emotions are not an impediment to rationality; arguably, they are integral to rationality in humans. Emotions are one evolved mechanism for satisficing: for getting done what needs doing in the time available, with the information at hand. Emotions are also central to how individuals and societies coordinate their actions. Humans are therefore susceptible to emotional influence, for better and for worse.
We would like to ensure that AI is used to help humanity to the greatest extent possible in all contexts. In particular, artifacts used in society could cause harm either by amplifying or by damping human emotional experience. It is quite possible we have already reached a point where AI is affecting humans psychologically more than we realize. Further, even the rudimentary versions of synthetic emotions already in use significantly shape how AI is perceived by policy makers and the general public.
This subcommittee addresses issues related to emotions and emotion-like control in both humans and artifacts. Our working groups have put forward candidate recommendations on a variety of concerns: how affect varies across human cultures; the particular problems of artifacts designed for intimate relations; how intelligent artifacts may be used for “nudging”; how systems can support (or at least not interfere with) human flourishing; and appropriate policy for artifacts designed with their own affective systems.
• Systems Across Cultures
• When Systems Become Intimate
• System Manipulation/Nudging/Deception
• Systems Supporting Human Potential (Flourishing)
• Systems with Their Own Emotions