Doctoral Thesis by Heather Elizabeth Gary.
University of Washington.
People frequently collaborate with one another: they work on intellectual, artistic, or other pursuits to generate ideas or products that are different from, and ideally better than, those an individual could create alone. In our evolving technological landscape, the collaborative process is changing. Not only can we communicate with one another and access and share information through our technologies, but we can also interact directly with our technologies, such as social robots. Provocative and important questions emerge with regard to the creative products we may generate with them: Will robots be credited for their contributions? What characteristics does a robot need to have in order to elicit attributions of credit? Finally, would it be unfair to a robot not to attribute credit to it when it makes a contribution, and if so, what is the nature of that unfairness? This set of four studies sought to address these questions by investigating adults' attributions of psychological agency, credit, and fairness to a humanoid social robot, Robovie.
In Study 1, 24 adults interacted with Robovie for approximately 30 minutes and were then interviewed about creditworthiness and fairness. In Study 2, 80 adults were randomly assigned to 1 of 2 conditions (40 per condition) in which Robovie's behavior was described as either internally or externally generated (i.e., remotely controlled). In Study 3, 240 adults were randomly assigned to 1 of 8 conditions (40 per condition) that manipulated three aspects of Robovie's behavior: speech, movement, and eye gaze (a 2 x 2 x 2 design). All participants in Studies 2 and 3 then watched a 4-minute video of a Study 1 participant interacting with Robovie and answered questions about Robovie's psychological agency and creditworthiness. In Study 4, 48 adults were randomly assigned to 1 of 2 conditions (24 per condition) in which Robovie's behavior was described as either internally or externally generated. All participants watched a 7-minute human-robot interaction video and were then interviewed to ascertain their attributions of psychological agency, creditworthiness, and fairness to Robovie.
Across all four studies, results indicated that adults do attribute credit to a social robot that engages with a person on a collaborative task. Results from Studies 2, 3, and 4 show that adults attribute significantly more agency to Robovie when its behaviors, specifically speech (Study 3), are self-generated, and further demonstrate that attributions of agency fully mediate the relationship between self-generated behavior and the attribution of credit to Robovie (Study 2).
Finally, results from Study 4 suggest that adults are willing to regard Robovie as the kind of entity that can find itself in an unfair situation, significantly more so when its behaviors are self-generated than when they are remotely controlled, while simultaneously reporting that Robovie cannot experience unfairness. Results are discussed in light of the possibility that social robots are part of a novel category of beings, about which we reason differently than we do about canonical agents. Further discussion addresses the implications of this work for robot design; what it means for an entity to have a mind; and whether, in creating technologies that appear to think and feel, we are engaging in a form of deception. Also discussed are future directions for research in this exciting new area of investigation.
Peter Kahn and his students Solace Shen and Heather Gary (and other collaborators) won the Best Paper award at the International Conference on Human-Robot Interaction 2012 for "Do People Hold a Humanoid Robot Morally Accountable for the Harm It Causes?" Solace Shen presented the paper at the conference.