Books  |  November 13, 2018

Robot Rights

Book by David J. Gunkel.
Published by MIT Press.
256 pages.

A provocative attempt to think about what was previously considered unthinkable: a serious philosophical case for the rights of robots.

We are in the midst of a robot invasion, as devices of various configurations and capabilities slowly but surely take up increasingly important positions in everyday social reality—self-driving vehicles, recommendation algorithms, machine-learning decision-making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question of the social status of these artifacts has been largely overlooked. In this book, David Gunkel offers a provocative attempt to think about what has previously been regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing.

In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” to evaluate the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals holds up under scrutiny. In response, he offers an innovative alternative that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems.

Table of Contents

1: Thinking the Unthinkable

  • 1.1 Robot
    • 1.1.1 Science Fiction
    • 1.1.2 Indeterminate Determinations
    • 1.1.3 Moving Target
    • 1.1.4 Results/Summary
  • 1.2 Rights
    • 1.2.1 Definition
    • 1.2.2 Theories of Rights
  • 1.3 Robot Rights or the Unthinkable
    • 1.3.1 Ridiculous Distractions
    • 1.3.2 Justifiable Exclusions
    • 1.3.3 Literal Marginalization
    • 1.3.4 Exceptions that Prove the Rule
  • 1.4 Summary

2: Robots Cannot Have Rights; Robots Should Not Have Rights

  • 2.1 Default Understanding
  • 2.2 Literally Instrumental
    • 2.2.1 Being vs. Appearance
    • 2.2.2 Ontology Precedes Ethics
    • 2.2.3 Limited Rights
  • 2.3 Instrumentalism at Work
    • 2.3.1 Expertise
    • 2.3.2 Robots Are Tools
    • 2.3.3 Is/Ought Inference
  • 2.4 Duty Now and for the Future
  • 2.5 Complications, Difficulties, and Potential Problems
    • 2.5.1 Tool ≠ Machine
    • 2.5.2 Not Just Tools
    • 2.5.3 Ethnocentrism
  • 2.6 Summary

3: Robots Can Have Rights; Robots Should Have Rights

  • 3.1 Evidence, Instances, and Examples
    • 3.1.1 Philosophical Arguments
    • 3.1.2 Legal Arguments
    • 3.1.3 Common Features and Advantages
  • 3.2 Complications, Difficulties, and Potential Problems
    • 3.2.1 Infinite Deferral
    • 3.2.2 Is/Ought Inference
  • 3.3 Summary

4: Although Robots Can Have Rights, Robots Should Not Have Rights

  • 4.1 The Argument
  • 4.2 Complications, Difficulties, and Potential Problems
    • 4.2.1 Normative Proscriptions
    • 4.2.2 Ethnocentrism
    • 4.2.3 Slavery 2.0
  • 4.3 Summary

5: Even If Robots Cannot Have Rights, Robots Should Have Rights

  • 5.1 Arguments and Evidence
    • 5.1.1 Anecdotes and Stories
    • 5.1.2 Scientific Studies
    • 5.1.3 Outcomes and Consequences
  • 5.2 Complications, Difficulties, and Potential Problems
    • 5.2.1 Moral Sentimentalism
    • 5.2.2 Appearances
    • 5.2.3 Anthropocentrism, or “It’s Really All About Us”
    • 5.2.4 Critical Problems
  • 5.3 Summary

6: Thinking Otherwise

  • 6.1 Levinas 101
    • 6.1.1 A Different Kind of Difference
    • 6.1.2 Social and Relational
    • 6.1.3 Radically Superficial
  • 6.2 Applied (Levinasian) Philosophy
    • 6.2.1 The Face of the Robot
    • 6.2.2 Ethics Beyond Rights
  • 6.3 Complications, Difficulties, and Potential Problems
    • 6.3.1 Anthropocentrism
    • 6.3.2 Relativism and Other Difficulties

About the Author

David J. Gunkel is Distinguished Teaching Professor of Communication Technology at Northern Illinois University and the author of The Machine Question: Critical Perspectives on AI, Robots, and Ethics.