Article by James Dawes.
Published in Nature Human Behaviour.
The development of autonomous weapon systems, by removing the human element of warfare, could make war crimes and atrocities a thing of the past. But if these systems are unable to respect the principles of humanitarian law, we might create a super-intelligent predator that is beyond our control.
Stephen Hawking, Bill Gates, Henry Kissinger and Elon Musk have all warned that the emergence of artificial intelligence (AI) could, in Hawking’s words, “spell the end of the human race”. Most of the excited discussion of their apocalyptic predictions hovers somewhere between philosophical thought experiments and science fiction. What if, philosopher Nick Bostrom asks, AI emerges within a paperclip-making machine? With its vast super-intelligence and its desire to problem-solve, might such a machine transform all matter, including humans, into resources for maximizing the production of paperclips? Isn’t it possible and even likely, Musk asks, that AI has already emerged and we are trapped inside the matrix it is using to enslave us?
Meanwhile, as these conversations continue, a much quieter and relentlessly practical techno-revolution is occurring. Governments and militaries around the world are investing billions of dollars in developing autonomous weapon systems (AWS). The US Department of Defense defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator”. Proponents of AWS research argue that the atrocities of war are the natural consequence of basic human behaviours and emotions, such as rage, fear and survival instinct, and that by removing the human element AWS could, to paraphrase Hawking, spell the end of war crimes. However, critics of AWS argue that removing the human element means removing non-algorithmic moral intuition and feeling, including empathy, mercy and the humility of doubt — the distinctly human features that mitigate the horrors of war. Unless a pre-emptive ban on AWS is implemented now, they argue, we face a future of unrestrained and even unrestrainable ‘killer robots’. Who is right? [ . . . ]
About the Author
James Dawes, professor at Macalester College, is the author of The Novel of Human Rights (Harvard, 2018), Evil Men (Harvard, 2013), That the World May Know: Bearing Witness to Atrocity (Harvard, 2007), and The Language of War (Harvard, 2002).