News article by Jacob Brogan, published in Slate.
This seems like a rough time to be human: Artificial intelligences are beating us at Go, getting better at driving cars, and doing all sorts of other stuff. How much longer until they just rise up and kill us?
Longer than you might think, and though there are good reasons for caution and concern, a lot of the talk you hear about Terminator-type scenarios is excessively alarmist. Read an article on, say, the rise of robot butchers, and you’ll inevitably find commenters worrying that the system is going to go haywire and attack its human masters. Even when they’re a little jokey, these responses tend to bear the trace of the old Luddite anxiety that machines are somehow fundamentally opposed to humanity.
If you really get into it with A.I. researchers, you’ll find that most of them aren’t worried about murder-bots actively looking to KILL ALL HUMANS. Instead, they’re concerned that we don’t really know what we’re getting into as we rapidly engineer systems that we can barely comprehend, let alone control. It’s this concern that’s led Elon Musk—who’s supported all sorts of A.I. research—to describe artificial intelligence as an “existential threat.” He seems concerned that we may not be able to direct the forces we’re calling into being. [ . . . ]