News article by Leah Worthington.
Published in California Magazine, UC Berkeley.
Asked if the race to achieve superhuman artificial intelligence was inevitable, Stuart Russell, UC Berkeley professor of computer science and leading expert on AI, says yes.
“The idea of intelligent machines is kind of irresistible,” he says, and the desire to make intelligent machines dates back thousands of years. Aristotle himself imagined a future in which “the plectrum could pluck itself” and “the loom could weave the cloth.” But the stakes of this future are incredibly high. As Russell told his audience during a talk he gave in London in 2013, “Success would be the biggest event in human history … and perhaps the last event in human history.”
For better or worse, we’re drawing ever closer to that vision. Services like Google Maps and the recommendation engines that drive online shopping sites like Amazon may seem innocuous, but advanced versions of those same algorithms are enabling AI that is more nefarious. (Think doctored news videos and targeted political propaganda.)
AI skeptics assure us that we will never be able to create machines with superhuman intelligence. But Russell, who runs Berkeley’s Center for Human-Compatible Artificial Intelligence and wrote Artificial Intelligence: A Modern Approach, the standard text on the subject, says we’re hurtling toward disaster. In his forthcoming book, Human Compatible: Artificial Intelligence and the Problem of Control, he compares AI optimists to a bus driver who, as he accelerates toward a cliff, assures the passengers they needn’t worry—he’ll run out of gas before they reach the precipice. […]