News Article by Edmund L. Andrews.
Published on the Stanford Institute for Human-Centered Artificial Intelligence website.
In the rush to develop national strategies on artificial intelligence, a new report finds, most governments pay lip service to civil liberties.
More than 25 governments around the world, including those of the United States and across the European Union, have adopted elaborate national strategies on artificial intelligence — how to spur research; how to target strategic sectors; how to make AI systems reliable and accountable.
Yet a new analysis finds that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.
That’s a mistake, says Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, which produced the report in conjunction with a leading international digital rights organization called Global Partners Digital.
“Many people are unaware that there are authoritarian-leaning governments, with China leading the way, that would love to see the international human rights framework go into the dustbin of history,” Donahoe says.
For all the good that AI can accomplish, she cautions, it can also be a tool to undermine rights as basic as freedom of speech and assembly.

The report calls on governments to make explicit commitments: first, to analyze the human rights risks of AI across all agencies and the private sector, and at every stage of development; second, to establish ways of reducing those risks; and third, to set out consequences and vehicles for remediation when rights are jeopardized. [ . . . ]