Report published by Access Now. Lindsey Andersen, lead author. 40 pages.
As artificial intelligence finds its way into ever more of our daily lives, its capacity to interfere with human rights grows more severe. With this in mind, and noting that the technology is still in its infancy, Access Now conducted this preliminary study to scope the range of human rights issues that artificial intelligence may raise today or in the near future.
Many of the issues that arise in examinations of this area are not new, but they are greatly exacerbated by the scale, proliferation, and real-life impact that artificial intelligence facilitates. Because of this, the potential of artificial intelligence to both help and harm people is much greater than that of the technologies that came before it. While we have already seen some of these consequences, the impacts will only continue to grow in severity and scope. However, by starting now to examine what safeguards and structures are necessary to address problems and abuses, the worst harms—including those that disproportionately impact marginalized people—can be prevented or mitigated.
There are several lenses through which experts examine artificial intelligence. Applying international human rights law, with its well-developed standards and institutions, can contribute to the conversations already happening, providing a universal vocabulary and established forums for addressing power differentials.
Additionally, human rights laws contribute a framework for solutions, which we provide here in the form of recommendations. Our recommendations fall within four general categories: data protection rules to protect rights in the data sets used to develop and feed artificial intelligence systems; special safeguards for government uses of artificial intelligence; safeguards for private sector uses of artificial intelligence systems; and investment in more research to continue to examine the future of artificial intelligence and its potential interferences with human rights.
Our hope is that this report provides a jumping-off point for further conversations and research in this developing space. We don’t yet know what artificial intelligence will mean for the future of society, but we can act now to build the tools we need to protect people from its most dangerous applications. We look forward to continuing to explore the issues raised by this report, including through work with our partners as well as key corporate and government institutions. [ . . . ]
Table of Contents
- Executive Summary
- How does bias play out in AI?
- What makes the risks of AI different?
- Helpful and harmful AI
- AI and human rights
- Why do human rights matter?
- How AI impacts human rights
- Robotics and AI
- Recommendations: How to address AI-related human-rights harms
- The role of comprehensive data protection laws
- AI-specific recommendations for government and the private sector
- The need for more research on future uses of AI
- Rebuttal: Transparency and Explainability will not kill AI innovation
Licensed under a Creative Commons Attribution 4.0 International License.