Blog post by Daniel Leufer. Published by Access Now.
From October 2019 to July 2020, I was hosted by Access Now as a Mozilla Fellow. During this time, I worked on a project to develop resources that counter the hype, myths, and misconceptions surrounding “artificial intelligence.” The result of that research is a new website called AI Myths, which was launched during RightsCon 2020.
This website provides resources to debunk eight of the most harmful myths and misconceptions about artificial intelligence, from the idea that AI can solve any problem to the misguided belief that AI systems can be objective or unbiased. As AI systems are leveraged across domains — from detecting hate speech to allocating social welfare benefits — civil society organizations increasingly have to address the role of AI in their work. This often means having to combat hype and overselling on the part of companies pushing their products and governments looking for quick-fix solutions. The goal of this project is to help civil society organizations and others cut through the most common misconceptions, so they can understand how these systems work and ensure they don’t undermine people’s rights.
In this post, I reflect on how the project came about, explain how it connects to the work I did during my fellowship, and describe how I will carry it forward as I join the staff at Access Now.
Birth of a project: coordinating civil society’s work on AI and human rights
At RightsCon Tunis in 2019, I took part in a “Solve my Problem” session that brought together representatives from civil society, international institutions, governments, and companies. What connected everyone in the room was that we were all working to ensure that AI development and deployment respects human rights. [ . . . ]