Guidelines | October 23, 2019

Mozilla’s Approach to Trustworthy Artificial Intelligence

Guidelines adopted by the Mozilla Foundation.

Excerpt:

Many people do not understand how AI regularly touches our lives, and feel powerless in the face of these systems. Mozilla is dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.

Our guiding principles:

  • Mozilla believes we need to ensure that the use of AI in consumer technology enriches the lives of human beings rather than harms them. We need to build more trustworthy AI.
  • For us, this means two things: human agency is a core part of how AI is built and integrated, and corporate accountability is real and enforced.
  • The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world that are working to make ‘trustworthy AI’ a reality. This is Mozilla’s approach.
  • Mozilla’s roots are as a community-driven organization that works with others. We are constantly looking for allies and collaborators to partner with on our trustworthy AI efforts.

What’s at stake for users around the world?

AI is playing a role in nearly everything these days — from directing our attention, to deciding who gets a mortgage, to solving complex human problems. Its impact on humanity will be profound. The stakes include:

Privacy: Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should let people decide how their data is used and what decisions are made with it.

Fairness: We’ve seen time and again that historical bias can show up in automated decision making. To effectively address discrimination, we need to look closely at the goals and data that fuel our AI.

Trust: Algorithms on sites like YouTube often push people towards extreme and misleading content. Overhauling these content recommendation systems could go a long way toward curbing misinformation.

Safety: Experts have raised the alarm that AI could increase security risks and cybercrime. Platform developers will need to create stronger measures to protect our data and personal security.

Transparency: Automated decisions can have huge personal impact, yet the reasons for decisions are often opaque. We need breakthroughs in explainability and transparency to protect users.

[ . . . ]