A tool that maps and categorizes approximately 500 AI ethics and governance stakeholders and actors. Its goals are both practical and artistic: to help the global community interested in AI ethics and governance discover new organizations, and to encourage a broader, more nuanced perspective on the AI ethics and governance landscape.
The map was developed as an art and research project by Şerife Wong in partnership with the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, with support from the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
For the purposes of this project, AI ethics and governance includes private and public actors working on: fairness, transparency, and accountability; governance, national frameworks, and labor disruption; societal impacts in issue areas such as privacy, criminal justice, and human and civil rights; AI safety, security, and control; and advocacy, collaboration, and democratization initiatives. A small number of actors – primarily artists – whose work demonstrates creative exploration in AI ethics are also included. “AI for good” companies or organizations advocating for the beneficial use of AI are not included unless they also work on harmful societal impacts (real and potential) of AI. Educational programs, such as those featuring classes on AI ethics, are not included; those educating underrepresented groups in AI are included if the organization’s mission statement centers on education as part of an ethical goal.
About the Map
The map was created on the network mapping platform Kumu and was edited and refined with feedback from over 20 individuals across a variety of sectors. It is shared publicly under the Creative Commons license CC BY-SA 4.0, which encourages collaboration and building upon the work. Kumu users may make a replica of the map and edit or add their own data for analytical studies. The data is also available as a spreadsheet.