Projects  |  January 1, 2020

Soft-Law Governance of Artificial Intelligence

Project of the Center for Law, Science and Innovation at Arizona State University’s Sandra Day O’Connor College of Law.

The governance problem

The central question facing policy makers around the world is how to manage the concerns raised by artificial intelligence. While overly restrictive government regulation could stifle innovation and block AI's potential benefits, a governance vacuum can create regulatory uncertainty that discourages investment while leaving citizens vulnerable to potential harms. Ideally, governance of AI would effectively address its risks and maintain public confidence, while evolving with, rather than impeding, the technology's progress. Traditional legal and regulatory approaches, such as legislation and administrative agency rulemaking, take far too long to respond effectively to changes in the technology, with new rules growing obsolete even before they come into effect.

Soft Law: A new approach

An alternative approach that may hold promise is known as “soft law”—mechanisms that set forth substantive expectations but are not directly enforceable by government. Soft law offers some important advantages as a governance strategy for AI: it is flexible and adaptive, it is cooperative and inclusive, it incentivizes rather than punishes, and it can apply internationally. A number of soft law instruments for AI have already been proposed, including private standards, voluntary programs, professional guidelines, codes of conduct, principles, and other similar mechanisms.

The project in three stages – past, present, and future

In this project, leading experts and scholars in governance and in AI technology will research, analyze, and debate various soft law mechanisms as potential governance approaches for AI. This effort includes three stages of research and analysis, focusing respectively on the past, the present, and the future. In the first stage, focusing on the past, four leading scholars analyze the rich history of previous soft-law governance of technology. Their research provides a substantive analysis of the strengths and weaknesses, successes and failures, and lessons for AI from past soft-law approaches to the governance of biotechnology, nanotechnology, information and communication technologies, and environmental technologies.

For the second stage, focusing on the present, we created a publicly accessible database—an invaluable resource for research—in which we collect, compare, analyze, and organize over 600 soft law programs directed at AI. We identify key substantive themes and recommendations common to most of the proposals, and evaluate how the wording of the substantive provisions affects their interpretation, implementation, and compliance. The database provides a typology of the structural and procedural dimensions of each program, including the format of the governance instrument (e.g., standard, principle, code), the type of entity that proposed the program, the entities subject to the program, how it will be implemented, sources of funding and support, and any incentives or assurances of compliance, among other dimensions.

Finally, the third stage focuses on the future of soft law governance of AI. Nearly thirty contributing scholars provide in-depth analysis, recommendations, and guidance on the substantive content and procedural design of the best soft law approaches for AI going forward. At the completion of the three research stages, the project's overall draft findings and recommendations will be presented and debated at a special workshop convened to provide feedback and guidance on next steps. All data and published materials produced as part of this project will be made publicly available. With these freely available resources, researchers, practitioners, and policy makers will be able to make real progress on the central challenge of how to govern AI for the benefit of all.