Guidelines prepared by the Future of Humanity Institute, University of Oxford.
Written by Peter Cihon, Research Affiliate, Center for the Governance of AI.
In light of the strengths and limitations of standards, this paper offers a series of recommendations. They are summarized below:
- Leading AI labs should build institutional capacity to understand and engage in standardization processes. This can be accomplished through in-house development or through partnerships with third-party organizations.
- AI researchers should engage in ongoing standardization processes. The Partnership on AI and other qualifying organizations should consider becoming liaisons with standards committees in order to contribute to and track developments. Particular standards may benefit from being developed independently at first and then transferred to an international standards body under existing procedures.
- Further research is needed on AI standards from both technical and institutional perspectives. Technical standards desiderata can inform new standardization efforts, and institutional strategies can chart paths for standards to spread globally in practice.
- Standards should be used as a tool to spread a culture of safety and responsibility among AI developers. This can be achieved both inside individual organizations and within the broader AI community.
[ . . . ]