Conference paper by Jessica Morley, Luciano Floridi, Libby Kinsey and Anat Elhalal.
Presented at AI for Social Good workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
Awareness of the potential ethical issues arising from the development and deployment of machine learning applications is growing rapidly and has resulted in a number of AI ethics codes and principles. However, there is a gap between aspiration and viability, and between principle and practice. To fill this gap, methodologies, techniques and processes (‘tools’) are being developed that seek to operationalise and automate adherence to, and monitoring of, good ethical practices when developing and deploying AI-driven products and services. Yet it remains unclear when these tools should be used and what they do (or do not) cover. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers ‘apply ethics’ at each stage of the AI development pipeline, and to signal to researchers where further work is needed. We found that effort is unevenly distributed across the applied AI ethics space, and that the stage of maturity (readiness for widespread use) of the identified tools is mostly low.