Article by Ben Zevenbergen, Allison Woodruff, and Patrick Gage Kelley of Google.
Published on arXiv.org.
Explainability is one of the key ethical concepts in the design of machine learning systems. However, attempts to operationalize this concept have so far tended to focus on new software for model interpretability or on guidelines with checklists. Existing tools and guidance rarely incentivize the designers of AI systems to think critically and strategically about the role of explanations in their systems. We present a set of case studies of a hypothetical machine learning-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.