Conference paper by Debjani Saha, Candice Schumann, et al.
Presented at the 2020 AAAI/ACM Conference on AI, Ethics, and Society.
Bias in machine learning has manifested as injustice in several domains, with notable examples including gender bias in job-related ads, racial bias in evaluating names on resumes, and racial bias in predicting criminal recidivism. In response, research into algorithmic fairness has grown in both importance and volume over the past few years. Different metrics and approaches to algorithmic fairness have been proposed, many of which build on prior legal and philosophical concepts. The rapid expansion of this field makes it difficult even for professionals to keep up, let alone the general public. Furthermore, misunderstanding of fairness notions can have significant legal implications.
Computer scientists have largely focused on developing mathematical notions of fairness and incorporating them into fielded ML systems. A much smaller collection of studies has measured public perception of bias and (un)fairness in algorithmic decision-making. However, one major question underlying the study of ML fairness remains unanswered in the literature: Does the general public understand mathematical definitions of ML fairness and their behavior in ML applications? We take a first step towards answering this question by studying non-expert comprehension and perceptions of one popular definition of ML fairness, demographic parity. Specifically, we developed an online survey to address the following:
- Does a non-technical audience comprehend the definition and implications of demographic parity?
- Do demographics play a role in comprehension?
- How are comprehension and sentiment related?
- Does the application scenario affect comprehension?
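For readers unfamiliar with the definition studied here: demographic parity requires that a classifier's rate of positive predictions be equal across demographic groups, regardless of true outcomes. A minimal sketch of the idea (function names, data, and tolerance parameter are hypothetical, not from the paper):

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (label 1)."""
    return sum(predictions) / len(predictions)

def satisfies_demographic_parity(preds_by_group, tolerance=0.0):
    """True if all groups' positive-prediction rates agree within `tolerance`."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates) <= tolerance

# Hypothetical loan-approval predictions for two groups (1 = approve).
preds = {
    "group_a": [1, 0, 1, 1],  # approval rate 0.75
    "group_b": [1, 1, 0, 1],  # approval rate 0.75
}
print(satisfies_demographic_parity(preds))  # True: equal approval rates
```

Note that demographic parity constrains only prediction rates, not accuracy: a classifier can satisfy it while being wrong for every individual, which is one reason its implications can be unintuitive for non-experts.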