Report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. By Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal and Stephen Cave. 59 pages.
The aim of this report is to offer a broad roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. It is aimed at those involved in planning, funding, and pursuing research and policy work related to these technologies. We use the term ‘ADA-based technologies’ to capture a broad range of ethically and societally relevant technologies based on algorithms, data, and AI, recognising that these three concepts are not totally separable from one another and will often overlap.

A shared set of key concepts and concerns is emerging, with widespread agreement on some of the core issues (such as bias) and values (such as fairness) that an ethics of algorithms, data, and AI should focus on. Over the last two years, these have begun to be codified in various codes and sets of ‘principles’. Agreeing on these issues, values and high-level principles is an important step for ensuring that ADA-based technologies are developed and used for the benefit of society.
However, we see three main gaps in this existing work: (i) a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations; (ii) insufficient attention given to tensions between ideals and values; and (iii) insufficient evidence on both (a) key technological capabilities and impacts, and (b) the perspectives of different publics.
In order to address these problems, we recommend that future research should prioritise the following broad directions (more detailed recommendations can be found in section 6 of the report):
1. Uncovering and resolving the ambiguity inherent in commonly used terms (such as privacy, bias, and explainability), by:
- Analysing their different interpretations.
- Identifying how they are used in practice in different disciplines, sectors, publics, and cultures.
- Building consensus around their use, in ways that are culturally and ethically sensitive.
- Explicitly recognising key differences where consensus cannot easily be reached, and developing terminology to prevent people in different disciplines, sectors, publics, and cultures from talking past one another.
2. Identifying and resolving tensions between the ways technology may both threaten and support different values, by:
- Exploring concrete instances of the following tensions central to current applications of ADA:
- Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.
- Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.
- Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.
- Using automation to make people’s lives more convenient versus promoting self-actualisation and dignity.
- Identifying further tensions by considering where:
- The costs and benefits of ADA-based technologies may be unequally distributed across groups, demarcated by gender, class, (dis)ability, or ethnicity.
- Short-term benefits of technology may come at the cost of longer-term values.
- ADA-based technologies may benefit individuals or groups but create problems at a collective level.
- Investigating different ways to resolve different kinds of tensions, distinguishing in particular between those tensions that reflect a fundamental conflict between values and those that are either illusory or permit practical solutions.
3. Building a more rigorous evidence base for discussion of ethical and societal issues, by:
- Drawing on a deeper understanding of what is technologically possible, in order to assess the risks and opportunities of ADA for society, and to think more clearly about trade-offs between values.
- Establishing a stronger evidence base on the current use and impacts of ADA-based technologies in different sectors and on different groups – particularly those that might be disadvantaged, underrepresented in relevant sectors (such as women and people of colour), or vulnerable (such as children or older people) – and thinking more concretely about where and how tensions between values are most likely to arise and how they can be resolved.
- Building on existing public engagement work to understand the perspectives of different publics, especially those of marginalised groups, on important issues, in order to build consensus where possible. [ . . . ]