Conference Paper by Jess Whittlestone, Rune Nyrup, Anna Alexandrova and Stephen Cave. Presented at AIES ’19: the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.
Four Key Tensions
Given the wide range of tensions that may arise from applications of AI, now or in the future, no list of them is likely to be exhaustive. However, we believe the following four tensions will be particularly central to thinking about the ethical issues raised by the application of AI systems in society today:
- Using data to improve the quality and efficiency of services vs. respecting the privacy and autonomy of individuals
- Using algorithms to make decisions and predictions more accurate vs. ensuring fair and equal treatment
- Reaping the benefits of increased personalisation in the digital sphere vs. enhancing solidarity and citizenship
- Using automation to increase people’s convenience and empowerment vs. promoting self-actualisation and dignity
Identifying Further Tensions
The four tensions above represent areas where this kind of analysis is likely to be fruitful for AI ethics. Going forward, further such areas can and should be identified. To do so, it is helpful to ask a range of questions, including:
- Where AI is being used to serve a particular goal or value, or for ‘social benefit’ in general, what risks to other values are introduced?
- Where might uses of AI that benefit one group, or the population as a whole, have negative consequences for a specific subgroup? How do we balance the interests of different groups?
- Where might applications of AI that are beneficial in the near term introduce risks in the long term? How do we trade off the short- and long-term impacts on society?
- Where might future developments in AI either enhance or threaten important values, depending on the direction they take?