Guidelines  |  November 26, 2018

AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Report produced by the Scientific Committee of AI4People. This White Paper reports the findings of AI4People, an Atomium – EISMD initiative designed to lay the foundations for a “Good AI Society” through the creation of an ethical framework. 33 pages.

Excerpt:

The opportunities and risks of AI for Society

Establishing an ethical framework for AI in society requires an explanation of the opportunities and risks that the design and use of the technology presents. We identify four ways in which, at a high level, AI technology may have a positive impact on society, if it is designed and used appropriately. Each of these four opportunities has a corresponding risk, which may result from its overuse or misuse. There is also an overarching risk that AI might be underused, relative to its potential positive impact, creating an opportunity cost. An ethical framework for AI must be designed to maximise these opportunities and minimise the related risks.

A unified framework of principles for AI

Several multi-stakeholder groups have created statements of ethical principles to guide the development and adoption of AI. Rather than repeat the same process here, we instead present a comparative analysis of several of these sets of principles. Each principle expressed in each of the documents we analyse is encapsulated by one of five overarching principles. Four of these – beneficence, non-maleficence, autonomy, and justice – are established principles of medical ethics, but a fifth – explicability – is also required, to capture the novel ethical challenges posed by AI.

Twenty recommendations for a Good AI Society

We offer 20 concrete recommendations tailored to the European context which, if adopted, would facilitate the development and adoption of AI that maximises its opportunities, minimises its risks, and respects the core ethical principles identified. Each recommendation takes one of four forms: to assess, to develop, to incentivise, or to support good AI. These recommendations may in some cases be undertaken directly by national or supranational policy makers, and in others may be led by other stakeholders. Taken together with the opportunities, risks and ethical principles we identify, the recommendations constitute the final element of an ethical framework for a good AI society.

Authors

Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena.


AI4People is a multi-stakeholder forum, bringing together all actors interested in shaping the social impact of new applications of AI, including the European Commission, the European Parliament, civil society organisations, industry and the media. Launched in February 2018 with a three-year roadmap, the goal of AI4People is to create a common public space for laying out the founding principles, policies and practices on which to build a “good AI society”. For this to succeed, we need to agree on how best to nurture human dignity, foster human flourishing and take care of a better world. It is not just a matter of legal acceptability; it is really a matter of ethical preferability.