Guidelines  |  December 22, 2017

IEEE: Ethically Aligned Design – A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, v2

Guidelines prepared by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. EADv2 is the most comprehensive, crowd-sourced global treatise on the ethics of autonomous and intelligent systems available to date (December 2017). Created by more than 250 global cross-disciplinary thought leaders, EADv2 comprises more than one hundred pragmatic recommendations that technologists, policy makers, and academics can put to use right away.

The document’s purpose is to:

  • Advance a public discussion about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritize human well-being in a given cultural context.
  • Inspire the creation of Standards (IEEE P7000™ series and beyond) and associated certification programs.
  • Facilitate the emergence of national and global policies that align with these principles.

Outline of the Issues Presented in EAD v2

Terminology Update
The term artificial intelligence is not needed to conceptualize and discuss technologies and systems meant to extend human intelligence or to be used in robotics applications. For this reason, we use the term autonomous and intelligent systems (or A/IS) in the course of our work. We chose this phrase, which encapsulates multiple fields (machine learning, intelligent systems engineering, robotics, etc.), throughout Ethically Aligned Design, Version 2 to ensure the broadest possible application of ethical considerations in the design of these technologies.

I. General Principles

  1. Human Rights
    — How can we ensure that A/IS do not infringe upon human rights?
  2. Prioritizing Well-being
    — Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
  3. Accountability
    — How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?
  4. Transparency
    — How can we ensure that A/IS are transparent?
  5. Misuse and Awareness of It
    — How can we extend the benefits and minimize the risks of A/IS technology being misused?

II. Embedding Values into Autonomous Intelligent Systems

  1. Identifying norms for A/IS
    — Which norms should be identified?
    — The need for norm updating
    — A/IS will face norm conflicts and need methods to resolve them
  2. Implementing norms in A/IS
    — Many approaches to norm implementation are currently available, and new ones are being developed.
    — The need for transparency from implementation to deployment.
    — Failures will occur.
  3. Evaluating the Implementation of A/IS
    — Not all norms of a target community apply equally to human and artificial agents.
    — A/IS can have biases that disadvantage specific groups.
    — Challenges to evaluation by third parties.

III. Methodologies to Guide Ethical Research and Design

  1. Interdisciplinary Education and Research
    — Inadequate integration of ethics in A/IS-related degree programs.
    — The need for more constructive and sustained interdisciplinary collaborations to address ethical issues concerning autonomous and intelligent systems (A/IS).
    — The need to differentiate culturally distinctive values embedded in AI design.
  2. Corporate Practices and A/IS
    — Lack of value-based ethical culture and practices for industry.
    — Lack of values-aware leadership.
    — Lack of empowerment to raise ethical concerns.
    — Organizations should examine their cultures to determine how to flexibly implement value-based design.
    — Lack of ownership or responsibility from the tech community.
    — Need to include stakeholders for adequate ethical perspective on A/IS.
  3. Research Ethics for Development and Testing of A/IS Technologies
    — Institutional ethics committees are under-resourced to address the ethics of R&D in the A/IS fields.
  4. Lack of Transparency
    — Poor documentation hinders ethical design.
    — Inconsistent or lacking oversight for algorithms.
    — Lack of an independent review organization.
    — Use of black-box components.

IV. Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)

  1. Technical
    — As A/IS become more capable, as measured by the ability to perform with greater autonomy across a wider variety of domains, unanticipated or unintended behavior becomes increasingly dangerous.
    — Designing for safety may be much more difficult later in the design lifecycle than earlier.
  2. General Principles
    — Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly capable A/IS.
    — Future A/IS may have the capacity to impact the world on a scale not seen since the Industrial Revolution.

V. Personal Data and Individual Access Control

  1. Digital Personas
    — Individuals often do not understand that their digital personas and identities function differently than their real-life identities. This is a concern when individuals cannot access their personal data, and future iterations of their personas or identities are controlled not by them but by the creators of the A/IS they use.
    — How can an individual define and organize his/her personal data and identity in the algorithmic era?
  2. Regional Jurisdiction
    — Country-wide, regional, or local legislation may conflict with an individual’s values or restrict their access to and control of their personal data.
  3. Agency and Control
    — To understand the role of agency and control within A/IS, it is critical to have a definition and scope of personally identifiable information (PII).
    — What is the definition of control regarding personal data, and how can it be meaningfully expressed?
  4. Transparency and Access
    — It is often difficult for users to determine what information a service provider or A/IS application collects about them, and when that collection occurs (at installation, during usage, even when not in use, after deletion). It is also difficult for users to correct, amend, or manage this information.
    — How do we create privacy impact assessments related to A/IS?
    — How can AI interact with government authorities to facilitate law enforcement and intelligence collection while respecting rule of law and transparency for users?
  5. Symmetry and Consent
    — Could a person have a personalized privacy AI or algorithmic agent or guardian?
    — Consent is vital to information exchange and innovation in the algorithmic age. How can we redefine consent regarding personal data so it respects individual autonomy and dignity?
    — Data that is shared easily or haphazardly via A/IS can be used to make inferences that an individual may not wish to share.
    — Many A/IS will collect data from individuals with whom they have no direct relationship or interaction. How can meaningful consent be provided in these situations?
    — How do we make better user experience and consent education standard for consumers so they can express meaningful consent?
    — In most corporate settings, employees have not given clear consent to how their personal information (including health and other data) is used by employers. Given the power differential between employees and employers, this is an area in need of clear best practices.
    — People may be losing their ability to understand what kind of processing A/IS perform on their private data, and thus may be becoming unable to meaningfully consent to online terms. Elderly and mentally impaired adults are particularly vulnerable with respect to consent, with consequences for data privacy.

VI. Reframing Autonomous Weapons Systems

  1. Confusions about definitions regarding important concepts in artificial intelligence (AI), autonomous systems (AS), and autonomous weapons systems (AWS) stymie more substantive discussions about crucial issues.
  2. The addition of automated targeting and firing functions to an existing weapon system, or the integration of components with such functionality, or system upgrades that impact targeting and automated weapon release should be considered for review under Article 36 of Additional Protocol I of the Geneva Conventions.
  3. Engineering work should conform to individual and professional organization codes of ethics and conduct. However, existing codes of ethics may fail to properly address ethical responsibility for autonomous systems, or clarify ethical obligations of engineers with respect to AWS. Professional organizations should undertake reviews and possible revisions or extensions of their codes of ethics with respect to AWS.
  4. The development of AWS by states is likely to cause geopolitical instability and could lead to arms races.
  5. The automated reactions of an AWS could result in the initiation or escalation of conflicts outside of decisions by political and military leadership. AWS that engage with other AWS could escalate a conflict rapidly, before humans are able to intervene.
  6. There are multiple ways in which accountability for the actions of AWS can be compromised.
  7. AWS offer the potential for severe human rights abuses. Exclusion of human oversight from the battlespace can too easily lead to inadvertent violation of human rights. AWS could be used for deliberate violations of human rights.
  8. AWS could be used for covert, obfuscated, and non-attributable attacks.
  9. The development of AWS will lead to a complex and troubling landscape of proliferation and abuse.
  10. AWS could be deployed by domestic police forces and threaten lives and safety. AWS could also be deployed for private security. Such AWS may have very different design and safety requirements than military AWS.
  11. An automated weapons system might not be predictable (depending upon its design and operational use). Learning systems compound the problem of predictability.

VII. Economics/Humanitarian Issues

  1. Economics
    — A/IS should contribute to achieving the UN Sustainable Development Goals.
    — It is unclear how developing nations can best implement A/IS via existing resources.
    — The complexities of the effects of A/IS on employment are being neglected.
    — Automation is often viewed only within market contexts.
    — Technological change is happening too fast for existing methods of (re)training the workforce.
  2. Privacy and Safety
    — There is a lack of access and understanding regarding personal information.
  3. Education
    — How best to incorporate the “global dimension of engineering” approach in undergraduate and postgraduate education in A/IS.
  4. Equal Availability
    — AI and autonomous technologies are not equally available worldwide.

VIII. Law

  1. Legal Status of A/IS
    — What type of legal status (or other legal analytical framework) is appropriate for application to A/IS, given the legal issues raised by deployment of such technologies?
  2. Governmental Use of A/IS: Transparency and Individual Rights
    — International, national, and local governments are using A/IS. How can we ensure the A/IS that governments employ do not infringe on citizens’ rights?
  3. Legal Accountability for Harm Caused by A/IS
    — How can A/IS be designed to guarantee legal accountability for harms caused by these systems?
  4. Transparency, Accountability, and Verifiability in A/IS
    — How can we improve the accountability and verifiability in autonomous and intelligent systems?

IX. Affective Computing

  1. Systems Across Cultures
    — Should affective systems interact using the norms appropriate for verbal and nonverbal communication consistent with the societal norms where they are located?
    — Long-term interaction with affective artifacts lacking cultural sensitivity could alter the way people interact in society.
    — When affective systems are inserted across cultures, they could negatively affect the cultural, social, and religious values of the community where they are inserted.
  2. When Systems Become Intimate
    — Are moral and ethical boundaries crossed when the design of affective systems allows them to develop intimate relationships with their users?
    — Can and should a ban or strict regulations be placed on the development of sex robots for private use or in the sex industry?
  3. System Manipulation/Nudging/Deception
    — Should affective systems be designed to nudge people for the user’s personal benefit and/or for the benefit of someone else?
    — Governmental entities often use nudging strategies, for example to promote the performance of charitable acts. But the practice of nudging for the benefit of society, including through the use of affective systems, raises a range of ethical concerns.
    — A nudging system that does not fully understand the context in which it is operating may lead to unintended consequences.
    — When, if ever, and under which circumstances is deception performed by affective systems acceptable?
  4. Systems Supporting Human Potential (Flourishing)
    — Extensive use of artificial intelligence in society may make our organizations more brittle by reducing human autonomy within organizations, and by replacing creative, affective, empathetic components of management chains.
    — The increased access to personal information about other members of our society, facilitated by artificial intelligence, may alter the human affective experience fundamentally, potentially leading to a severe and possibly rapid loss in individual autonomy.
    — A/IS may negatively affect human psychological and emotional well-being in ways not otherwise foreseen.
  5. Systems With Their Own Emotions
    — Synthetic emotions may increase accessibility of AI, but may deceive humans into false identification with AI, leading to overinvestment of time, money, trust, and human emotion.

X. Policy Objectives

  1. Ensure that A/IS support, promote, and enable internationally recognized legal norms.
  2. Develop and make available to government, industry, and academia a workforce of well-qualified A/IS personnel.
  3. Support research and development needed to ensure continued leadership in A/IS.
  4. Provide effective regulation of A/IS to ensure public safety and responsibility while fostering a robust AI industry.
  5. Facilitate public understanding of the rewards and risks of A/IS.

XI. Classical Ethics in A/IS

  1. Definitions for Classical Ethics in Autonomous and Intelligent Systems Research
    — Assigning foundations for morality, autonomy, and intelligence.
    — Distinguishing between agents and patients.
    — There is a need for an accessible classical ethics vocabulary.
    — Presenting ethics to the creators of autonomous and intelligent systems.
    — Access to classical ethics by corporations and companies.
    — Impact of automated systems on the workplace.
  2. Classical Ethics From Globally Diverse Traditions
    — The monopoly on ethics by Western ethical traditions.
    — The application of classical Buddhist ethical traditions to AI design.
    — The application of Ubuntu ethical traditions to A/IS design.
    — The application of Shinto-influenced traditions to A/IS design.
  3. Classical Ethics for a Technical World
    — Maintaining human autonomy.
    — Applying goal-directed behavior (virtue ethics) to autonomous and intelligent systems.
    — A requirement for rule-based ethics in practical programming.

XII. Mixed Reality in Information and Communication Technology (ICT)

  1. Social Interactions
    — Within the realm of A/IS-enhanced mixed reality, how can we evolve, harness, and not eradicate the positive effects of serendipity?
    — What happens to cultural institutions in a mixed reality, AI-enabled world of illusion, where geography is largely eliminated, tribe-like entities and identities could spring up spontaneously, and the notion of identity morphs from physical certainty to virtuality?
    — With alternative realities at reach, we will have alternative ways of behaving individually and collectively, and of perceiving ourselves and the world around us. These new orientations regarding reality could enhance an already observed tendency toward social reclusiveness that detaches many from our common reality. Could such a situation lead to an individual opting out of “societal engagements”?
    — The way we experience (and define) physical reality on a daily basis will soon change.
    — We may never have to say goodbye to those who have graduated to a newer dimension (i.e., death).
    — Mixed reality changes the way we interact with society and can also lead to complete disengagement.
    — A/IS, artificial consciousness, and augmented/mixed reality have the potential to create a parallel set of social norms.
    — An MR/A/IS environment could fail to take into account the neurodiversity of the population.
  2. Mental Health
    — How can AI-enhanced mixed reality explore the connections between the physical and the psychological, the body and mind, for therapeutic and other purposes?
    — What are the risks when an AI-based mixed-reality system presents stimuli that a user can interact with in an embodied, experiential activity? Can such MR experiences influence or control the senses or the mind in a fashion that is detrimental and enduring?
    — What are the short- and long-term effects and implications of giving over one’s senses to software?
    — What are the implications for the ethical development and use of MR applications designed for mental health assessment and treatment, given the potential potency of this medium compared to traditional methodologies?
    — Mixed reality creates opportunities for generated experiences and high levels of user control that may lead certain individuals to choose virtual life over the physical world. What are the clinical implications?
  3. Education and Training
    — How can we protect worker rights and mental well-being with the onset of automation-oriented, immersive systems?
    — AR/VR/MR in training/operations can be an effective learning tool, but will alter workplace relationships and the nature of work in general.
    — How can we keep the safety and development of children and minors in mind?
    — Mixed reality will usher in a new phase of specialized job automation.
    — A combination of mixed reality and A/IS will inevitably replace many current jobs. How will governments adapt policy, and how will society change both expectations and the nature of education and training?
  4. The Arts
    — There is the possibility that commercial actors could create pervasive AR/VR environments that will be prioritized in users’ vision and experience.
    — There is the possibility that AR/VR realities could copy/emulate/hijack creative authorship and intellectual and creative property with regard to both human and/or AI-created works.
  5. Privacy, Access, and Control
    — Data collection and control issues within mixed realities combined with A/IS present multiple ethical and legal challenges that ought to be addressed before these realities pervade society.
    — Like other emerging technologies, AR/VR will force society to rethink notions of privacy in public and may require new laws or regulations regarding data ownership in these environments.
    — Users of AI-informed mixed-reality systems need to understand the known effects and consequences of using those systems in order to trust them.

XIII. Well-being

  1. An Introduction to Well-being Metrics
    — There is ample and robust science behind well-being metrics and their use by international and national institutions, yet many people in the A/IS field and corporate communities are unaware that well-being metrics exist, or which entities are using them.
  2. The Value of Well-being Metrics for A/IS
    — Many people in the A/IS field and corporate communities are not aware of the value well-being metrics offer.
    — By leveraging existing work in computational sustainability or using existing indicators to model unintended consequences of specific systems or applications, well-being could be better understood and increased by the A/IS community and society at large.
    — Well-being indicators provide an opportunity for modeling scenarios and impacts that could improve the ability of A/IS to frame specific societal benefits for their use.
  3. Adaptation of Well-being Metrics for A/IS
    — How can creators of A/IS incorporate measures of well-being into their systems?
    — A/IS technologies designed to replicate human tasks, behavior, or emotion have the potential to either increase or decrease well-being.
    — Human rights law is sometimes conflated with human well-being, raising a concern that a focus on human well-being could minimize the protection of inalienable human rights, or lower the standard of existing legal human rights guidelines for non-state actors.
    — A/IS represent opportunities for stewardship and restoration of natural systems and for securing human access to nature, but could instead be used to distract attention and divert innovation until the planetary ecological condition is beyond repair.
    — The well-being impacts of A/IS applied to human genomes are not well understood.

Source:
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,
Version 2. IEEE, 2017.
