Curriculum  |  August 1, 2019

An Ethics of Artificial Intelligence Curriculum for Middle School Students

Curriculum developed by Blakeley H. Payne at the MIT Media Lab. 94 pages.

This document includes a set of activities, teacher guides, assessments, materials, and more to assist educators in teaching about the ethics of artificial intelligence. These activities were developed at the MIT Media Lab to meet a growing need for children to understand artificial intelligence, its impact on society, and how they might shape the future of AI. 

This curriculum was designed and tested for middle school students (approximately grades 5-8). Most activities are unplugged and require only the materials included in this document; unplugged modifications are suggested for the activities that require computer access.


Learning Objectives

  1. Understand the basic mechanics of artificial intelligence systems. 
    1. Recognize algorithms in the world and be able to give examples of computer algorithms and algorithms in everyday contexts (for example, baking a cake).
    2. Know three parts of an algorithm: input, steps to change input, output. 
    3. Know that artificial intelligence is a specific type of algorithm and has three specific parts: dataset, learning algorithm, and prediction. 
      1. Understand the problem of classification in the supervised machine learning context.
      2. Understand how the quantity of training data affects the accuracy and robustness of a supervised machine learning model. 
    4. Recognize AI systems in everyday life and be able to reason about the prediction an AI system makes and the potential datasets the AI system uses.
  2. Understand that all technical systems are socio-technical systems. Understand that socio-technical systems are not neutral sources of information and can serve political agendas.
    1. Understand the term “optimization” and recognize that humans decide the goals of the socio-technical systems they create.
    2. Reason about the goals of socio-technical systems in everyday life and distinguish advertised goals from true goals (for example, the YouTube recommendation algorithm aims to make profit for the company, while it is advertised as a way to entertain users). 
      1. Map features in existing socio-technical systems to identified goals.
    3. Know the term “algorithmic bias” in the classification context.
      1. Understand the effect training data has on the accuracy of a machine learning system.
      2. Recognize that humans have agency in curating training datasets.
      3. Understand how the composition of training data affects the outcome of a supervised machine learning system. 
  3. Recognize there are many stakeholders in a given socio-technical system and that the system can affect these stakeholders differentially.
    1. Identify relevant stakeholders in a socio-technical system.
    2. Justify why an individual stakeholder is concerned about the outcome of a socio-technical system. 
    3. Identify values an individual stakeholder has in a socio-technical system, e.g., explain what goals the system should have in order to meet the needs of a user. 
    4. Construct an ethical matrix around a socio-technical system.
  4. Apply both technical understanding of AI and knowledge of stakeholders in order to determine a just goal for a socio-technical system. 
    1. Analyze an ethical matrix and leverage analysis to consider new goals for a socio-technical system.
    2. Identify dataset(s) needed to train an AI system to achieve said goal.
    3. Design features that reflect the identified goal of the socio-technical system or reflect the stakeholder’s values. 
  5. Consider the impact of technology on the world.
    1. Reason about secondary and tertiary effects of a technology’s existence and the circumstances the technology creates for various stakeholders.
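The three-part structures named in objective 1 — input, steps, output for an ordinary algorithm; dataset, learning algorithm, prediction for an AI system — can be sketched in a few lines of Python. This is a hypothetical illustration for the reader, not part of the curriculum materials; the data and the threshold-based "learning" rule are invented for clarity.

```python
# A classic algorithm has three parts: input, steps, output.
def average(numbers):            # input: a list of numbers
    total = sum(numbers)         # steps that transform the input
    return total / len(numbers)  # output

# An AI system has three analogous parts: dataset, learning
# algorithm, and prediction. Here "learning" is just computing
# a decision threshold from labeled examples (made-up data).
def learn_threshold(dataset):
    # dataset: (height_cm, label) pairs; label True means "tall"
    tall = [v for v, lab in dataset if lab]
    short = [v for v, lab in dataset if not lab]
    return (min(tall) + max(short)) / 2  # learned decision boundary

def predict(threshold, value):
    return value >= threshold            # prediction for a new input

data = [(150, False), (155, False), (170, True), (175, True)]
t = learn_threshold(data)                # t == 162.5 for this data
print(predict(t, 172))                   # True
```

The point of the analogy is that the "steps" of an AI system are not hand-written by a programmer; they are derived from the dataset, which is why the dataset itself becomes a design decision.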
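Objective 2.3 — that the composition of training data affects a classifier's outcomes — can be demonstrated with a minimal nearest-neighbor classifier. This is an illustrative sketch with invented data, not an activity from the curriculum: the same new example is classified correctly by a balanced training set and incorrectly by a skewed one.

```python
from collections import Counter

def knn_predict(train, x, k=3):
    # train: list of (feature, label) pairs.
    # Return the majority label among the k nearest examples.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(lab for _, lab in nearest).most_common(1)[0][0]

# Balanced training set: both classes well represented.
balanced = [(1, "cat"), (2, "cat"), (3, "cat"),
            (7, "dog"), (8, "dog"), (9, "dog")]

# Skewed training set: only one "dog" example.
skewed = [(1, "cat"), (2, "cat"), (3, "cat"), (4, "cat"),
          (5, "cat"), (9, "dog")]

x = 8  # a new example that should be labeled "dog"
print(knn_predict(balanced, x))  # "dog"
print(knn_predict(skewed, x))    # "cat" -- outvoted by the overrepresented class
```

The classifier's logic is identical in both runs; only the humans' choice of training examples changed, which is the sense in which people "have agency in curating training datasets" (objective 2.3.2).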

News video by WGBH Boston. Runtime 4 minutes.

Related article:
Teaching Kids The Ethics Of Artificial Intelligence

Excerpt:

Research suggests kids growing up on Alexa and Google Home think that these devices are smarter than them. But a new kind of summer camp wants kids to know that artificial intelligence (AI) is far from perfect. Welcome to AI Ethics camp.

In a classroom on the second floor of the Massachusetts Institute of Technology’s Media Lab, two dozen middle school-aged kids wearing neon green t-shirts sat in clusters around tables. Standing at the front of the room was MIT researcher Blakeley H. Payne, who’s devoted her graduate studies in the Media Lab to the ethics of artificial intelligence.

“How many of you use YouTube?” Payne asked. The answer: just about everyone in this class. Some 81% of all parents with children age 11 or younger let their child watch videos on YouTube, with 34% indicating that they allow their child to do this regularly, according to a 2018 survey by the Pew Research Center. The more you watch, the more YouTube is able to target all kinds of content, including advertising. [ . . . ]