Article by Chris Piech and Lisa Einstein.
Published in Scientific American.
Here’s how we can avert the dangers and maximize the benefits of this powerful but still emerging technology.
In a 2013 post, Facebook CEO Mark Zuckerberg sketched out a “rough plan” to provide free, basic internet to the world and thus spread opportunity and interconnection. However, the United Nations Human Rights Council reported that, in Myanmar, Facebook’s efforts to follow through on such aspirations accelerated hate speech, fomented division, and incited offline violence in the Rohingya genocide. Free, basic internet now serves as a warning of the complexities of technology’s impact on society. For Chris, an AI researcher in education, and Lisa, a science educator and student of international cyber policy, this example gives us pause: What unintended consequences could AI in education have?
Many look to AI-powered tools to address the need to scale high-quality education, and with good reason. A surge in educational content from online courses, expanded access to digital devices, and the contemporary renaissance in AI seem to provide the pieces necessary to deliver personalized learning at scale. However, technology has a poor track record of solving social problems without creating unintended harm. What negative effects can we predict, and how can we refine the objectives of AI researchers to account for such unintended consequences?
For decades the holy grail of AI for education has been the creation of an autonomous tutor: an algorithm that can monitor students’ progress, understand what they know and what motivates them, and provide an optimal, adaptive learning experience. With access to an autonomous tutor, students could learn from home, anywhere in the world. However, the autonomous tutors of 2020 look quite different from this ideal. Education with auto-tutors usually engages students with problems designed to be easy for the algorithm to interpret, as opposed to joyful for the learner. [ . . . ]
About the Authors
- Chris Piech is an assistant professor of computer science at Stanford University. He was raised in Kenya and Malaysia. His research uses machine learning to understand human learning.
- Lisa Einstein is a cyber policy master’s student at Stanford University. Previously, she was a physics educator with the Peace Corps’ Let Girls Learn program in Guinea, West Africa.