News  |  February 2, 2019

This is how AI bias really happens—and why it’s so hard to fix

News article by Karen Hao.
Published by MIT Technology Review.


Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.

Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security, and how they may already be doing so in the criminal legal system.

But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.

How AI bias happens

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected, as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages. [ . . . ]