News  |  February 14, 2020

Emotion AI researchers say overblown claims give their work a bad name

News article by Angela Chen and Karen Hao.
Published in MIT Technology Review.

Excerpt:

A lack of government regulation isn’t just bad for consumers. It’s bad for the field, too.

Perhaps you’ve heard of AI conducting interviews. Or maybe you’ve been interviewed by one yourself. Companies like HireVue claim their software can analyze video interviews to figure out a candidate’s “employability score.” The algorithms don’t just evaluate facial expressions and body posture for appearance; they also tell employers whether the interviewee is tenacious or good at working on a team. These assessments could have a big effect on a candidate’s future. In the US and South Korea, where AI-assisted hiring has grown increasingly popular, career consultants now train new grads and job seekers on how to interview with an algorithm. This technology is also being deployed on kids in classrooms and has been used in studies to detect deception in courtroom videos.

But many of these promises are unsupported by scientific consensus. There are no strong, peer-reviewed studies proving that analyzing body posture or facial expressions can help pick the best workers or students (in part because companies are secretive about their methods). As a result, the hype around emotion recognition, which is projected to be a $25 billion market by 2023, has created a backlash from tech ethicists and activists who fear that the technology could raise the same kinds of discrimination problems as predictive-sentencing algorithms or the housing algorithms landlords use to decide whom to rent to. [ . . . ]