News  |  May 21, 2021

Towards Human-Centered Explainable AI: the journey so far

News article by Upol Ehsan. Published online in The Gradient.

Excerpt:

“So, the machine has high accuracy and explains its decisions, but we still don’t have engagement with our users?” I asked, seeking clarification on a rather perplexing situation. Aware of my prior work in Explainable AI (XAI) around rationale generation, a prominent tech company had just hired me to solve a unique problem. They had invested significant resources to build an AI-powered cybersecurity system that aimed to help analysts manage firewall configurations, especially the “bloat” that happens when people forget to close open ports. Over time, these open ports accumulate and create security vulnerabilities. Not only did this system have commendable accuracy, it also tried to explain its decisions via technical (or algorithmic) transparency. But there was little to no traction amongst its users. The question was—why?

“Yeah, that’s the confusing part, isn’t it? I think we just need better models… we need to build better rationales [natural language explanations]… guess that’s why we brought you in!” the team’s director chuckled as we continued the meeting.

Even though I was brought in to solve the problem, there was an underlying assumption that the solution to this AI problem was to “build better AI”. This dominant techno-centric assumption stems from a mythology around Explainable AI. For our purposes, we will call this the “algorithm-centered Explainable AI” myth. It goes something like this: If you can just open the black box, everything else will be fine.

In this article, I will challenge this myth and offer an alternative version of XAI, one that is sociotechnically informed and human-centered. I will use my prior work as well as my experience with the aforementioned cybersecurity project to share the journey to the Human-Centered XAI perspective. This human-centered stance emerges from two key observations. [ . . . ]

For the academically inclined version of this piece, here’s the paper.