News article by Mark Hammond (Bonsai).
Published on Medium.
How can we explain why machine learning systems make the predictions that they do?
Before we can answer this question of explainable AI — one that Will Knight recently described as “The Dark Secret at the Heart of AI” — we need to take a long hard look at what exactly we mean by ‘explaining’ things.
Expert systems vs modern machine learning
If we look back at the expert systems of the '80s, we had what we would consider complete explainability: an inference engine leveraged a knowledge base to make assertions, and it could explain each assertion using the chain of reasoning that led to it.
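To make this concrete, here is a minimal sketch of that kind of inference engine: a toy forward-chaining system with hypothetical rules and facts (invented for illustration, not drawn from any particular product). Because every derived fact records the premises that produced it, the system can answer "why?" with its full chain of reasoning.

```python
# Toy expert system: forward chaining over hand-written rules, with a
# recorded chain of reasoning for every derived conclusion.
# The rules and facts here are hypothetical examples.

RULES = [
    # (set of premises, conclusion)
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "should_rest"),
]

def infer(initial_facts):
    """Forward-chain over RULES, recording why each fact was derived."""
    facts = set(initial_facts)
    explanations = {}  # derived fact -> premises that produced it
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = premises
                changed = True
    return facts, explanations

def explain(fact, explanations):
    """Trace the full chain of reasoning that led to a fact."""
    if fact not in explanations:
        return [f"{fact} (given)"]
    steps = []
    for premise in sorted(explanations[fact]):
        steps.extend(explain(premise, explanations))
    steps.append(f"{fact} (from {', '.join(sorted(explanations[fact]))})")
    return steps

facts, why = infer({"has_fever", "has_cough"})
print("\n".join(explain("should_rest", why)))
```

The key property is that the "model" and its explanation are the same artifact: the knowledge base is human-authored, so the derivation is human-readable by construction.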
These systems were built entirely on subject matter expertise and, while powerful, were somewhat inflexible. Expert systems were largely an artificial intelligence endeavor, not a machine learning one.
Modern machine learning algorithms sit at the opposite end of the spectrum, yielding systems that work purely from observations and create their own representations of the world on which to base their predictions. But they have no ability to deliver explainable AI, no way to present those representations in a meaningful form to a human who asks, "why?" [ . . . ]
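The contrast can be sketched with a deliberately simple learned model. Below, a pure-Python logistic regression is trained on hypothetical toy data (the task, data, and hyperparameters are invented for illustration). Unlike the expert system's knowledge base, the only "representation" this model builds is a vector of real-valued weights: it can tell you *what* it predicts, but the numbers themselves are not a human-meaningful answer to "why?"

```python
# A learned model whose internal representation is opaque numbers,
# not a human-readable chain of reasoning. Toy data and settings are
# hypothetical, for illustration only.
import math
import random

random.seed(0)

# Toy observations: label is 1 when the first feature exceeds the second.
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if a > b else 0 for a, b in X]

# Logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    for (a, c), label in zip(X, y):
        p = 1 / (1 + math.exp(-(w[0] * a + w[1] * c + b)))
        grad = p - label
        w[0] -= lr * grad * a
        w[1] -= lr * grad * c
        b -= lr * grad

def predict(a, c):
    return 1 / (1 + math.exp(-(w[0] * a + w[1] * c + b)))

accuracy = sum(
    (predict(a, c) > 0.5) == bool(label) for (a, c), label in zip(X, y)
) / len(X)

# The model predicts well, but its "explanation" is just these numbers:
print("weights:", w, "bias:", b, "accuracy:", accuracy)
```

Even on this two-feature toy problem the weights only hint at the rule the data encodes; for a deep network with millions of parameters, reading meaning directly out of the learned representation becomes hopeless, which is exactly the explainability gap the article describes.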
About the Author
Mark Hammond is Microsoft's general manager for Business AI and the former CEO of Bonsai. He developed a platform that uses machine teaching to help deep reinforcement learning algorithms tackle real-world problems.