This extensible open source toolkit, developed by IBM Research Trusted AI, helps you understand how machine learning models predict labels, using a variety of methods that span the AI application lifecycle. It contains eight state-of-the-art interpretable machine learning algorithms as well as explainability metrics, and it is designed to translate algorithmic research from the lab into practice in domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
AI Explainability and Fairness
By adding transparency throughout AI systems, explanations can help people examine, identify, and ultimately correct biases and discrimination in machine learning models. When a model is unbiased, effective explanations can assure people of the model's fairness and foster trust.
Research shows that people need a diverse set of explanation capabilities to fully scrutinize model biases, and the algorithms in this toolkit can support that need. For example, one may want to inspect whether there is discrimination in the overall logic of the model; Boolean Rule Column Generation and the Generalized Linear Rule Model can support such global understanding. Others may want to ensure that they are not being treated unfairly by comparing the model's decisions for them with its decisions for other individuals; the Contrastive Explanations Method (CEM) and ProtoDash can help perform such an inspection.
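To make the idea of a globally interpretable model concrete, the sketch below shows the kind of output Boolean Rule Column Generation produces: a Boolean rule in disjunctive normal form (an OR of ANDs) over binarized features, which a person can read end to end to audit the model's overall logic. This is a conceptual illustration, not the toolkit's API; the feature names, thresholds, and applicants are invented for the example.

```python
# Conceptual sketch of a DNF rule model, the globally interpretable
# form that Boolean Rule Column Generation learns. All names and
# thresholds below are illustrative assumptions, not toolkit output.

def binarize(applicant):
    """Turn raw features into the binary conditions the rules test."""
    return {
        "income>50k": applicant["income"] > 50_000,
        "debt<10k": applicant["debt"] < 10_000,
        "years_employed>=2": applicant["years_employed"] >= 2,
    }

# Each inner list is a conjunction (AND); the outer list is a
# disjunction (OR). The model predicts "approve" if ANY clause holds.
# Because every clause is human-readable, the whole decision logic
# can be inspected for discriminatory conditions.
RULES = [
    ["income>50k", "debt<10k"],
    ["years_employed>=2", "debt<10k"],
]

def predict(applicant):
    conditions = binarize(applicant)
    return any(all(conditions[c] for c in clause) for clause in RULES)

alice = {"income": 60_000, "debt": 5_000, "years_employed": 1}
bob = {"income": 30_000, "debt": 20_000, "years_employed": 5}

print(predict(alice))  # True: satisfies the first clause
print(predict(bob))    # False: no clause is fully satisfied
```

Because the entire model is the rule list itself, checking it for bias reduces to reading each clause, in contrast to local methods like CEM or ProtoDash, which explain one decision at a time.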
To learn more about AI fairness and techniques for addressing bias in AI systems, visit AI Fairness 360, an open source toolkit from IBM Research that helps you examine, report, and mitigate biases in machine learning models.