Tools  |  September 19, 2018

AI Fairness 360 Open Source Toolkit

Developed by IBM Research Trusted AI, this extensible open source toolkit helps you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It contains over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, and it is designed to translate algorithmic research from the lab into practice in domains as wide-ranging as finance, human capital management, healthcare, and education. IBM invites you to use it and improve it.
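
To give a sense of the workflow the toolkit supports, here is a minimal sketch using AIF360's Python API: it computes one group fairness metric (statistical parity difference) on a toy dataset and then applies Reweighing, one of the toolkit's pre-processing mitigation algorithms. The data, column names, and group definitions are invented for illustration only.

```python
# Minimal AIF360 sketch: measure a group fairness metric, then mitigate.
# The toy data below is illustrative, not from a real study.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the outcome (1 = favorable, e.g., hired).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.5, 0.3],
    'label': [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=['label'], protected_attribute_names=['sex'],
    favorable_label=1.0, unfavorable_label=0.0)

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Statistical parity difference:
# P(favorable | unprivileged) - P(favorable | privileged).
# 0 means parity; negative values indicate bias against the unprivileged group.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('before mitigation:', metric.statistical_parity_difference())

# Reweighing assigns instance weights so that the favorable outcome is
# statistically independent of the protected attribute in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('after mitigation:', metric_after.statistical_parity_difference())
```

On this toy data, the metric moves from -0.5 (a large disparity) toward 0 after reweighing; in practice you would compute several of the toolkit's metrics, since different fairness definitions can disagree.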

AI Fairness and Explainability

A complementary approach to fairness metrics for inspecting model biases is to add transparency to the AI system through explainability. By directly exposing how the model makes its predictions, explanations can help people examine, identify, and ultimately correct bias and discrimination in machine learning models. When the model is unbiased, for example after applying the bias mitigation algorithms provided in this toolkit, effective explanation can assure people of the model's fairness and foster trust.

Research shows that people need a diverse set of explanation capabilities to fully scrutinize model biases. For example, some may want to inspect whether there is discrimination in the overall logic of the model, while others may want to ensure that they are not being treated unfairly by comparing the model's decision for them with its decisions for other individuals. The sketch below illustrates the difference between these global and local views.
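
As a conceptual illustration (this is not the AI Explainability 360 API), the following sketch contrasts the two styles on a scikit-learn model: a global explanation of the model's overall logic via feature importances, and a local, individual-level check that compares one person's prediction with the predictions for similar individuals. All data and feature names are invented.

```python
# Two explanation styles: global (overall model logic) and local
# (how one individual's prediction compares to similar individuals).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ['income', 'tenure', 'age']
X = rng.normal(size=(200, 3))
# Synthetic outcome driven by income and tenure, not age.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: which features dominate the model's overall logic?
for name, imp in zip(feature_names, model.feature_importances_):
    print(f'{name}: {imp:.2f}')

# Local explanation: compare one individual's prediction with the
# predictions for their five nearest neighbors in feature space.
i = 0
dists = np.linalg.norm(X - X[i], axis=1)
neighbors = np.argsort(dists)[1:6]  # skip the individual themselves
print('individual prediction:', model.predict(X[i:i+1])[0])
print('neighbor predictions:', model.predict(X[neighbors]))
```

A marked mismatch between an individual's outcome and the outcomes of otherwise similar individuals is the kind of signal the second, comparison-based explanation style is meant to surface.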

To learn more about the effectiveness of different explanation capabilities, and which ones users prefer, for supporting fairness judgments of machine learning models, read a recent paper.

To learn more about AI Explainability and try state-of-the-art algorithms that provide a diverse set of explanation capabilities, visit IBM Research AI Explainability 360, an open source toolkit for interpretable machine learning.