Tools  |  April 17, 2018

Fairness Measures: Datasets and software for detecting algorithmic discrimination

The rising prevalence of automated decision-making processes increases the risk associated with models that can potentially discriminate against disadvantaged groups. The Fairness Measures Project contributes to the development of fairness-aware algorithms and systems by providing relevant datasets and software.

Here you will find a series of datasets we have collected and/or prepared, drawn from various fields and applications (e.g., finance, law, and human resources). We also provide common fairness definitions used in machine learning, as well as a few fairness-aware algorithms for both ranking and classification.

The Fairness Measures website was created by Meike Zehlike, Carlos Castillo, Francesco Bonchi, Mohamed Megahed, Lin Yang, Ricardo Baeza-Yates and Sara Hajian, while they were working at TU Berlin (Zehlike, Megahed, Yang), Eurecat (Castillo, Bonchi, Hajian), and NTENT (Baeza-Yates).

The website is currently maintained by Meike Zehlike and students at TU Berlin, and Carlos Castillo at UPF.

Related videos: http://www.francescobonchi.com/algorithmic_bias_tutorial.html