
Fairness-Measures.org: a new resource of data and code for algorithmic fairness

21.11.2017

Decisions that are partially or completely based on the analysis of large datasets are becoming more common every day. Data-driven decisions can bring multiple benefits, including increased efficiency and scale. Decisions made by algorithms and based on data also carry an implicit promise of "neutrality." However, this supposed algorithmic neutrality has been brought into question by both researchers and practitioners.

Algorithms are not really "neutral." They embody many design choices, and in the case of data-driven algorithms, these include decisions about which datasets to use and how to use them. One particular area of concern is datasets containing patterns of past and present discrimination against disadvantaged groups: for instance, historical hiring decisions embedding subtle or not-so-subtle discriminatory practices against women or racial minorities, to name just two main concerns. These datasets, when used to train new machine-learning-based algorithms, can deepen and perpetuate these disadvantages. There are potentially many sources of bias, including platform affordances, written and unwritten norms, differing demographics, and external events, among many others.

The study of algorithmic fairness can be understood as two interrelated efforts: first, detecting discriminatory situations and practices, and second, mitigating discrimination. Detection is a prerequisite for mitigation, and hence a number of methodologies and metrics have been proposed to find and measure discrimination. As these methodologies and metrics multiply, comparing results across works is becoming increasingly difficult.
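To make the detection side concrete, here is a minimal sketch of one widely used metric, statistical parity difference: the gap in positive-outcome rates between a protected group and everyone else. This is an illustrative example only, not code taken from our repository; the function name and toy data are our own.

    def statistical_parity_difference(outcomes, protected):
        """outcomes: 0/1 decisions; protected: 0/1 group membership flags."""
        rate_protected = (sum(y for y, p in zip(outcomes, protected) if p == 1)
                          / sum(protected))
        rate_rest = (sum(y for y, p in zip(outcomes, protected) if p == 0)
                     / (len(protected) - sum(protected)))
        return rate_protected - rate_rest  # 0.0 means parity

    # Toy example: 8 hiring decisions, 4 per group.
    decisions    = [1, 0, 0, 1, 1, 1, 0, 1]
    is_protected = [1, 1, 1, 1, 0, 0, 0, 0]
    print(statistical_parity_difference(decisions, is_protected))  # 0.5 - 0.75 = -0.25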

We have created a new website where we would like to collaborate with others on creating benchmarks for algorithmic fairness. To start, we have implemented a number of basic statistical measures in Python, and prepared several example datasets so that the same measurements can be extracted across all of them.
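As a hypothetical usage sketch, the same kind of measurement can be expressed in a few lines of pandas over a tabular dataset; the column names ("gender", "hired") and the values below are illustrative, not the actual schema of our example datasets.

    import pandas as pd

    # Illustrative toy data; the real example datasets use their own schemas.
    df = pd.DataFrame({
        "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
        "hired":  [1, 0, 0, 1, 1, 1, 0, 1],
    })

    # Positive-outcome rate per group, then the parity gap between groups.
    rates = df.groupby("gender")["hired"].mean()
    print(rates["F"] - rates["M"])  # -0.25, matching the sketch above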

We invite you to check the data and code available on the website, and to let us know what you think. We would love to hear your feedback: http://fairness-measures.org/.

Contact e-mail: Meike Zehlike, TU Berlin [email protected].

Meike Zehlike, Carlos Castillo, Francesco Bonchi, Ricardo Baeza-Yates, Sara Hajian, Mohamed Megahed (2017): Fairness Measures: Datasets and software for detecting algorithmic discrimination. http://fairness-measures.org/
