Mitigating Harm in AI and Data-driven Applications
Strengthening the research activities of the Web Science and Social Computing (WSSC) research group on detecting and preventing bias, both in data and in algorithms.

This project aims to strengthen the research activities of the Web Science and Social Computing (WSSC) research group, extending its capacity to collaborate with other groups and departments within UPF. Over the last four years, the WSSC group has developed new lines of research focused on detecting and preventing bias, both in data and in algorithms, which are attractive both scientifically and for the wellbeing of society. Two research topics converge on this vision:

Research Topic 1 (RT1): Rethinking ML Evaluation towards Responsible AI

The impact of AI is the focus of recent discussions in the media and the policy sector worldwide. In Europe, the regulation of AI is still under debate, and the Spanish Secretaría de Estado de Digitalización e Inteligencia Artificial (SEDIA) has seized the opportunity to set up a regulatory sandbox and create the first agency to supervise algorithmic harms. Today, machine learning engineers can use performance and fairness metrics, but they lack metrics to assess the impact of errors in human-machine collaboration. This RT seeks a deep understanding of how errors are defined and understood, their impact across the life cycle of algorithms, and their harmful consequences for society. It aims to generate and strengthen knowledge on Responsible AI and to work with public bodies to develop guidelines for assessing algorithms and procurement processes that avoid direct harm to individuals and collectives.
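To make the gap concrete, here is a minimal sketch (synthetic data, not the project's methodology) of why the aggregate metrics engineers commonly report can hide how errors are distributed across groups of people:

```python
# Minimal sketch with synthetic data: a single overall accuracy number
# can mask the fact that one group bears nearly all of the errors.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred, groups, group):
    # FPR within one group: true negatives wrongly flagged as positive.
    neg = [(t, p) for t, p, g in zip(y_true, y_pred, groups)
           if g == group and t == 0]
    if not neg:
        return 0.0
    return sum(p == 1 for _, p in neg) / len(neg)

# Synthetic labels and predictions: decent overall accuracy,
# but every false positive falls on group "B".
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1]
groups = ["A"] * 6 + ["B"] * 6

print(accuracy(y_true, y_pred))                          # ~0.83 overall
print(false_positive_rate(y_true, y_pred, groups, "A"))  # 0.0
print(false_positive_rate(y_true, y_pred, groups, "B"))  # 0.5
```

The overall score looks acceptable, yet group "B" has a 50% false-positive rate while group "A" has none, which is exactly the kind of disparate error impact this RT argues current evaluation practice fails to capture.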


Research Topic 2 (RT2): Governing Data for Fairer Public Policies

AI is one of the current trends in the smart cities and urban policy sector. A prominent example is the digital twin: data-driven models that replicate a city's infrastructure and dynamics. However, limited data availability makes such a concept dangerous, since unrepresentative data leads to unfair urban models, and the data involved can pose a risk to individual and collective privacy. RT2 provides mechanisms for enhancing spatio-temporal data interoperability through anonymization and aggregation algorithms. One of its outcomes is the DATALOG project, which establishes a Data Trust as a data governance model by engaging stakeholders interested in sharing and using urban data.
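As an illustration of the kind of aggregation-based safeguard mentioned above (an assumed, generic approach, not the DATALOG implementation), point records can be snapped onto a coarse spatio-temporal grid and sparse cells suppressed, in the spirit of k-anonymity:

```python
# Minimal sketch of spatio-temporal aggregation with suppression of sparse
# cells (a common k-anonymity-style safeguard). The grid size, time bucket,
# and threshold k are illustrative choices, not project parameters.

from collections import Counter

def aggregate(records, cell_deg=0.01, hour_bucket=1, k=5):
    """records: iterable of (lat, lon, hour) tuples.
    Returns counts per grid cell, keeping only cells with >= k records."""
    counts = Counter()
    for lat, lon, hour in records:
        cell = (round(lat // cell_deg),   # coarse latitude bin
                round(lon // cell_deg),   # coarse longitude bin
                hour // hour_bucket)      # coarse time bin
        counts[cell] += 1
    # Suppress cells below the threshold so small groups of
    # individuals cannot be singled out from the published counts.
    return {cell: n for cell, n in counts.items() if n >= k}

# Synthetic example: 6 records in one cell, 2 in another.
# With k=5 the sparse cell is suppressed from the output.
recs = [(41.3851, 2.1734, 9)] * 6 + [(41.40, 2.19, 9)] * 2
print(aggregate(recs, k=5))
```

Publishing only aggregated, thresholded counts is one concrete way spatio-temporal data can be made interoperable and shareable while reducing re-identification risk.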

The project will be supported by the PhD Fellowship program at the Department of Information and Communication Technologies at UPF.