AI Ethics and Fairness in Link-Based Recommender Systems
This two-day short course will take place at DTIC-UPF on Thursday 10 from 17:00 to 19:30 (CET) and Friday 11 from 15:30 to 17:00 (CET).
The first lecture has three parts. In the first part, we cover five current specific challenges through examples: (1) discrimination (e.g., facial recognition, justice, the sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); (4) stupid models (e.g., Signal, minimal adversarial AI); and (5) indiscriminate use of computing resources (e.g., large language models). These examples reflect a personal bias but set the context for the second part, where we address four generic challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences; (3) regulation; and (4) our own cognitive biases. In the final part, we discuss what we can do to address these challenges in the near future.
The second lecture covers two topics: the link prediction problem and the disparate effects of recommender systems. First, we describe how link-level recommendation works, considering both scoring-based and supervised learning-based methods, as well as evaluation methodologies for these recommenders. Second, we study how recommender systems can have disparate effects. For instance, in the case of link-based recommenders, these effects become evident when some groups are homophilic (i.e., have a strong tendency to link among themselves), as link-based recommender systems may then concentrate recommendations within those groups. We cover various application scenarios on social media sites, from friend recommendations to what-to-watch-next recommendations.
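To make the scoring-based approach concrete, here is a minimal sketch (not the lecture's own code) of one classic scorer, common neighbors: every unlinked node pair is scored by how many neighbors the two nodes share, and the highest-scoring pairs become link recommendations. The toy graph below is an assumption for illustration only; note how, in a graph with a tightly linked cluster, the top-scored candidate links also fall inside or near that cluster, hinting at the homophily effect discussed above.

```python
from itertools import combinations

def common_neighbors_scores(adj):
    """Score every unlinked node pair by its number of common neighbors.

    adj: dict mapping each node to the set of its neighbors (undirected graph).
    Returns ((u, v), score) pairs with score > 0, sorted by descending score.
    """
    scores = []
    for u, v in combinations(adj, 2):
        if v in adj[u]:
            continue  # skip pairs that are already linked
        score = len(adj[u] & adj[v])
        if score > 0:
            scores.append(((u, v), score))
    return sorted(scores, key=lambda pair: -pair[1])

# Hypothetical toy friendship graph: a small cluster {a, b, c} bridged
# to a chain d-e through node c.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c", "e"},
    "e": {"d"},
}
print(common_neighbors_scores(graph))
```

Other neighborhood scorers (Adamic-Adar, Jaccard) follow the same pattern, differing only in how the shared-neighbor set is weighted.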
- Session 1:
  - slides: here
  - previous talk: Ethics in AI: A Challenging Task
- Session 2:
- Ricardo Baeza-Yates, Universitat Pompeu Fabra.
- Carlos Castillo, Universitat Pompeu Fabra.