Information and Communication Technologies (ICT) are continuously shaping the everyday life of billions of people. The impacts on public and private organizations and the products they offer are already tangible before our eyes, both in terms of benefits and risks. However, while enthusiasm about the positive effects of developing new technologies has always been at the center of attention, concerns about their impact on humans and society have been less present in the public and scientific debate. Recently, we have seen this trend change, thanks also to several initiatives by many institutions (FAT/ML and ACM FAT*, among others), which are encouraging the various communities to discuss AI-related ethical, legal and economic issues.

The aim of this group is to stimulate the debate on FATE in ICT within UPF, and in particular within the DTIC community. The rich diversity of research fields present in our department provides a heterogeneous space, while the FATE framework provides a common ground for debate, from which we imagine every participant can benefit.

FATE Seminars

Although the María de Maeztu program concluded in June 2020, the FATE (Fairness, Accountability, Transparency and Ethics) seminars at the Department of Information and Communication Technologies at UPF continue in 2021 and 2022.

2022

Trustworthy AI, Part 1: The limits of human supervision

May 12th 15:00 - 16:30h. Room 52.223 and online (Registration required)

Slides

Many technologies promise improvements in management and automated decision making, yet rest on flawed technical foundations. The European proposal for regulating Artificial Intelligence relies on human oversight, or supervision, to override algorithmic failures and mitigate their harms. Nevertheless, a large body of research in Human-Computer Interaction shows that the reality is much more complicated. In this seminar we will explore how humans interact with algorithms at different stages, and what the opportunities and limitations are for successfully developing systems with humans in the loop and other governance schemes.


Manuel Portela Charnejovsky. DTIC-UPF
Manuel Portela is a postdoctoral researcher at the Web Science and Social Computing research group at UPF-DTIC. He obtained his PhD in GeoInformatics as a Joint European Degree, part of the H2020-MSCA GEO-C project, at Universitat Jaume I (UJI) in Castellón, Spain. Manuel is interested in the impact of technology on society, including topics such as artificial intelligence, citizen science, and smart cities. Currently, he researches the development of explainable and participatory approaches to Artificial Intelligence. In parallel, he co-directs the project Algorithmic-societies.org, which seeks to engage society in the development of AI.

 

May 26th 15:00 - 16:30h. Room 51.100 and online (Registration required)

Trustworthy AI, Part 2: Auditing algorithms

Slides
We consider algorithms as socio-technical systems because their creation, implementation and supervision depend on a network of actors and are constrained by cultural and social contexts. In the EU, the approach to Trustworthy AI encompasses a set of expectations about these systems: accountability, explainability, interpretability, transparency and oversight. However, from a socio-technical perspective there are several limitations that may prevent compliance with these principles. We will explore the main concepts and requirements regarding discrimination and bias in AI from this high-level point of view, present diverse toolkits for algorithmic auditing, and discuss the main concepts of so-called Trustworthy AI.
 
Manuel Portela Charnejovsky. DTIC-UPF
 

 
Registration

2021

----

Online free seminar (registration required).

Link to access the talk: https://meet.google.com/gxk-qnfu-eaq

The social construction of algorithms: A dilemma in fairness, accountability and transparency. Manuel Portela Charnejovsky. DTIC-UPF
Time: March 25th, 2021 03:30 PM Barcelona

Slides

Recording

Abstract: It is said that no algorithm can be fair to everyone. Rather than mere technical tools, algorithms should be considered socio-technical assemblages. This view opens the door to analysing algorithms as creations born from their mutual relationship with those who come into contact with them, directly or indirectly. In this seminar we will use concepts from Science and Technology Studies to discuss the controversies in current debates on fairness, accountability and transparency in Machine Learning and Artificial Intelligence for automating social life. The goal is to understand how to design and implement algorithms while taking social dilemmas and problems into account.

Manuel Portela Charnejovsky. DTIC-UPF

Registration required

 

----

Online free seminar.

Ethics in AI: A Challenging Task
Ricardo Baeza-Yates

Time: Feb 12, 2021 04:00 PM Madrid

Slides

Recording (for private use only)

Abstract:

In the first part we cover current specific challenges: (1) discrimination (e.g., facial recognition, justice, the sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias, antitrust); and (4) the non-rational stock market (e.g., Signal and GameStop). These examples do have a personal bias, but they set the context for the second part, where we address three generic challenges: (1) cultural differences (e.g., Christian vs. Muslim); (2) legal issues (e.g., privacy, regulation); and (3) too many principles (e.g., principles vs. techniques).

Short bio:

Ricardo Baeza-Yates is Research Professor at the Institute for Experiential AI of Northeastern University. He is also a part-time Professor at Universitat Pompeu Fabra in Barcelona and at Universidad de Chile in Santiago. Before that, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, and his areas of expertise are web search and data mining, information retrieval, bias in AI, data science, and algorithms in general.

Seminar supported by the ACM Distinguished Speaker Program.

2020

FATE Seminars and Hackfest 2020

One of the top-tier venues where this debate is carried out is the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*, website), which this year takes place in Barcelona from January 27th to January 30th, 2020. On the sidelines of this event (and with no official affiliation to it), the DTIC-UPF Fairness, Accountability, and Transparency Seminars and Hackfest is conceived as an occasion to bring part of the discussion held outside the UPF community into the interdisciplinary context of the Department of Information and Communication Technologies (DTIC) of the Universitat Pompeu Fabra (UPF).

Program

13:00 - 15:00 DivinAI Hackfest: Presentation + hackfest + free lunch

15:00 - 15:50 Seminar by Asia J. Biega - Wanted and Unwanted Exposure: Designing Ethically and Socially Responsible Information Systems

15:50 - 16:10 Coffee break

16:10 - 17:00 Seminar by Solon Barocas - Privacy Dependencies

17:00 - 18:00 Science & Beers (FAT edition)

 

2019

In the first iteration (Spring/Summer 2019), we have invited five speakers who will present how the use of "intelligent systems" in different fields is already impacting our society.

Registration

Calendar

May 23rd, 15:30. Room 55.309. Automation of personal data and consumer law enforcement using AI (Francesca Lagioia)
May 31st, 15:30. Room 52.S29. Human behaviour and machine intelligence (Emilia Gomez)
June 6th, 15:00. Auditorium. Big Data, Machine Learning and Justice (Ricardo Baeza-Yates)
June 12th, 15:30. Room 52.217. Profiling and automated decision making under the General Data Protection Regulation (Antoni Rubí-Puig)
June 20th, 15:30. Room 52.S29. Impact of machine intelligence in healthcare (*) (Sergio Sánchez-Martínez)

(*) In addition, on July 5th BCN MedTech at DTIC-UPF organises, together with the EC's JRC, the full-day "Workshop on the impact of artificial intelligence in healthcare" (Details and registration)

In 2020, we will host our first FAT Seminars and Hackfest, with external invited speakers coming to Barcelona for the ACM FAT* Conference.

Videos

 

Automation of personal data and consumer law enforcement using AI

by Francesca Lagioia, postdoctoral research fellow at the Interdepartmental Centre for Research in the History, Philosophy, and Sociology of Law and in Computer Science and Law (CIRSFID) at the University of Bologna, and Research Associate at the European University Institute (EUI)

Slides

Abstract:
Although European consumer law and the EU General Data Protection Regulation (GDPR) are in place, and despite enforcers' competence for abstract control, the Terms of Service and Privacy Policies of online services still often fail to comply with regulations. Artificial intelligence, and in particular machine learning methods, can be used to automate the legal evaluation of both terms of service and privacy policies, in order to empower the civil society organisations representing the interests of consumers.
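As a rough illustration of the idea behind this line of work (and emphatically not the actual CLAUDETTE system cited below), unfair-clause detection can be framed as supervised text classification. The clauses and labels in this sketch are invented for the example.

```python
# Toy sketch: unfair-clause detection as supervised text classification.
# All clauses and labels below are invented for illustration; the real
# systems cited in the references are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "We may terminate your account at any time without notice.",
    "You can export your data at any time from the settings page.",
    "Any dispute shall be resolved exclusively by an arbitrator we choose.",
    "We will notify you by email before changing these terms.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair (toy annotations)

# TF-IDF features over word unigrams and bigrams, linear classifier on top
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(clauses, labels)

new_clause = "We reserve the right to delete your content without notice."
print(clf.predict([new_clause])[0])
```

A realistic pipeline would train on thousands of expert-annotated clauses and distinguish several categories of potential unfairness, but the supervised-classification framing is the same.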

References:
- Lippi, M., Pałka, P., Contissa, G., Lagioia, F., Micklitz, H. W., Sartor, G., & Torroni, P. (2019). CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service. Artificial Intelligence and Law, 27(2), 117–139. https://doi.org/10.1007/s10506-019-09243-2

- Contissa, G., Docter, K., Lagioia, F., Lippi, M., Micklitz, H. W., Palka, P., … Torroni, P. (2018). Automated processing of privacy policies under the EU general data protection regulation. Frontiers in Artificial Intelligence and Applications, 313, 51–60. https://doi.org/10.3233/978-1-61499-935-5-51

 

 

Human behaviour and machine intelligence

by Emilia Gomez, lead scientist of the HUMAINT project at the Centre for Advanced Studies, Joint Research Centre, European Commission, and head of the MIR (Music Information Research) lab of the Music Technology Group (MTG) at UPF.

Abstract:
AI systems are already embedded in our daily life, e.g. when finding our way in the city or listening to music. In this lecture we will provide some examples of the impact that AI, in particular data-driven machine learning algorithms and social robots, has on human behaviour. We will focus on four main use cases: ML algorithms for decision making, child-robot interaction, the impact of AI on the tasks we do at work, and music and well-being. We will explain the challenges of assessing this impact and of making sure those systems are developed "with" and "for" people's welfare.

References:
- Gomez Gutierrez, E. et al. (2018). Assessing the impact of machine intelligence on human behaviour: an interdisciplinary endeavour. Joint Research Centre Conference and Workshop Reports.  https://arxiv.org/abs/1806.03192

 

 

Big Data, Machine Learning and Justice

by Ricardo Baeza-Yates, CTO of NTENT, and Professor and founder of the Web Science and Social Computing Research Group at UPF

Slides

Abstract:
In this presentation we start with the main challenges of using big data and machine learning, including scalability and fairness. We exemplify these challenges by analyzing how machine learning has been applied in the justice system and how human biases are exposed by models that learn from human data. However, even though these models are not perfect, they are often better than humans because they are consistent in their decisions. We will finish with some bad and good practices that should be avoided or enforced, respectively, when training machine learning models.
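One concrete way the fairness concerns mentioned in the abstract are quantified is through group fairness metrics. The sketch below computes the demographic parity difference (the gap in positive-prediction rates between two groups) on invented risk-assessment predictions; the data and group labels are hypothetical.

```python
# Minimal illustration of one common group-fairness metric:
# demographic parity difference on toy risk-assessment predictions.
# Predictions and group memberships are invented for this example.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = predicted high risk
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()  # positive rate for group a: 0.75
rate_b = y_pred[group == "b"].mean()  # positive rate for group b: 0.25
parity_gap = abs(rate_a - rate_b)

print(parity_gap)  # 0.5: group a is flagged high-risk far more often
```

Demographic parity is only one of several competing fairness criteria (the Berk et al. reference below surveys others, such as error-rate balance), and in general they cannot all be satisfied at once.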

References:
- Berk et al. Fairness in Criminal Justice Risk Assessments: The State of the Art. Sociological Methods & Research, 1-42, 2018. https://arxiv.org/abs/1703.09207

- Kleinberg et al. Human Decisions and Machine Predictions, Quarterly Journal of Economics, 237-293, 2018. https://www.nber.org/papers/w23180

 

 

Profiling and automated decision making under the General Data Protection Regulation

by Antoni Rubí-Puig,  Associate Professor in Civil Law and Associate Director for Research at the Law Department at UPF (research group).

Slides and proposed use cases

Abstract:

The session aims at discussing the rules on profiling and automated individual decision-making under the General Data Protection Regulation and at assessing their adequacy from the standpoint of the Fairness, Accountability, Transparency and Ethics (FATE) framework. Our focus will be on profiling, that is, the automated processing of personal data with the goal of evaluating certain personal aspects relating to an individual, such as analyzing or predicting her performance at work, economic situation, health, personal preferences, interests, willingness to pay, reliability, behavior, location or movements.

Progress in technology and the use of big data analytics, artificial intelligence and machine learning have made it easier to build profiles. Profiling can work to the advantage of individuals and organizations, as it may lead to increased efficiency and cost savings. However, it also involves significant risks for individuals' rights and freedoms, not only privacy. The rules in the GDPR aim at finding a balance between those advantages and risks, but the final trade-off may not be the most adequate one from the standpoint of the FATE framework.

References:
- Edwards, L. & Veale, M. (2018), Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?, IEEE Security & Privacy (2018) 16(3), pp. 46-54, DOI: 10.1109/MSP.2018.2701152. Available at SSRN: https://ssrn.com/abstract=3052831 or http://dx.doi.org/10.2139/ssrn.3052831

- Wachter, S. & Mittelstadt, B. (2019), A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Columbia Business Law Review, 2019(1) (forthcoming). Available at SSRN: https://ssrn.com/abstract=3248829 (especially, pp. 4-9 and 120-130).

 

Impact of machine intelligence in healthcare

by Sergio Sánchez-Martínez, Postdoctoral Research Fellow at Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS)

Slides

Abstract:
This talk aims at advancing the scientific understanding of machine learning (ML) in healthcare and at studying the impact of ML algorithms on humans, focusing on clinical decision-making. The talk will be articulated around the essential building blocks of the high-level task of clinical decision-making, namely data acquisition, feature extraction, interpretation and decision support. For each of these blocks, the speaker will provide a concise review of state-of-the-art applications, followed by the challenges still to overcome and the potential benefits of their application in clinical practice. The talk will close with a discussion of the main problems to tackle when creating algorithms to analyze clinical data, as well as implementation challenges, such as which interaction paradigms we should use or which competences medical doctors should have.

References:
- Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7

- D’hooge, J., & Fraser, A. G. (2018). Learning About Machine Learning to Create a Self-Driving Echocardiographic Laboratory. Circulation, 138(16), 1636–1638. https://doi.org/10.1161/CIRCULATIONAHA.118.037094

- Doshi-Velez, F., & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning. Retrieved from http://arxiv.org/abs/1702.08608

Take part

We are aware that the complexity and interdisciplinarity of FATE topics are a real challenge for "short" sessions of about one hour. In addition, the lack of a common vocabulary and background might not facilitate the discussion. We hope that participants will arrive with an open mind, informed and ready to engage in constructive debate, because most of the aspects to be discussed cannot be resolved as "True/False" or "Right/Wrong". If you want to participate and have any suggestions about readings, materials, speakers to invite, etc., please contact [email protected]