UPF-DTIC FAT Workshop 2020

Fairness, Accountability, and Transparency (FAT) Seminars and Hackfest at DTIC-UPF

Research on Information and Communication Technology (ICT) has gained enormous attention in recent decades, and nowadays its applications constantly influence our daily lives. In this process, the consequences of introducing a broad range of new technologies in different fields are at the center of discussion of a growing community of practitioners, who analyze the impact of ICT, and more generally of Artificial Intelligence (AI), on the ethical, legal, social and economic aspects of our society.

 

One of the top-tier venues where this debate is carried out is the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), which this year will take place in Barcelona from January 27th to January 30th, 2020. On the sidelines of this event (and with no official affiliation to it), the DTIC-UPF Fairness, Accountability, and Transparency Seminars and Hackfest is conceived as an occasion to bring part of the discussion held outside the UPF community into the interdisciplinary context of the Department of Information and Communication Technologies (DTIC) of the Universitat Pompeu Fabra (UPF).

 

We invite researchers from all disciplinary backgrounds to participate in the event, which will take place at the Tànger Building on the UPF Poblenou campus, from 13:00 to 18:00 on January 31st, 2020.

 

Program

13:00 - 15:00 DivinAI Hackfest: Presentation + hackfest + free lunch

15:00 - 15:50 Seminar by Asia J. Biega - Wanted and Unwanted Exposure: Designing Ethically and Socially Responsible Information Systems

15:50 - 16:10 Coffee break

16:10 - 17:00 Seminar by Solon Barocas - Privacy Dependencies

17:00 - 18:00 Science & Beers (FAT edition)

 

Where and When

13:00 - 18:00, January 31st, 2020

Room 55.003, Tànger Building, Carrer de Roc Boronat, 138, 08018 Barcelona

 


Event Information

 


 

DivinAI Hackfest + lunch

Are you curious about how diverse top Artificial Intelligence conferences are? If you want to contribute to building and publishing diversity indexes of these conferences by gathering data about keynotes, authors and organisers, you are more than welcome to join us!

DivinAI (Diversity in Artificial Intelligence) is an initiative of the HUMAINT project at the Joint Research Centre (EC) and the DTIC at UPF, Barcelona. The goal of DivinAI is to research and develop a set of diversity indicators for Artificial Intelligence developments, with a special focus on gender balance, geographical representation, and the presence of academia vs. companies.

The results of this hackfest will be included in the project website: https://divinai.org. Please bring your own laptop!
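To give a flavor of the kind of indicator the hackfest will work towards, here is a minimal sketch of a gender-balance index computed from manually gathered speaker data. The field names and the index definition are illustrative assumptions for this example, not DivinAI's actual methodology.

```python
# Hypothetical sketch: a gender-balance index for a conference,
# computed from hand-gathered data about keynote speakers.
# The data schema and normalization are illustrative assumptions.

from collections import Counter

def gender_balance_index(people):
    """Return the minority-gender share among people with a known
    gender, rescaled so 0.0 = fully unbalanced, 1.0 = a 50/50 split."""
    counts = Counter(p["gender"] for p in people if p.get("gender"))
    if not counts:
        return None  # no usable data gathered yet
    total = sum(counts.values())
    minority = min(counts.values()) if len(counts) > 1 else 0
    # A 50/50 split gives minority/total = 0.5, which maps to 1.0.
    return (minority / total) / 0.5

# Example: 1 woman among 4 keynote speakers.
keynotes = [
    {"name": "Speaker A", "gender": "woman"},
    {"name": "Speaker B", "gender": "man"},
    {"name": "Speaker C", "gender": "man"},
    {"name": "Speaker D", "gender": "man"},
]
print(gender_balance_index(keynotes))  # 0.5
```

The same computation could be repeated per role (keynotes, authors, organisers) and per year to track how a conference evolves.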

 

 


 

Wanted and Unwanted Exposure: Designing Ethically and Socially Responsible Information Systems

Asia J. Biega (Microsoft Research)

Information systems have the potential to enhance or limit opportunities when ranking people and products in systems such as job portals or two-sided economy platforms. They also have the potential to violate privacy by accumulating queries into detailed searcher profiles or returning ranked subjects as answers to sensitive queries. This talk will cover various measures and mechanisms for mitigating the aforementioned threats to fairness and privacy. In particular, I’m going to focus on the dual nature of ranking exposure and argue that platforms need to develop fairness mechanisms in cases where exposure is wanted, as well as privacy awareness mechanisms in cases where exposure is unwanted.

 

Bio

Asia J. Biega is a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) Group at Microsoft Research Montréal. A common theme in her research is that of protecting user rights and well-being. She designs ethically and socially responsible information and social computing systems and studies how they interact with and influence their users.

 

Her background is in information retrieval, information extraction, and data mining. She completed her PhD summa cum laude at the Max Planck Institute for Informatics and Saarland University, advised by Gerhard Weikum and Krishna P. Gummadi. During that time, she was also a member of the Max Planck Institute for Software Systems. Her doctoral work focused on the issues of privacy and fairness in search systems. She holds a B.Sc. and an M.Sc. in Computer Science from the University of Wrocław, Poland. Outside of academia, she worked as an engineering intern in the privacy infrastructure team at Google and as a software developer in e-commerce.

 

Privacy Dependencies

Solon Barocas (Cornell University / Microsoft Research)

This seminar offers a comprehensive survey of privacy dependencies—the many ways that our privacy depends on the decisions and disclosures of other people. What we do and what we say can reveal as much about others as it does about ourselves, even when we don’t realize it or when we think we’re sharing information about ourselves alone.

 

We identify three bases upon which our privacy can depend: our social ties, our similarities to others, and our differences from others. In a tie-based dependency, an observer learns about one person by virtue of her social relationships with others—family, friends, or other associates. In a similarity-based dependency, inferences about our unrevealed attributes are drawn from our similarities to others for whom that attribute is known. And in difference-based dependencies, revelations about ourselves demonstrate how we are different from others—by showing, for example, how we “break the mold” of normal behavior or establishing how we rank relative to others with respect to some desirable attribute.

 

We elaborate how these dependencies operate, isolating the relevant mechanisms and providing concrete examples of each mechanism in practice, the values they implicate, and the legal and technical interventions that may be brought to bear on them. Our work adds to a growing chorus demonstrating that privacy is neither an individual choice nor an individual value—but it is the first to systematically demonstrate how different types of dependencies can raise very different normative concerns, implicate different areas of law, and create different challenges for regulation.

Reference: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3447384

 

Bio

Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science at Cornell University. He is also a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University.

 

His research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference.

 

He co-founded the annual workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and later established the ACM conference on Fairness, Accountability, and Transparency (FAT*).

 

He was previously a Postdoctoral Researcher at Microsoft Research as well as a Postdoctoral Research Associate at the Center for Information Technology Policy at Princeton University. He completed his doctorate at New York University, where he remains a Visiting Scholar at the Center for Urban Science + Progress.

 

 

 


 

Science & Beers FAT Edition

“Science & Beers” is an initiative started at the DTIC, aimed at connecting PhD students and researchers from different disciplinary backgrounds. In brief talks, participants will have the opportunity to present their ongoing work, followed by discussion and networking over some beers. In this edition, all researchers working on topics related to (but not limited to) Fairness, Accountability, and Transparency are invited. To take part, please contact [email protected]

Presenters

Andres Ferraro (MTG/UPF): The artists' side of music recommender systems

Jess Smith (University of Colorado, Boulder): How can we make algorithms "fair" when we all define "fairness" differently?

Juho Vaiste (University of Turku): Ethics of AI from the perspective of tech developers.

Walid Iguider (University of Cagliari): Fair Performance-based User Recommendation in eCoaching Systems

 

 

Acknowledgments

The event is supported by the Maria de Maeztu Strategic Research Program of the Department of Information and Communication Technologies (DTIC) at Universitat Pompeu Fabra. The organizers would like to thank the HUMAINT project at the Joint Research Centre (EC) and the Web Science and Social Computing Research Group (WSSC, DTIC-UPF).

 

Previous Activities

Fairness, accountability, transparency and ethics (FATE) Reading & Debate Group 2019:

https://www.upf.edu/web/mdm-dtic/fairness-accountability-transparency-and-ethics-fate-reading-debate-group