
CLAIM Barcelona 2024 - A Christmas Laboratory on AI and misinformation

24.01.2025

Horacio Saggion and Nataly Buslón took part in CLAIM Barcelona, A Christmas Laboratory on AI and misinformation. Luis Espinosa-Anke, a PhD alumnus of the program, also participated.

CLAIM Barcelona is a European Media and Information Fund (EMIF) funded event aimed at presenting and discussing advances in AI and NLP research for combating misinformation. It took place at Blanquerna - Universitat Ramon Llull on December 19th, 2024.

Misinformation, Cognitive Biases, and Responsible AI: Insights from a Social Computing Perspective

Nataly Buslón

This presentation analyzes, from a social computing perspective, various factors affecting the battle against misinformation and argues for approaching the phenomenon through a cross-disciplinary lens. The study explores cognitive biases, particularly the "nobody-fools-me perception": overconfidence in one's own ability to detect false information, often paired with a sense of immunity to misinformation. By examining users of Spanish digital publications, the research assesses how demographics (age, education, technological literacy) influence the ability to distinguish true from false content, especially on health topics such as COVID-19. It also presents an analysis that categorizes misinformation into different types of items (joke, exaggeration, decontextualization, and deception) and maps them onto a severity scale to assess their social impact. The findings highlight the need for AI literacy and ethical AI approaches to mitigate misinformation's effects, underscoring how cognitive biases and social contexts shape the spread of manipulated information across social media platforms such as WhatsApp.
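As a rough illustration of how such a categorization might be operationalized, the sketch below encodes the four item types on a hypothetical ordinal severity scale. The numeric values and the scoring function are assumptions for illustration only; the study's actual scale is not reproduced here.

```python
# A minimal sketch, assuming a simple ordinal severity scale for the four
# misinformation item types named in the abstract (values are hypothetical).
from enum import IntEnum

class Severity(IntEnum):
    JOKE = 1                 # humorous content taken at face value
    EXAGGERATION = 2         # a true core with inflated claims
    DECONTEXTUALIZATION = 3  # real content placed in a misleading context
    DECEPTION = 4            # fabricated content intended to mislead

def assess(items: list[str]) -> int:
    """Score a batch of labeled items by their worst-case social impact."""
    return int(max(Severity[item.upper()] for item in items))

print(assess(["joke", "decontextualization"]))  # -> 3
```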

On the Robustness of Credibility Assessment with Adversarial Examples

Horacio Saggion

This presentation provides an overview of our recent work on assessing the robustness of misinformation detection solutions implemented as text classification models. In this context, we evaluate adversarial example (AE) techniques: small modifications to the input text that preserve the original meaning while changing the classification outcome. Our work is framed within the BODEGA framework developed in the ERINIA project and implemented in the recent InCrediblAE shared task at CLEF-2024.
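To make the idea concrete, below is a minimal sketch of one classic AE strategy, a greedy word-substitution attack against an off-the-shelf text classifier. This is not the BODEGA framework's API nor the specific attacks evaluated in InCrediblAE; the model name and the synonym table are placeholder assumptions.

```python
# Hypothetical illustration of an adversarial-example attack on a text
# classifier: greedily swap single words and keep any swap that flips
# the predicted label while (ideally) preserving the text's meaning.
from transformers import pipeline

# Assumption: any classifier exposed through the pipeline interface works here.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def predicted_label(text: str) -> str:
    return classifier(text)[0]["label"]

def greedy_word_swap(text: str, substitutes: dict[str, list[str]]) -> str:
    """Try meaning-preserving word swaps until the predicted label changes."""
    original = predicted_label(text)
    words = text.split()
    for i, word in enumerate(words):
        for candidate in substitutes.get(word.lower(), []):
            trial = " ".join(words[:i] + [candidate] + words[i + 1:])
            if predicted_label(trial) != original:
                return trial  # adversarial example found
    return text  # attack failed; return the unmodified input

# Toy synonym table; a real attack would use embeddings or a thesaurus.
subs = {"terrible": ["dreadful", "awful"], "great": ["fine", "decent"]}
print(greedy_word_swap("The evidence in this article is terrible", subs))
```

Real attacks, such as those benchmarked in BODEGA, use far stronger substitution strategies and semantic-similarity constraints, but the objective is the same: change the classifier's decision without changing what the text says.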