
New project: COURAGE, A Social Media Companion Safeguarding and Educating Students

 

In UPF news (select language in the menu to read the press release in Catalan, Spanish or English)

 

COURAGE, A Social Media Companion Safeguarding and Educating Students, Volkswagen Stiftung - ref. 95 566, PI: Davinia Hernández-Leo, 4/2019 – 3/2023.

 

Consortium: Sabrina Eimler (HS Ruhr West, Germany), Ulrich Hoppe (RIAS, Duisburg, Germany), Davinia Hernández-Leo (UPF, Barcelona), Udo Kruschwitz (Essex, UK), Davide Taibi (CNR-ITD, Palermo, Italy)

 

Total budget: 1,499,400 euros

 

UPF team: TIDE, with the participation of Vicenç Gómez and Anders Jonsson (AI-ML), and Carlos Scolari (D.Com).


This project aims to develop a Social Media User's Virtual Companion that educates and supports teenage school students facing the threats of social media, such as discrimination and bias, which can escalate to hate speech, bullying, fake news and other toxic content that strongly affects the real world. The Companion will raise awareness of potential threats in social media while still providing a satisfactory experience, through the use of novel gamification strategies and educative information selection algorithms.

 

Using hand-defined gamification strategies based on the concept of interactive counter-narrative, together with a learnt model of the diffusion of biased, hateful and other toxic content on social media, the Companion will bootstrap the learning of strategies for interaction and information selection. These strategies will be refined and personalized for each user and their social niche, with the dual aims of (A) improving and creating healthy social relationships between the user, their peers and the targets of bias, and (B) increasing their understanding of the social effects of toxic content in social media and of the user's role in its propagation. Experimentation will use data mined from real social media and re-enact it for testing under restricted and controllable conditions (e.g. school classes).
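The per-user refinement of interaction strategies could be sketched, under strong simplifying assumptions, as a simple multi-armed bandit that learns which strategy a given user responds to. The strategy names, the engagement reward, and the epsilon-greedy scheme below are illustrative assumptions, not the project's actual algorithms:

```python
import random

# Hypothetical strategy catalogue; the real Companion would draw on
# interactive counter-narratives and a learnt diffusion model.
STRATEGIES = ["counter_narrative", "quiz_challenge", "peer_sharing"]

class StrategySelector:
    """Epsilon-greedy selection of an interaction strategy for one user."""

    def __init__(self, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # running mean reward and pull count per strategy
        self.stats = {s: [0.0, 0] for s in STRATEGIES}

    def choose(self):
        # explore with probability epsilon, otherwise exploit the best mean
        if self.rng.random() < self.epsilon:
            return self.rng.choice(STRATEGIES)
        return max(STRATEGIES, key=lambda s: self.stats[s][0])

    def update(self, strategy, reward):
        mean, n = self.stats[strategy]
        n += 1
        # incremental mean update
        self.stats[strategy] = [mean + (reward - mean) / n, n]

selector = StrategySelector()
for _ in range(200):
    s = selector.choose()
    # simulated engagement reward: quizzes happen to work best for this user
    reward = {"counter_narrative": 0.3, "quiz_challenge": 0.8, "peer_sharing": 0.5}[s]
    selector.update(s, reward)

best = max(STRATEGIES, key=lambda s: selector.stats[s][0])
print(best)
```

In the real system the reward signal would come from observed user interaction and the modelled diffusion of toxic content, rather than from a fixed table.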

 

The Companion will set the use of social media inside a social game fostering a counter-narrative that directly challenges biased content and discrimination, highlights what is wrong with such messages and attitudes, challenges their assumptions, uncovers their limits and fallacies, and dismantles associated conspiracy and pseudo-science theories. Through this social game setup, the Companion will bridge the “us” versus “them” gap fostered by hate speech and other expressions of bias (e.g. gendered), bring forward the positive aspects of an open society, and focus more on “what we are for” and less on “what we are against”. In this game, users will not only be informed but will be asked to contribute actively and socially, creating and sharing content and material that fosters and supports the idea of an open, unbiased and tolerant society. The game will thus also offer the chance to build connections between users, who, when isolated, are more vulnerable to online toxic content.

 

Apart from exploiting the tools of social media to fight bias and hate speech, the Companion will also tackle the specific issues that allow these toxic phenomena to thrive in such virtual environments. Being immersed in (i) biased social content for an extended period of time, (ii) confirmation bias, and, more importantly, (iii) social forms of reinforcement are among the strongest causes of polarization and acceptance of toxic ideologies, which leverage the echo chambers created by the filter bubbles of social media personalization algorithms. The Companion will expose the mechanisms of information filtering and explicitly provide users with an evaluation of the bias, hatefulness, veracity, polarization and sensationalism of content, using state-of-the-art AI algorithms. By popping the filter bubbles often associated with toxic content, it will “renormalise” the view that social media offer and help users see an unbiased perspective. In addition, the Companion will use specific algorithms to detect covert, as well as overt, forms of hate speech, biased content and behaviours on social media. Such content typically relies on toxic associations to bypass the regulations of the social media platform. The Companion will actively counteract this kind of content, balancing it with opposite perspectives and proposing specifically themed challenges in the game.
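The per-dimension content evaluation could look roughly like the toy sketch below. The project specifies state-of-the-art AI classifiers; the lexicons, dimension names and scoring function here are invented placeholders that only illustrate the idea of exposing explicit scores to the user:

```python
# Toy lexicons, one per evaluation dimension (purely illustrative).
TOY_LEXICONS = {
    "hatefulness": {"vermin", "subhuman"},
    "sensationalism": {"shocking", "unbelievable", "exposed"},
    "bias": {"they always", "those people"},
}

def score_content(text):
    """Return a score in [0, 1] for each dimension of a piece of content."""
    lowered = text.lower()
    scores = {}
    for dimension, terms in TOY_LEXICONS.items():
        hits = sum(term in lowered for term in terms)
        # normalise by lexicon size so each score stays in [0, 1]
        scores[dimension] = hits / len(terms)
    return scores

post = "SHOCKING: those people exposed again"
print(score_content(post))
```

A production system would replace the lexicon lookup with trained classifiers for bias, hatefulness, veracity, polarization and sensationalism, but the interface of explicit per-dimension scores shown to the user would be similar.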

 

These mechanisms, together with innovative educational strategies, will help users gain a wider and more balanced perspective on social media content and overcome today's critical issues around toxic content, which feeds a circle of discrimination and hate crimes and seriously threatens social security and cohesion.

 

The project has three strategic aims:

- develop social media user interfaces that educate and support users of social media in adopting empathy as an ethical self-governance principle on the network, to counteract the diffusion of toxic content;

- develop, advance and integrate methods and tools to automatically recognise and counteract threats to the health of a connected society;

- understand and model what a healthy, hyper-connected society is, in order to provide support against future risks and detect novel threats.

24.04.2019

 
