10. Our alumni

‘Current artificial intelligence (AI) learning models act like “black boxes”. We do not know what goes on between the input of the data and the output of a prediction.’

Berta Benet and Javier Rando, alumni of the bachelor’s degree programme in Mathematical Engineering in Data Science at UPF and members of the start-up EXPAI


Detecting sexist, racist or LGBTI-phobic algorithms, explaining how AI works and boosting confidence in mathematical models: this is the revolutionary idea behind EXPAI, a start-up formed by Berta Benet and Javier Rando, alumni of the bachelor’s degree programme in Mathematical Engineering in Data Science at UPF, with the participation of Carles Soler, an adjunct lecturer in the Department of Information and Communication Technologies. It is the first start-up in Europe specializing in combating the dangers, lack of transparency and challenges of AI.

We spoke with them to learn more about their project and find out how they got where they are. EXPAI is one of the projects selected in the second call for the ‘In residence in the Tallers area’ programme. With the help of UPF, it also participated in 4YFN.

What is EXPAI? What do you want to solve with your start-up?

Artificial intelligence (AI) is the name we give to technology that replicates behaviours that, when performed by humans, are considered to demonstrate intelligence: understanding natural language, interpreting images, reading, learning, making decisions and so on. Companies are increasingly using these systems to expedite their processes and streamline their operations.

More and more, AI systems are replacing or supporting humans in sensitive decision-making, such as whether to grant a loan. The rapid adoption of this technology without prior regulation has resulted in numerous scandals. For example, Amazon implemented an automated recruiting tool that systematically passed over female candidates. In the US, an algorithm that helped judges decide whether to keep people awaiting trial in custody was found to discriminate against African Americans.

Current artificial intelligence (AI) learning models act like ‘black boxes’. We do not know what goes on within them between the input of the data and the output of a prediction.

Additionally, companies are reluctant to adopt these systems because they are difficult to understand. Whilst these might seem like very different problems, they have a common origin: current AI learning models act like ‘black boxes’. They are mathematically complex systems in which we cannot see what happens between the input of the data and the output of a prediction. This opacity has always been viewed as problematic.

Explainable artificial intelligence (XAI) is the field of AI that allows us to understand the reasoning that models use to reach their predictions. Making these decision-making processes transparent to developers and users is essential to ensure a fairer future and make businesses more efficient. That is what led us to found EXPAI.
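To give a flavour of what XAI techniques do in practice, here is a minimal Python sketch of one common method, permutation feature importance, using scikit-learn. This is an illustration under assumed, synthetic data, not EXPAI’s own tooling: shuffling one feature at a time and measuring how much the model’s accuracy drops reveals which inputs a ‘black box’ actually relies on.

```python
# Minimal sketch of an XAI technique: permutation feature importance.
# Illustrative only; the data and model below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "loan approval"-style data: 5 features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black box": an ensemble whose internals are hard to read.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {imp:.3f}")
```

A feature whose shuffling barely changes accuracy contributes little to the predictions; one that causes a large drop dominates them. That is exactly the kind of information an auditor needs in order to spot, say, a hiring model leaning on a gender-correlated variable.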