5. Kaleidoscope

The ethical challenges of artificial intelligence (AI)

min
Ricardo Baeza-Yates

Ricardo Baeza-Yates, professor in the UPF Department of Information and Communication Technologies and Director of Research at the Institute for Experiential AI at Northeastern University (USA)

Artificial intelligence, mainly in the form of deep learning, is everywhere. From our mobile phones to our cars, as well as many of our household appliances and all the computers around us. This trend, which some call exponential, accentuates or generates new social problems that, obviously, grow the same way. The first case to make the headlines was the 2016 ProPublica study showing that a system used to inform parole decisions in the US was racist. In 2018, a person lost her life: an Arizona woman was hit by a self-driving Uber car as she jaywalked her bicycle across the street at night, a situation not provided for in the training data. There are already more than a thousand known incidents of this kind, which is why at UPF we are working on mitigating these types of problems, for example, by reducing gender bias in staff selection.

Certainly, the first ethical challenge is to prevent discrimination against people, whether due to biases in the data (the most common case), biases stemming from an algorithm, or biases occurring in the feedback loop of the interaction of AI systems with their users, where people’s cognitive biases wreak havoc. The example reported by ProPublica or that of the more than 27 thousand vulnerable families wrongly accused of child welfare fraud that ended in the Dutch government’s resignation in January 2021 are related to data; while that of Deliveroo’s discrimination against less employed riders that same month in Italy was due to an algorithm. Interaction biases include exposure bias and popularity bias, which affect anyone who reads the news or shops online. In this case, misleading or outright false news exploits cognitive confirmation bias.

The second ethical challenge is to make sure that algorithms meet certain basic conditions, as we already do with food or medicine. This might have prevented the death in Arizona or absurd situations such as that of the town of Bitche, whose Facebook page was suspended for three weeks because an algorithm for checking offensive language in English was used in France.

The third ethical challenge is to stop the use of AI to predict personal characteristics without a clear scientific basis that, even if they do exist, can affect many people. The best example of this modern-day phrenology is the use of facial features to predict sexual preferences published in 2017 by a Stanford researcher.

A fourth ethical challenge is to rationalize the indiscriminate use of computational and energy resources to train algorithms that generate ethical problems, such as language models that do not really understand the semantics of the text and then discriminate against people, for example, if they are Muslim.

The final challenge, and perhaps the most important in the long run, is to prevent the inequality in society from continuing to grow. It has been said that AI is enjoyed by the rich and suffered by the poor; this is compounded by the fact that 40% of the world’s inhabitants do not have access to the Internet and 30% do not have a mobile phone (although they do retain their privacy!). One possible dystopia is a future world with two classes: AI-enhanced humans and everyone else. And we already know that when people are not considered equal, the problems are much more serious, leading to revolutions or even the extermination of one of the classes.