The FINDHR project for the responsible use of artificial intelligence technologies is underway

An interdisciplinary consortium of universities and companies, coordinated by UPF, seeks to promote the responsible use of artificial intelligence algorithms in companies’ search and recruitment processes.

12.12.2022

The presence of artificial intelligence (AI)-based technologies is increasing in all data-intensive processes. Such automation is not without errors, irresponsible uses, or ethical problems. One example of this is discrimination in recruitment processes.

The Web Science and Social Computing (WSSC) group of the UPF Department of Information and Communication Technologies (DTIC) is to coordinate an interdisciplinary consortium of universities and companies in the project FINDHR: Fairness and Intersectional Non-Discrimination in Human Recommendation. The consortium will study the technical, legal and ethical problems that arise when using AI and will show how to manage the risks of discrimination in applications that recommend people in some way. These include recruiting candidates for jobs, admitting students to universities, prioritizing grants, scholarships or other public benefits, and online marketplaces where service providers such as medical professionals, language tutors and freelancers sell their expertise.

“The project seeks to detect and mitigate discriminatory biases in the use of AI”, explains Carlos Castillo, project coordinator, ICREA research professor and director of UPF’s WSSC group. “The project is part of an area of study on discrimination and algorithmic fairness, and the European Union is very interested in its development”.

In 2021, the European Commission proposed regulating artificial intelligence by means of the AI Act, which defines AI broadly to encompass various computational methods for tasks that normally require human intelligence. The proposal establishes a framework for the governance of high-risk AI applications, which it classifies into two groups:

  • systems dealing with critical infrastructure, emergency response services, criminal evidence, judicial processes and migration documents
  • systems that make inferences to categorize, classify, or recommend people

Technologies that facilitate job searches are explicitly included in this second group as high-risk applications.

For more than 20 years, the EU has recognized that equal employment opportunity is a force for social cohesion, dignity and equality. Directive 2000/78/EC prohibits discrimination in the labour market on grounds of religion or belief, disability, age and sexual orientation. However, equal opportunity has not been fully achieved, as various forms of structural discrimination, both institutional and individual, persist in the workplace.

For example, women's participation in the labour force remains unequal: according to 2021 data from the European Institute for Gender Equality, the full-time equivalent (FTE) employment rate is 41% for women and 57% for men. Covid-19 has been a setback, as 2.2 million Europeans lost their jobs during the first wave of the pandemic. Similarly, it has been repeatedly demonstrated that ethnic minorities, people of African descent, women of colour, and LGBTQ+ people continue to be discriminated against.

“What we propose is that building less biased AI is not only a technological problem, but also a matter of law, ethics and how industry uses this technology”, Castillo continues.

Hence, the team has allied itself with partners who are experts in legislation and data protection, cross-cultural digital ethics, the auditing of digital services and technology regulation, as well as with representatives of European workers and two NGOs dedicated to combating discrimination against women and vulnerable populations.

“Good solutions have to be contextualized: they have to take into account the specific aspects of each industry, with its own processes, standards and routines. On this basis, we wish to propose technologies that can detect discrimination, and processes that are not discriminatory”, he concludes.

The FINDHR project will carry out various activities. One is to study how recruiters perceive the ordering of candidates. “This is a socio-technical system in which people and software collaborate, and bias in hiring is not going to end just because you have an algorithm that orders candidates differently; you have to take the entire system into account”, the researcher clarifies.

The project will build on the regulation and policy that the European Union already has in place, since addressing the risks of discrimination in AI requires processing sensitive data that is protected by the Union’s General Data Protection Regulation (GDPR). FINDHR will conduct a specific legal analysis of the tensions between data protection regulation and anti-discrimination regulation in Europe.

Another goal of the researchers is to build a database of synthetic curricula: because CVs contain highly personal data, there is a shortage of them available for research. For this reason, the team is running a CV donation campaign.

The project partners

One of the partners in the project is Adevinta Spain, one of the leading companies in the country’s technology sector. “At InfoJobs we strive every day to ensure that the selection processes carried out on the platform are not discriminatory”, comments Justine Devos, Data Director of Jobs at Adevinta Spain and the company’s representative in the project. “To improve our product, we develop algorithms using artificial intelligence, always ensuring that they are free of bias on grounds of gender, age or ethnicity”.

The other Spanish partner is Eticas Research & Consulting, which is dedicated to identifying algorithmic vulnerabilities and is a pioneer in performing algorithmic audits.

Funded by the Horizon Europe programme, FINDHR started work on 1 November 2022 and will run until October 2025. The consortium partners are:

  • Pompeu Fabra University (coordinator)
  • University of Amsterdam 
  • University of Pisa 
  • Max Planck Institute for Security and Privacy, Germany
  • AlgorithmWatch Switzerland
  • ETICAS Research & Consulting
  • Adevinta Spain - InfoJobs
  • Randstad Netherlands BV
  • Radboud University
  • Erasmus University Rotterdam
  • European Trade Union Confederation
  • Women in Development in Europe (WIDE+)
  • PRAKSIS Association

Progress on this project can be followed on Mastodon and Twitter.

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

SDG - Sustainable Development Goals:

08. Decent work and economic growth
10. Reduced inequalities