Horizon Europe project FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) launches the first part of the training program

FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) is a project funded by the EU framework program Horizon Europe and unites 13 partner organizations in developing anti-discrimination methods, algorithms, and training for algorithmic hiring.

17.03.2023


The FINDHR project deals with algorithmic hiring, that is, the use of artificial intelligence (AI) in the recruitment of job candidates. AI may be the hiring tool of the future, but it can carry over old patterns of discrimination. To raise awareness of discrimination risks in AI, the FINDHR consortium will prepare a specialized training course and a masterclass. The course is free, and you can register your interest in this form (see details at the end).

In this course, participants will learn the fundamentals of preventing, detecting, and mitigating discrimination risks in AI-based human recommendation. The syllabus opens with conceptualizations of power and discrimination, followed by the foundations and main concepts of trustworthy AI.

The first part of the course has already been completed and comprises three core sections covering 14 hours of theory on understanding, mitigating, and accounting for bias and discrimination. The first section, on understanding bias, describes sources of bias and discrimination and how they can manifest in data and decision models, and introduces an ethical and legal perspective on discrimination, focusing on employment and recruitment. The section on mitigation discusses algorithmic strategies for fairness in ranking: pre-processing (modifying training data), in-processing (modifying algorithms), and post-processing (modifying outputs). The last section presents methods for explaining AI decisions and model behavior, as well as methodologies for auditing algorithms for bias.
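To give a flavor of the post-processing strategy mentioned above, the sketch below greedily re-ranks a scored candidate list so that every top-k prefix contains a minimum share of candidates from a protected group. This is a simplified illustration, not a method developed by FINDHR; the `fair_rerank` function, the quota rule, and the candidate data are hypothetical assumptions for the example.

```python
import math

def fair_rerank(candidates, min_share=0.3):
    """Post-processing fairness sketch: re-rank candidates (dicts with a
    "score" float and a "protected" bool) so that every top-k prefix holds
    at least floor(min_share * k) protected candidates, when possible."""
    remaining = sorted(candidates, key=lambda c: -c["score"])
    ranked, protected_so_far = [], 0
    while remaining:
        k = len(ranked) + 1
        quota = math.floor(min_share * k)  # minimum protected candidates in top-k
        best = remaining[0]
        if not best["protected"] and protected_so_far < quota:
            # Picking the top non-protected candidate would break the quota:
            # promote the highest-scoring protected candidate instead.
            best = next((c for c in remaining if c["protected"]), best)
        remaining.remove(best)
        ranked.append(best)
        protected_so_far += best["protected"]
    return ranked
```

Pre-processing would instead rebalance the training data, and in-processing would build the fairness constraint into the ranking model's objective; this post-processing variant only touches the final output list.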

This specialized course (in English) is entirely free and will be offered with both face-to-face and online attendance options. The course may interest researchers, developers, product managers, quality assurance engineers, HR professionals, activists, workers' representatives, and the general public.

If you are interested in participating in this program, sign up here (https://docs.google.com/forms/d/e/1FAIpQLScg6odCmjIXmd0bsbsKQXvvIMGmQB_bMfuurGYoou5tjs8I3w/viewform).
