Detecting and Preventing Bias in AI Hiring Systems: Research Project Presents New Tools
FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation), a Horizon Europe project coordinated at UPF and in which we are partners, is releasing new tools, approaches, and concrete recommendations aimed at tackling discrimination caused by AI hiring systems.
An increasing number of companies are using AI-assisted recruiting systems to preselect candidates or rank applicants. While these tools can save time, they also carry significant risks of discrimination.
Proven Risks of Discrimination in AI-Assisted Hiring
AI-assisted hiring systems promise time savings for HR professionals. However, real-world experiences show that these systems can reinforce existing patterns of discrimination—or create new ones—often without the awareness of those using them. The FINDHR project focuses especially on intersectional discrimination, where combinations of personal characteristics (such as gender, age, religion, origin, or sexual orientation) generate new or multiplied forms of discrimination.
The research demonstrates that discrimination in automated hiring is not a theoretical concern but a lived reality for many. Interviews conducted with affected individuals in seven European countries (Albania, Bulgaria, Germany, Greece, Italy, the Netherlands, and Serbia) revealed feelings of powerlessness and frustration, with applicants often receiving only automated rejections outside working hours, despite strong qualifications and repeated applications.
Solutions and Methods to Counter Algorithmic Discrimination
How can organizations reduce discrimination risks in AI hiring systems?
“Tackling algorithmic discrimination requires action across software development, HR, and policy. It’s not just a technical issue—social, cultural, and political contexts must also be considered.”
– Carlos Castillo, ICREA professor
FINDHR Tools and Solutions
The following FINDHR resources are freely available to everyone:
- Toolkits with concrete recommendations for software developers, HR professionals, and policymakers.
- Guidelines and methods for inclusive software design and for the responsible use, auditing, and monitoring of algorithmic recruiting systems:
  - The Equality Monitoring Protocol, a technical guide proposing how to monitor algorithmic recruitment systems after deployment, in line with European law.
  - An impact assessment and auditing framework aimed at product managers and algorithmic auditors, developed by the Eticas.ai team.
- Technical tools and software to reduce the risk of algorithmic discrimination in AI hiring systems.
- Training programs for professionals to raise awareness about the risks of algorithmic discrimination in hiring.
- Insights from affected individuals and a practical manual for jobseekers to navigate and highlight invisible barriers in AI-driven recruitment.
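As a loose illustration of the kind of post-deployment monitoring the Equality Monitoring Protocol addresses, the sketch below computes shortlisting rates per intersectional group and flags groups falling below the widely cited four-fifths adverse-impact threshold. The data, group labels, function names, and threshold are illustrative assumptions for this example, not part of FINDHR's actual tooling.

```python
from collections import defaultdict

FOUR_FIFTHS = 0.8  # commonly cited adverse-impact threshold (illustrative)

def selection_rates(records):
    """Share of shortlisted applicants per intersectional group.

    records: iterable of (group, shortlisted) pairs, where group is a
    tuple of protected attributes, e.g. ("female", "over_50").
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_adverse_impact(rates, threshold=FOUR_FIFTHS):
    """Flag groups whose rate is below threshold x the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Monitoring run on made-up data: (group, was the applicant shortlisted?)
records = [
    (("male", "under_50"), True), (("male", "under_50"), True),
    (("male", "under_50"), False), (("female", "over_50"), False),
    (("female", "over_50"), False), (("female", "over_50"), True),
]
rates = selection_rates(records)
flagged = flag_adverse_impact(rates)  # groups needing further review
```

Note that the groups here are intersectional by construction (gender x age), matching the project's emphasis that combinations of attributes can produce discrimination invisible when each attribute is monitored alone.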
The FINDHR project represents a comprehensive, interdisciplinary approach to making AI hiring systems fairer, more accountable, and more transparent. For more information, visit www.findhr.eu
Graphics: Robert Báez / Fábrica Memética for FINDHR / CC BY 4.0
