
What We Learned While Automating Bias Detection in AI Hiring Systems for Compliance with NYC Local Law 144

By Gemma Galdon-Clavell and Rubén González Sendino

As AI continues to shape critical decisions in hiring and beyond, ensuring these systems are fair and unbiased is more crucial than ever. At Eticas.ai, we are committed to this mission. Our latest paper, on New York City's Local Law 144, shares key insights from our work automating bias detection for AI hiring systems.

The significance of Local Law 144

Since July 2023, New York City has required independent bias audits for automated employment decision tools (AEDTs) used in hiring. These audits, a precondition for compliance, aim to detect discrimination and promote fairness. To support organizations navigating these new requirements, Eticas.ai developed ITACA_144, a streamlined compliance tool built on our broader ITACA_OS platform.

This initiative reflects our belief that compliance should not be a barrier but an opportunity to enhance AI systems, reduce error rates, and improve fairness and accuracy.

Key challenges and learnings

While working to automate compliance, we encountered several challenges that underscore the complexities of regulating AI systems effectively: 

Data requirements: The current framework lacks specificity, allowing companies to submit datasets that may not represent the relevant demographics or timeframes. We recommend clear guidelines to ensure datasets are timely, geographically appropriate, and comparable across audits; a sketch of such an admissibility check follows.
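As an illustration, the kind of guideline we have in mind could be enforced with a simple admissibility check at audit intake. This is a minimal sketch under our own assumptions: the field names (collected_on, region) and the one-year freshness window are hypothetical, not drawn from the law.

```python
from datetime import date, timedelta

def dataset_is_admissible(records: list[dict], max_age_days: int = 365,
                          required_region: str = "NYC") -> bool:
    """Reject audit datasets that are stale or drawn from the wrong jurisdiction.

    Assumes each record carries a 'collected_on' date and a 'region' label;
    both names, and the one-year window, are illustrative placeholders.
    """
    cutoff = date.today() - timedelta(days=max_age_days)
    return all(
        r["collected_on"] >= cutoff and r["region"] == required_region
        for r in records
    )
```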

Demographic inclusiveness: The law's 2% rule, which allows demographic categories representing less than 2% of the audited data to be excluded from the analysis, risks overlooking the very populations most vulnerable to bias (see the sketch below). Removing this threshold and refining category definitions would better protect marginalized communities.
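To make the effect concrete, here is a minimal sketch of how such a representation threshold removes small categories from an audit; the group names and counts are illustrative.

```python
def apply_two_percent_rule(category_counts: dict[str, int]) -> dict[str, int]:
    """Drop demographic categories below 2% of the sample, as the rule permits."""
    total = sum(category_counts.values())
    return {g: n for g, n in category_counts.items() if n / total >= 0.02}

kept = apply_two_percent_rule({"group_a": 950, "group_b": 35, "group_c": 15})
# group_c (1.5% of 1,000 applicants) disappears from the analysis entirely,
# even though such small groups are often the most exposed to bias.
```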

Metrics and fairness: Sole reliance on the impact ratio as a fairness metric is insufficient. While useful for measuring selection-rate disparities, it fails to capture nuanced biases or systemic inequities. Broader metrics, including counterfactual and intersectional analyses, are critical for a complete picture of AI fairness, as the example below illustrates.
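For reference, the impact ratio compares each category's selection rate with that of the most selected category, with values below roughly 0.8 falling short of the familiar four-fifths rule of thumb. The sketch below shows the calculation, with illustrative numbers, and why a single ratio can hide as much as it reveals.

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each category's (number selected, number of applicants) to its impact ratio."""
    rates = {g: sel / n for g, (sel, n) in selections.items()}
    top = max(rates.values())
    # Each category's selection rate is compared with the most-favored category's.
    return {g: rate / top for g, rate in rates.items()}

print(impact_ratios({"group_a": (45, 100), "group_b": (30, 100)}))
# group_a -> 1.0, group_b -> ~0.67: group_b falls below the 0.8 threshold,
# but the single number says nothing about why, or about intersectional
# subgroups within group_b that counterfactual analysis would surface.
```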

Bias assessments: Bias extends beyond the model itself; it is shaped by human and systemic factors before and after the AI's involvement. To address this, our ITACA_OS tool evaluates bias at every stage of the pipeline, providing actionable insights to improve outcomes.

Data reliability: Ensuring the integrity of the data used in audits is paramount. Independent verification processes, such as random sampling and external oversight, can deter dishonest reporting and build trust in audit results; a minimal sampling sketch follows.
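One lightweight form such verification could take is reproducible random sampling of audited records for independent re-checking. This is a sketch, not a prescribed procedure: the 5% sampling rate and fixed seed are arbitrary placeholders.

```python
import random

def sample_for_reverification(record_ids: list[str], rate: float = 0.05,
                              seed: int = 2024) -> list[str]:
    """Draw a reproducible random subset of records for independent re-checking."""
    rng = random.Random(seed)  # fixed seed keeps the sample itself auditable
    k = max(1, round(len(record_ids) * rate))
    return rng.sample(record_ids, k)
```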

The road ahead

NYC Local Law 144 is a groundbreaking step toward standardizing independent AI audits, serving as a precedent for similar legislation worldwide. However, to fully realize its potential, the law—and others inspired by it—must evolve to address its current limitations. 

At Eticas.ai, we continue to refine our tools and methods to ensure they not only meet regulatory requirements but also set a high bar for AI fairness and accountability. By sharing our insights, we aim to help policymakers, developers, and auditors create systems that are truly equitable and transparent.