AI’s Dark Secret: It’s Rolling Back Progress on Equality

AI systems all function the same way, by identifying patterns. The truth is that machine learning systems struggle with difference. 

An opinion article by Gemma Galdon-Clavell, Founder and CEO of Eticas.ai, for Context

My life has never fit a pattern. My grandparents were refugees, my mother had me when she was 14 years old, and I developed huge behavioural issues as a teenager.

I did not grow up in typical circumstances. But I had an opportunity to beat the odds.

If I’d been born into the age of artificial intelligence (AI), though, could I still have got to where I am today? I’m doubtful.

You see, while I never fit a pattern, AI is all about them.

AI systems, whether predictive or generative, all function in the same way: they process vast amounts of data, identify patterns, and aim to replicate them. The hidden truth of the world’s fastest-growing tech is that machine learning systems struggle with difference.

Pattern really is the key word here – something that happens repeatedly. In a dataset, that means an attribute or feature that is common. In life, it means something that is shared by a majority.

For example, a large language model such as OpenAI’s ChatGPT “learns” grammatical patterns and uses them to generate human-like sentences. AI hiring systems analyse patterns in the résumés of high-performing employees and seek similar traits in job applicants.

Similarly, AI image-screening tools used in medical diagnosis are trained on thousands of images depicting a specific condition, enabling them to detect comparable characteristics in new images. All of these systems identify and reproduce majority patterns.

So, if you write like most, work like most and fall ill like most, AI is your friend. But, if you’re in any way different from the majority patterns in the data and AI models, you become an outlier and, over time, you become invisible. Unhirable. Untreatable.
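To see how quickly pattern-matching turns difference into a penalty, consider a minimal, purely hypothetical sketch in Python. Nothing here reflects any real system: the features, numbers and scoring rule are all invented. The “screener” simply learns the average profile of past hires and ranks applicants by how closely they resemble it.

```python
# A toy illustration only - not any real hiring system.
# The "screener" learns the average profile of past hires and
# scores new applicants by similarity to that majority pattern.
import numpy as np

# Hypothetical features: [years_experience, degree_level, career_gap_years]
past_hires = np.array([
    [5, 2, 0],
    [6, 2, 0],
    [4, 2, 0],
    [5, 3, 0],
])

# The "pattern" is nothing more than the average of past hires.
pattern = past_hires.mean(axis=0)

def similarity_score(applicant):
    # Higher score = closer to the majority pattern.
    return -np.linalg.norm(applicant - pattern)

typical = np.array([5, 2, 0])    # fits the pattern
outlier = np.array([7, 3, 2])    # stronger on paper, but has a career gap

print(similarity_score(typical))  # about -0.25: "hirable"
print(similarity_score(outlier))  # about -2.93: penalised for difference
```

The outlier is the stronger candidate on paper, yet scores far worse, because the only thing the system measures is distance from the majority.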

Women of colour have known this for a long time and have exposed AI bias in image recognition and medical treatment. My own work has looked at how AI systems fail to properly identify and provide opportunities to women with Down syndrome, people living in low-income neighbourhoods, and women who are victims of domestic violence.

In light of this growing body of evidence, it’s surprising that we haven’t yet fully faced the fact that bias is not a bug in AI systems. It’s a feature.

Bias is the challenge

Without specific interventions meant to build fairness, identify and protect outliers and make AI systems accountable, this technology threatens to wipe out decades of progress towards non-discriminatory, inclusive, fair and democratic societies.

Almost every single effort to fight inequality in our world is currently being eroded by the AI systems used to decide who gets a job, a mortgage or a medical treatment; who gets access to higher education; who makes bail; who is fired; and who is accused of plagiarism.

And it could get worse: history tells us that the road to authoritarianism has been paved with discriminatory practices and the establishment of a majority “us” versus a minority “them”.

We are putting our trust in systems that have been built to identify majorities and replicate them at the expense of minorities. And that impacts everyone. Any of us can be a minority in specific contexts: you may have a majority skin colour but a minority combination of symptoms or medical history, and so still be invisible to the systems deciding who gets medical treatment. You may have the best job qualifications but that gap in a CV, or that uncommon name, makes you an outlier.
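The point about combinations deserves a concrete illustration. Below is a toy sketch with entirely made-up numbers: each attribute is common on its own, yet one combination of them is vanishingly rare, which is exactly the situation in which a pattern-matching system stops seeing you.

```python
# A toy sketch of "majority on each attribute, minority on the combination".
# All records and frequencies are invented for illustration.
from collections import Counter

# Synthetic patient records: (symptom_A, symptom_B), each common on its own.
records = [("fever", "cough")] * 45 + [("fever", "rash")] * 45 + \
          [("fatigue", "cough")] * 9 + [("fatigue", "rash")] * 1

marginals_a = Counter(a for a, _ in records)
marginals_b = Counter(b for _, b in records)
joints = Counter(records)

# "fatigue" appears in 10 of 100 records and "rash" in 46 of 100 -
# neither is rare alone - yet the pair ("fatigue", "rash") occurs
# only once, so a system trained on these records barely sees it.
print(marginals_a["fatigue"], marginals_b["rash"], joints[("fatigue", "rash")])
```

You can be in the majority on every individual count and still be a one-in-a-hundred case the moment the attributes are read together.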

This is not to say we shouldn’t use AI. But we cannot and should not deploy AI tools that do not protect outliers.

Bias in AI is like gravity for the aerospace industry. For aircraft manufacturers, gravity is the single greatest challenge to overcome. If your plane can’t deal with gravity, you don’t have a plane.

For AI, that challenge is bias. And for the technology to take off safely, its developers and implementers must start building mechanisms that mitigate the irresistible force of the average, the common – the force of the pattern.

As an outlier, working in this space is not just a gift – it’s a responsibility. I have the privilege of standing alongside trailblazing women like Cathy O’Neil, Julia Angwin, Rumman Chowdhury, Hilke Schellmann, and Virginia Eubanks, whose groundbreaking work exposes how current AI dynamics and priorities fail innovation and society.

But, more importantly, my work on AI bias allows me to honour the tiny me I once was. The clumsy, lost, awkward girl who got a chance to defy and beat the odds because they were not set in algorithmic stone.

That is why reclaiming choice and chance from AI should not be a technical discussion, but the fight of our generation.