Eticas

High-performance AI without the risks. 

Resources

Case Studies

Curious how an audit of your AI applications can help improve your company’s performance while mitigating substantial risks? Our work with clients who are pioneering AI to lead their industries showcases Eticas’s expertise and capabilities, which are truly bar none.

Knowledge Center

Since 2012, the Eticas Foundation has worked with the world’s leading policy and government organizations, shaping policy and auditing their complex systems to ensure accuracy, fairness, and outputs that benefit society at large.

Case Studies

Navigating the US AI regulatory landscape

We’re pleased to introduce our latest resource: a downloadable infographic crafted to shed light on the diverse regulatory landscape of AI in the U.S. The infographic serves as a comprehensive roadmap, offering clarity on AI regulations at the state level.

Read More

Demystifying AI: What is it, actually?

Much has been said lately about artificial intelligence (AI). Incredible applications are at our fingertips: we can talk to a machine just as we do with friends who are miles away. We can ask machines to write poetry, create images out of nothing (but a huge database of information) and even give quick […]

Read More

The adversarial audit of VioGén: Three years later & new system version

Three years have passed since the Eticas Foundation, in collaboration with the Ana Bella Foundation, conducted an adversarial audit of VioGén. Key insights from the analysis of the VioGén system reveal significant concerns regarding accountability and transparency.

Read More

Adversarial Algorithmic Auditing Guide

Algorithmic auditing is an instrument for the dynamic appraisal and inspection of AI systems. This guide focuses on adversarial or third-party algorithmic audits, in which independent auditors or communities thoroughly examine the functioning and impact of an algorithmic system when access to it is restricted.

Read More

The European AI Office: The AI Act roadmap for enforcement

In recent years, AI has taken center stage in the European Union’s digital agenda, sparking discussions on its far-reaching impact on our economic landscape and societal fabric. While the benefits of AI are undeniable, concerns about its potential effects on fundamental rights and safety have prompted a proactive response from the EU. The recently adopted […]

Read More

Guide to Algorithmic Auditing

The Guide to Algorithmic Auditing is aimed at those responsible for the use of algorithms and for data-processing algorithm audits. It is also a tool for the general public, who are increasingly interested in how algorithms affect their daily lives.

Read More

The DSA explained

The European Union implemented far-reaching legislation to regulate the digital landscape: The Digital Services Act (DSA) aims to overhaul the rules for online services in the EU, which were last updated over two decades ago. Here’s what you need to know about the DSA and its potential impact. […]

Read More

Inside the algorithms of dating apps

In the ever-evolving landscape of modern romance, dating apps have become the gatekeepers of our romantic destiny. What once relied on serendipity and face-to-face encounters has now been replaced by algorithms, quietly working behind the scenes to curate our potential matches. While success stories often make headlines, the intricate workings of these algorithms and their […]

Read More

Algorithmic audit of Laura Robot

Since 2016, the Laura Robot has analyzed more than 8.6 million visits in 40 clinical and hospital centers across several Brazilian states. Eticas examined version 1.0 of the Laura application, created in 2017.

Read More

Algorithmic impact assessment of the predictive system for risk of homelessness in Allegheny County

Eticas audited the algorithmic system that predicts homelessness risk in Allegheny County, Pennsylvania, USA. The model took into consideration 964 variables from different departments and regulatory bodies of the county (demographic and mental health variables, as well as justice, prison, hospital-visit and substance-use registers, among others); a hypothetical sketch of this kind of multi-source pipeline follows below.

Read More
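
To make the audited setup easier to picture, here is a minimal, hypothetical sketch of how per-person records from several county data sources might be merged into one feature vector and scored. The source names, fields, weights, and scoring rule are illustrative assumptions only; they are not the county’s actual model or Eticas’s audit methodology.

```python
# Hypothetical sketch: joining per-person records from several county data
# sources into one feature vector and scoring homelessness risk.
# Sources, fields, and weights are illustrative assumptions, not the audited model.
from typing import Dict, List

# Each department contributes its own partial view of a person,
# keyed by a shared client identifier.
demographics = {"client-42": {"age": 37, "household_size": 1}}
mental_health = {"client-42": {"crisis_visits_last_year": 2}}
justice = {"client-42": {"jail_bookings_last_3y": 1}}
hospital = {"client-42": {"er_visits_last_year": 4}}

SOURCES: List[Dict[str, Dict[str, float]]] = [
    demographics, mental_health, justice, hospital,
]

def build_features(client_id: str) -> Dict[str, float]:
    """Merge all departmental records for one client into a flat feature dict."""
    features: Dict[str, float] = {}
    for source in SOURCES:
        features.update(source.get(client_id, {}))
    return features

def risk_score(features: Dict[str, float]) -> float:
    """Toy linear stand-in for a trained model: weighted sum of known features."""
    weights = {
        "crisis_visits_last_year": 0.3,
        "jail_bookings_last_3y": 0.25,
        "er_visits_last_year": 0.1,
        "household_size": -0.05,
    }
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

print(round(risk_score(build_features("client-42")), 2))
```

An audit of such a pipeline asks, among other things, whether the merged features proxy for protected attributes and whether the scores distribute errors evenly across demographic groups.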

Algorithmic audit of Koa Health: A success story

The results of our 2022–2023 algorithmic audit of Koa Health are available and we’re excited to share them with you!

Read More

Eticas conducts an external and independent audit of the VioGén system

In Spain, the level of risk to which a victim of gender-based violence is subjected is determined by an algorithm. The system to which it belongs is VioGén which, with more than 3 million risk evaluations, is the risk assessment system […]

Read More

The impact of facial recognition on people with disabilities

An In-Depth Audit of Biases in Facial Recognition Technology Impacting Individuals with Disabilities 

Read More

Eticas’s algorithmic audit finds compliance problems with ride-hailing platforms

Eticas, Taxi Project 2.0 and Observatorio TAS, three organizations focused on promoting ethical and fair practices in the digital economy, have published the results of their algorithmic audit of ride-hailing platforms. The audit reviews Uber, Cabify and Bolt’s compliance with competition, labor and consumer protection laws in Spain.

Read More

Deep learning for social services at the Barcelona City Council

With the collaboration of Universidad Pompeu Fabra (Barcelona), Eticas carried out an audit of the natural language processing (NLP) system of the Social Services area of the Barcelona City Council.

Read More

How are algorithms and AI being applied in healthcare?

Artificial intelligence has a wide range of applications in healthcare, helping to support the system and the development of medical solutions.

Read More

Technology and big data at the border

In border control and management, technology can act as a double-edged sword. On one hand, increasing efficiency by digitising daily functions is a welcome change for travellers.

Read More

Authoritarian technological surveillance or fundamental rights?

In Europe and across the world, the use of remote biometric identification (RBI) and surveillance systems such as facial recognition, in our publicly accessible spaces, represents one of the greatest threats to fundamental rights and democracy that we have ever seen.

Read More

A deeper look into facial recognition

We humans are incredibly good at facial recognition: normally, we are able to tell whether we know someone by looking at their face for less than half a second, and we can recognize people’s faces even when we can’t remember other details about that person, like their name or their job.

Read More

Chinook: an algorithm to help migration officers in Canada decide who to let in

The Russian invasion of Ukraine, which started on 24 February 2022, has provoked one of the greatest refugee crises of recent times. By 24 April 2022, more than 5.2 million refugees had fled the war in Ukraine, according to data compiled by the UN High Commissioner for Refugees (UNHCR). Do you know how much a migration algorithm influences this?

Read More

Loot Boxes: How the gaming industry manipulates and exploits consumers

Manipulative design, aggressive marketing, and misleading probabilities. Loot boxes present consumers with an array of problematic practices. Consumer organizations all over the world are coming together to call for regulation.

Read More

European Commission must uphold privacy, security and free expression

In May, the European Commission proposed a new law: the CSA Regulation. If passed, this law would turn the internet into a space that is dangerous for everyone’s privacy, security and free expression. Today, 8 June, we join this initiative led by EDRi alongside 73 other organisations in calling, through an Open Letter, for tailored, effective, rights-compliant and technically feasible alternatives to tackle this grave issue.

Read More

FemTech: My body, my data, their rules

Menstruation, despite having existed since the dawn of humanity and affecting half of the world’s population, remains largely unknown in many aspects. Paradoxically, however, for years, control over the menstrual cycle has been a lucrative source of income in different sectors.

Read More

Learning by Exposing BadData

Thanks to artificial intelligence, algorithms can be trained on and learn from data. But what an algorithm does depends a lot on how good the data is. Data can be corrupted, out of date, useless or illegal. In this way, bad data plays an important part in all kinds of decision-making processes and outcomes.

Read More

Is AI for or against the LGBTQ+ community?

In 2011, an Android app offered a test of twenty stereotyped questions through which parents could reportedly find out their children’s sexual orientation.

Read More

Location data is personal data, isn’t it?

A few days ago, the European privacy group NOYB (None of your business) filed an appeal against a decision of the Spanish Data Protection Authority (AEPD) regarding the phone provider Virgin Telco’s refusal to provide the location data it has stored about a customer, after this person requested it in December 2021. Now the case is in the hands of the Audiencia Nacional (Spain’s national court).

Read More

Why and how media curation by algorithm contributes

These days, many of us get informed about what’s going on in the world, in our countries and even around us in our towns by reading, watching or listening to online media: from online newspapers, to online video, to online radio and podcasts.

Read More

TikTok Pauses Changes to its Privacy Policy

Last week was the date chosen by TikTok to make the changes to its privacy policy effective. At the beginning of June, this popular social network announced “If you are 18 or over and in the EEA, the UK, or Switzerland, TikTok is making a legal change to how it will use your on-TikTok activity to personalise your ads.”

Read More

NarxCare, an algorithm to predict the risk of narcotic

Opioid abuse has, over the last few decades, grown into a crisis. Different responses have been developed to handle it; one solution, NarxCare, has positive intentions but a concerning implementation.

Read More

Tips to understand how AI impacts young people

Generations Alpha and Z were born with AI as one of the tools present in their daily lives. They can be called AI natives, as they have never known a world without it. But even if AI has many positive uses, such as personalized educational trajectories, it also poses threats to these generations. Are we educating young people, and even their parents, to use AI systems (and technology) responsibly?

Read More

Indiana welfare eligibility processes for welfare, food stamps

The 2006 Indiana welfare eligibility modernization experiment was fairly straightforward: the welfare-benefits system would transition to serving applicants through an online platform. Applications, income levels, and personal information would all be processed remotely (Eubanks 2018).

Read More

High school admissions in New York City

The New York City Department of Education has deployed algorithms in order to process student admissions into New York City’s public high school system.

Read More

Civil Society Organizations Call on the Government

More than 50 organizations call for citizen participation in the development of AI-related policies to guarantee respect for human rights

Read More

Externally Auditing Algorithms

Six months ago we published our first external audit of VioGén, the algorithmic system used in Spain to protect victims of domestic violence. We continue to stand for people’s rights, and we would like to share that we are working on projects that will have a great impact on public awareness of algorithmic accountability.

Read More

What goes on in AI systems used in insurance?

Insurance companies are supposed to protect us from many situations. They are supposed to protect our properties, our health, our trips, our vehicles, and even our lives. But do they do so equally for everyone who buys an insurance policy?

Read More

¿Quién defiende tus datos?

Do you want to know whether the basic-services companies we use on a regular basis compromise your data on their platforms?

Read More

PRECIRE, An algorithm for ranking potential employees using voice recognition

Mitchel Ondili, November 2022

Automation in commercial services has been touted as a cost- and personnel-saving mechanism, as well as a way of enforcing ‘objective’ hiring standards. The hiring process is defined by two stages: recruitment, for identifying potential employees, and selection, for ranking applicant information. Recommender systems are often used for recruitment processes, primarily across […]

Read More

Xantura’s RBV, an algorithm to assess the fraud risk of benefit claimants

Xantura is a UK technology firm that provides automated ‘risk-based verification’ (RBV) to around 80 councils in the UK. Risk-based verification evaluates the level of risk an option poses in order to recommend it for a given process; it is very often used in insurance and benefits claims. Xantura’s RBV is intended […]

Read More

We’re at SXSW!

Every year in mid-March, Austin, Texas, becomes the epicenter of the music, film, and interactive media world as South by Southwest (SXSW) takes place. Since its inception in 1987, this annual event has grown in both size and scope, attracting thousands of visitors from all over the world, and we’re sharing our project with the audience!

Read More

Eticas wins the Future of Data Challenge Award

We have been recognized by the Future of Data Challenge for our activism in support of responsible technology, and for reverse-engineering AI systems through external audits to demystify “black boxes”, expose bias, and train and empower impacted communities.

Read More

Auditing Algorithms live at the MozFest

Join Eticas’s guide to adversarial algorithmic auditing live at MozFest, an interactive workshop where you’ll learn the intricacies of adversarial audits and identify the best approach to reverse-engineering algorithms from a socio-technical perspective.

Read More

European Parliament acknowledges risks posed

On 11 May, the European Parliament voted on the EU’s draft AI Act, defining the world’s first rules on Artificial Intelligence.

Read More

Acceptability analysis of biometric identity

The MADRAS project has developed a biometric photosensor for the micro-mobility sector, which offers a materials-driven improvement of Organic and Large Area Electronics (OLAE) devices.

Read More

Knowledge Center

Since 2016, the Laura Robot has analyzed more than 8.6 million visits in 40 clinical and hospital centers across several Brazilian states. Eticas examined version 1.0 of the Laura application, created in 2017.

The main objective of the Laura system is to provide early warning of clinical deterioration that could lead to death, with the aim of reducing mortality and hospital service costs. It is an artificial intelligence system that classifies a patient’s risk of clinical deterioration after analyzing the indicators from the patient’s last five vital-sign collections (see the sketch below).

Laura’s algorithmic audit focused on exploring possible risks of algorithmic bias or discrimination.
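
As a concrete illustration of the kind of input such a system consumes, here is a minimal, hypothetical sketch that takes a patient’s last five vital-sign collections and assigns a coarse risk class. The vital-sign fields, thresholds, and rule-based scoring are assumptions for illustration only; they do not reflect Laura’s actual implementation.

```python
# Hypothetical sketch: risk classification from a patient's last five
# vital-sign collections. Fields, thresholds, and the rule-based scoring
# are illustrative assumptions, not Laura's real model.
from dataclasses import dataclass
from typing import List

@dataclass
class VitalSigns:
    heart_rate: float          # beats per minute
    respiratory_rate: float    # breaths per minute
    systolic_bp: float         # mmHg
    temperature: float         # degrees Celsius
    oxygen_saturation: float   # percent

def classify_risk(history: List[VitalSigns]) -> str:
    """Toy rule-based stand-in: count abnormal readings across the last five
    collections; a real system would feed these features to a trained model."""
    if len(history) != 5:
        raise ValueError("expected exactly five vital-sign collections")
    abnormal = 0
    for vs in history:
        abnormal += vs.heart_rate > 110 or vs.heart_rate < 50
        abnormal += vs.respiratory_rate > 24
        abnormal += vs.systolic_bp < 90
        abnormal += vs.temperature > 38.5
        abnormal += vs.oxygen_saturation < 92
    if abnormal >= 5:
        return "high"
    if abnormal >= 2:
        return "medium"
    return "low"

# Example: five stable readings should come out as low risk.
stable = [VitalSigns(78, 16, 118, 36.8, 98) for _ in range(5)]
print(classify_risk(stable))
```

A bias audit of such a classifier would compare its error rates and risk distributions across patient groups, since thresholds tuned on one population can systematically misclassify another.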

Advances in AI, especially facial recognition, hold great promise, but the risks to diverse users need careful consideration. This audit aims to ensure empathetic and conscientious innovation, using AI to benefit everyone.

The Impact of Facial Recognition on People with Disabilities

Algorithmic auditing is an instrument for the dynamic appraisal and inspection of AI systems. This guide focuses on adversarial or third-party algorithmic audits, in which independent auditors or communities thoroughly examine the functioning and impact of an algorithmic system when access to it is restricted.

Adversarial Algorithmic Auditing Guide

The European Union implemented far-reaching legislation to regulate the digital landscape: The Digital Services Act (DSA) aims to overhaul the rules for online services in the EU, which were last updated over two decades ago. Here’s what you need to know about the DSA and its potential impact.

The DSA Explained

What is the impact of social media on the representation and voice of migrants and refugees in Europe? What are the challenges and opportunities to avoid their invisibilization and promote a fair representation?

Social Media & the Representation of Migrants

The video platform perpetuates a dehumanizing image of migrants, and its recommender system rewards xenophobic narratives, feeding into a context of rising far-right political discourse.

Portrayal of Migrants on YouTube

Along with Observatorio TAS and the Taxi Project, we unveil the hidden impacts of ride-hailing algorithms in Spain through an adversarial audit that identifies potential harms to users, workers, and competitors in the platform economy.

A closer look at ride-hailing platforms

An analysis of the EU’s investment in AI development reveals a significant mismatch between the EU’s ambition to lead on responsible AI and the allocation of its own funds to deliver on this objective.

How public money is shaping the future of AI

An email instead of a letter, online shopping instead of driving to a mall, a video conference instead of an in-person meeting. Are these activities as green as we think, or do they hide an environmental footprint?

The Hidden Environmental Cost of Data

Spotting privacy and ethical implications: phones with facial recognition, administrations storing our bodily traits, voice recordings determining our access to jobs or benefits… Are biometrics a security miracle, or a threat to human rights?

Biometric Technologies & Human Rights

Addressed to companies, public bodies and citizens, this guide to auditing algorithms offers a general and replicable methodology.

Guide to Algorithmic Auditing

This Netherlands-based software startup raised €1M in funding three years after its founding. It promises to give users visibility into, and a choice over, the personal information Google holds about them, and it also lets them sell that data to the company’s partners in exchange for rewards. But how safe is it?

Rita Personal Data: too good to be true?

Private companies offering direct-to-consumer (DTC) DNA services have increased their market significantly in recent years. It was estimated that by 2021 more than 100 million people would have provided their DNA to four leading commercial ancestry and health databases.

A ranking of direct-to-consumer DNA testing

After months of work, we are thrilled to present the first annual report of our Observatory of Algorithms with Social Impact (OASI). We want to share its key findings, and invite you to take a look at it to get a deeper understanding of this sample of the algorithmic landscape.

OASI 1st Annual Report

VioGén is an algorithm that determines the level of risk faced by a victim of gender-based violence and establishes her protection measures in Spain. It is the largest risk assessment system in the world, with more than 3 million registered cases.

The External Audit of the VioGén System

Eticas Foundation has developed the Observatory of Algorithms with Social Impact, or OASI.

OASI

Relevant Industry Articles

November 22, 2023

New York Times: A.I. Belongs to the Capitalists Now

“In a larger sense, what’s happening at OpenAI is a proxy for one of the biggest fights in the global economy today: how to control increasingly powerful A.I. tools, and whether large companies can be trusted to develop them responsibly.”

October 30, 2023

Bloomberg: Regulate AI? How US, EU and China Are Going About It

“Some US cities and states have already passed legislation limiting use of AI in areas such as police investigations and hiring, and the European Union has proposed a sweeping law that would put guardrails on the technology. While the US Congress works on legislation, President Joe Biden is directing government agencies to vet future AI products for potential national or economic security risks.”

September 27, 2023

Axios: VC firms working with D.C. to "self-regulate" AI startup investing

“A group of venture capital firms have pledged to ensure that the startups they invest in adhere to responsible AI, to correct for the ‘move fast and break things’ philosophy that drove the rise of social media technologies without considering second- or third-order effects.”