Automating (In)Justice: An Adversarial Audit of RisCanvi

Criminal justice is undergoing a profound transformation with the integration of predictive algorithms and AI, and with that transformation comes a growing need to audit these systems. In the United States, tools like COMPAS and PredPol exemplify this shift and have sparked intense debate about transparency, bias, and the reliability of algorithmic decision-making.

At the forefront of this discussion in Europe is the RisCanvi tool, which has been used in Catalonia, Spain, since 2009 to assess inmates’ risk of reoffending and violence. At Eticas, we conducted the first-ever adversarial AI audit of RisCanvi to shed light on its effectiveness and fairness.

The RisCanvi Tool: An Overview

RisCanvi plays a crucial role in the Catalan criminal justice system, influencing parole and sentencing decisions. Despite its importance, many inmates, lawyers, and court officials are unaware of its existence or how it operates. Concerns have been raised about the fairness, accuracy, and transparency of RisCanvi. Our goal at Eticas was to fully understand these issues through an adversarial audit.

Conducting the Adversarial AI Audit

Our audit employed a dual-method approach:

Ethnographic Audit: We conducted interviews with inmates and staff both within and outside the criminal justice system. This provided valuable insights into the lived experiences and perceptions of those directly impacted by RisCanvi.

Comparative Output Audit: We analyzed public data on the inmate population and recidivism rates, comparing this information with RisCanvi’s risk factors and risk scores to evaluate the system’s accuracy and fairness.
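To make the second method concrete, the sketch below shows, in simplified form, what a comparative output audit can look like: predicted risk labels are compared against observed reoffending, broken down by demographic group, so that disparities in error rates become visible. The field names, labels, and data are hypothetical illustrations, not RisCanvi’s actual variables or the data we worked with.

```python
# Minimal sketch of a comparative output audit, assuming access to a table of
# (risk_level, reoffended, group) records. Field names and labels are
# hypothetical; the real RisCanvi data schema is not public in this form.
from collections import defaultdict

def error_rates_by_group(records):
    """Compute false positive and false negative rates per demographic group.

    Each record is a dict with:
      - 'risk_level': 'high' or 'low' (the tool's prediction)
      - 'reoffended': True/False (the observed outcome)
      - 'group': a demographic attribute used to check for disparate impact
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["reoffended"]:
            c["pos"] += 1
            if r["risk_level"] == "low":
                c["fn"] += 1  # flagged low risk, but reoffended
        else:
            c["neg"] += 1
            if r["risk_level"] == "high":
                c["fp"] += 1  # flagged high risk, but did not reoffend
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Synthetic example records; in a real audit these would come from public
# population and recidivism statistics matched against the tool's outputs.
sample = [
    {"risk_level": "high", "reoffended": False, "group": "A"},
    {"risk_level": "low", "reoffended": True, "group": "A"},
    {"risk_level": "low", "reoffended": False, "group": "B"},
    {"risk_level": "high", "reoffended": True, "group": "B"},
]
print(error_rates_by_group(sample))
```

Large gaps between groups in false positive or false negative rates are exactly the kind of signal that prompts closer scrutiny of a system’s fairness.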

Key Findings

Lack of Awareness and Trust

One of our most striking findings was that RisCanvi is largely unknown to those it impacts the most—inmates. Additionally, many individuals working with the system do not trust it and lack proper training on its functionality and weighting mechanisms.

Opacity and Non-Compliance

RisCanvi is criticized for its lack of transparency. It has not adhered to Spain’s regulations on the use of automated decision-making systems, which have required audits since 2016. This opacity raises significant concerns about accountability and fairness.

Reliability and Fairness Issues

Our data indicates that RisCanvi may be neither fair nor reliable. The system has failed to standardize outcomes and limit discretion, two of the key benefits such tools promise. There is no clear relationship between risk factors, risk behaviors, and risk scores, which undermines the system’s reliability.
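One way to probe that relationship, offered here only as an illustrative sketch, is to check how strongly each input factor correlates with the final score the tool produces. The factor names, scores, and data below are synthetic assumptions for the sake of the example; RisCanvi’s actual items and weighting are not fully public.

```python
# Illustrative consistency check: does the final risk score actually track
# the individual factors it is supposed to aggregate? Factors with little or
# no correlation to the score (or with an unexpected sign) point to an
# unclear relationship between inputs and outputs.
import statistics

def factor_score_correlations(cases, factor_names, score_key="risk_score"):
    """Return the Pearson correlation between each factor and the final score.

    `cases` is a list of dicts holding numeric factor values plus the tool's
    final score under `score_key`.
    """
    scores = [c[score_key] for c in cases]
    return {
        name: statistics.correlation([c[name] for c in cases], scores)
        for name in factor_names
    }

# Synthetic example: 'prior_offenses' drives the score, 'age' barely does.
synthetic_cases = [
    {"prior_offenses": 0, "age": 45, "risk_score": 1},
    {"prior_offenses": 2, "age": 23, "risk_score": 3},
    {"prior_offenses": 5, "age": 31, "risk_score": 7},
    {"prior_offenses": 7, "age": 52, "risk_score": 8},
]
print(factor_score_correlations(synthetic_cases, ["prior_offenses", "age"]))
```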

Conclusions

Based on our findings, we conclude that RisCanvi does not currently provide the necessary guarantees to inmates, lawyers, judges, and criminal justice authorities. While our audit findings are not final due to limited access to system data, there is sufficient evidence to warrant further scrutiny.

When a low-risk inmate reoffends, it is unclear whether this is due to an unavoidable error rate or to flaws in an unreliable system. Similarly, when an inmate is denied an increased level of liberty due to a high-risk designation, the fairness of that decision remains in question. It is imperative that further investigation be conducted to ensure justice and fairness within the system.

Moving Forward

The integration of predictive algorithms into criminal justice has great potential, but also poses significant risks. The RisCanvi case highlights the need for transparency, rigorous testing, and ongoing evaluation. At Eticas, we advocate for the informed and responsible use of AI to improve justice, not hinder it. Our work on RisCanvi is a step towards ensuring that technology serves as a tool for fairness and reliability in criminal justice systems worldwide.

Access the full report via the links below: