The adversarial audit of VioGén: Three years later & new system version

In the month of International Women’s Day, we revisit the growing use of algorithms and artificial intelligence (AI) systems in the fight against gender-based violence. Gender-based violence continues to exact a devastating toll: according to the Government Delegation against Gender-based Violence, 55 women were murdered in Spain in 2023, reversing the downward trend of the previous three years and underscoring the pressing need for effective intervention. The Spanish Ministry of the Interior uses an algorithm called VioGén to determine the risk level faced by a victim of gender-based violence and to establish her protection measures. Despite assertions from VioGén officials about the efficacy of its surveillance and prevention mechanisms, recent data reveals significant gaps in protection. Since its launch in 2007, VioGén has aimed to centralize efforts to predict risk, monitor cases, and protect victims of gender violence, yet critical issues persist.

Three years have passed since Eticas Foundation, in collaboration with the Ana Bella Foundation, conducted an adversarial audit of VioGén. Key insights from that analysis reveal significant concerns about accountability and transparency. VioGén’s algorithm uses classical statistical models to assess risk from a set of predefined variables, yet it operates largely without accountability or transparency. Although it was designed as a recommendation system, police officers rarely deviate from the automatically assigned risk score, so in the majority of cases the system effectively dictates the protection measures victims receive. Strikingly, 45% of risk assessments are labeled “unappreciated” (no appreciated risk), with severe consequences: 71 women murdered between 2003 and 2021 had previously filed reports without receiving adequate protection.
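To make concrete how this kind of classical, weighted-indicator scoring works, here is a minimal sketch in Python. Every indicator name, weight, and cut-off below is a hypothetical illustration: VioGén’s actual variables and coefficients have not been published, which is precisely the transparency gap the audit points to.

```python
# Minimal sketch of a weighted-indicator risk scorer, in the spirit of
# classical actuarial questionnaires. All indicator names, weights, and
# thresholds are hypothetical; the real VioGén coefficients are not public.

# Hypothetical yes/no indicators a police questionnaire might record.
INDICATOR_WEIGHTS = {
    "prior_physical_violence": 3.0,
    "threats_with_weapons": 4.0,
    "escalation_in_frequency": 2.0,
    "victim_perceives_danger": 2.0,
    "psychological_violence_only": 1.0,  # a low weight illustrates the audit's concern
}

# Hypothetical cut-offs mapping the raw score to five risk levels,
# mirroring VioGén's scale from "unappreciated" to "extreme".
RISK_LEVELS = [
    (0.0, "unappreciated"),
    (3.0, "low"),
    (6.0, "medium"),
    (9.0, "high"),
    (12.0, "extreme"),
]

def assess_risk(answers: dict) -> tuple:
    """Sum the weights of affirmative answers and map the total to a level."""
    score = sum(w for name, w in INDICATOR_WEIGHTS.items() if answers.get(name))
    level = RISK_LEVELS[0][1]
    for threshold, name in RISK_LEVELS:
        if score >= threshold:
            level = name
    return score, level

# A case of purely psychological violence falls below every cut-off,
# showing how low weights can push such cases into "unappreciated".
print(assess_risk({"psychological_violence_only": True}))  # (1.0, 'unappreciated')
```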

Additionally, over 80% of the women surveyed reported problems with VioGén, pointing to fundamental flaws in its implementation. The system also relies on answers that victims give while in an emotional and often traumatized state at the moment of reporting, which can introduce bias and misrepresentation; the conditions under which the questionnaire is administered urgently need revision, and its questions need clarification. Moreover, the low share of women who are informed of their risk score underscores the need for greater transparency, independent oversight, and end-user engagement. Notably, Eticas is concerned by the system’s tendency to assign low risk scores, particularly in cases involving psychological violence, thereby jeopardizing effective protection. These findings underline the need for a comprehensive review and reform of VioGén covering accountability, transparency, and end-user engagement. See the full audit results here.

Last February, the Spanish Ministry of the Interior announced a series of improvements aimed at strengthening the protection of victims. These included launching training sessions for the law enforcement personnel who work with the VioGén system and preparing for the upcoming rollout of VioGén II. The new version is expected to introduce advanced functionality to improve the monitoring of ongoing cases and to institute a more rigorous review process for deactivation criteria. However, from what we know, VioGén II fails to incorporate crucial insights and recommendations from our adversarial audit, leaving the system’s most significant shortcomings unaddressed. Among the audit’s troubling findings is that police officers rely heavily on the system’s automated risk assessments, deviating little from its recommendations and effectively delegating critical protection decisions to an algorithm. Given that the algorithm’s output holds substantial sway over final decisions, why are administrations hesitant to audit these systems?

Although our adversarial audit proposed several recommendations to improve the system, to our knowledge VioGén remains unaudited.

However, Spain is not the only country using algorithms and AI systems to combat gender-based violence. Initiatives built on different forms of technology have emerged globally to fight gender discrimination. Here are some of them:

- Bright Sky, developed by the Vodafone Foundation and Hestia in 2018, offers support for victims of domestic abuse.
- TecSOS, used in several European countries, provides domestic abuse survivors with access to emergency services.
- Easy Rescue, an app in Turkey, aids women facing violence, with features for sharing routes and communicating with contacts.
- Nokaneng, Lesotho’s app, offers information on gender-based violence, rights, and support services, as well as a safe space for counseling.
- The Gender Based Violence Command Centre (GBVCC) in South Africa operates as a 24/7 support service connecting victims to digital educational content.

These platforms, among others, play a vital role in addressing gender-based violence globally by providing support, information, reporting mechanisms, and training for both survivors and support staff.  

Ensuring the responsible and fair deployment, use, and procurement of technology is essential to avoid perpetuating discrimination and to mitigate risks and errors. This requires a commitment to transparency and accountability in the design and implementation of these systems. Algorithmic audits play a vital role in this oversight framework, providing insight into the fairness and reliability of algorithms. We therefore advocate that public administrations prioritize auditing these systems to uphold ethical standards and mitigate potential harm. Thorough algorithmic audits let us identify and address shortcomings and biases in these technologies, fostering a more equitable and just society.
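As an illustration of what one such audit check can look like, here is a minimal sketch that measures how often cases assigned the lowest risk levels later resulted in repeat violence (a false-negative rate). The records are synthetic and the field names hypothetical; a real adversarial audit would run this kind of analysis over actual case outcomes.

```python
# Minimal sketch of one audit check: the share of cases per assigned risk
# level in which harm nonetheless recurred. All records here are synthetic;
# a real audit would use actual case outcomes.

from collections import defaultdict

# Each record: (risk level the system assigned, whether harm recurred later).
cases = [
    ("unappreciated", True), ("unappreciated", False), ("unappreciated", True),
    ("low", True), ("low", False),
    ("medium", False), ("high", False), ("extreme", False),
]

def false_negative_rates(records):
    """Per assigned level, the fraction of cases where harm recurred anyway."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for level, harm_recurred in records:
        totals[level] += 1
        if harm_recurred:
            misses[level] += 1
    return {level: misses[level] / totals[level] for level in totals}

# Elevated rates at "unappreciated" or "low" would flag the under-scoring
# problem the audit describes, e.g. in psychological-violence cases.
print(false_negative_rates(cases))
```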

The original report is available in Spanish at the following link.