Xantura’s RBV, an algorithm to assess the fraud risk of benefit claimants

Xantura is a UK technology firm that provides automated ‘risk-based verification’ (RBV) to around 80 local councils. Risk-based verification refers to assessing the level of risk a case poses in order to decide how much scrutiny it should receive; it is widely used in insurance and benefits claims processing. Xantura’s RBV is intended to screen welfare claimants and estimate their probability of committing fraud or error. The model uses around 50 variables, including the claimant’s area of residence.

The resulting assessment divides claims into three categories: low, medium, and high risk. Low-risk cases are streamlined, while high-risk claims require additional verification. If assessments of high-risk claims rest on proxy data shaped by bias, they create further stigmatization and reinforce a negative feedback loop for those who may need the benefits the most. Concerningly, reviews of the system note that a claim’s risk rating can never be downgraded; it can only be increased.
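To make the reported behaviour concrete, here is a minimal sketch of a three-tier classifier with an escalation-only update rule. Xantura’s actual model, its roughly 50 variables, weights and thresholds are not public, so every name and number below is an assumption for illustration only.

```python
# Illustrative sketch only: the real RBV model, its variables, weights and
# thresholds are not public. All names and numbers here are assumptions.
from enum import IntEnum


class RiskTier(IntEnum):
    LOW = 1      # claim is streamlined with minimal checks
    MEDIUM = 2   # standard verification
    HIGH = 3     # additional evidence and manual review required


def classify(score: float) -> RiskTier:
    """Map a fraud/error risk score (hypothetical 0-1 scale) to a tier."""
    if score < 0.3:
        return RiskTier.LOW
    if score < 0.7:
        return RiskTier.MEDIUM
    return RiskTier.HIGH


def update_tier(current: RiskTier, new_score: float) -> RiskTier:
    """One-way ratchet: a reassessment can raise the tier but never lower it,
    mirroring the behaviour reported for the system."""
    return max(current, classify(new_score))


# Example: a claim already rated HIGH stays HIGH even if a later score is low.
assert update_tier(RiskTier.HIGH, 0.1) == RiskTier.HIGH
assert update_tier(RiskTier.LOW, 0.8) == RiskTier.HIGH
```

The one-way `update_tier` rule is what makes the lack of an appeal or downgrade path so consequential: once a claimant is flagged, no later evidence can move them back to a lighter-touch process.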

The company asserted that it did not rely on information protected under anti-discrimination law. However, it later admitted to using claimants’ age as a variable, a protected characteristic under the UK’s Equality Act. In response, the company cited a legal exception allowing financial service providers to include age as a variable. This admission was only the beginning of the scrutiny: a report by Big Brother Watch indicated that the assessment also draws on proxy data, such as claimants’ gender and the ethnic mix of the neighborhood they live in, which often leads to biased associations and unfair generalizations.

The benefits system, often inundated with requests, is increasingly reliant on automation to process claims. However, there are cases indicating that these models operate on data skewed against minority groups, unfairly target the poor, and make the process hostile to already vulnerable people.

There are also several concerns around data protection, since the companies involved are independent third parties receiving information from members of the public, who often feel compelled to submit their data out of need. It is estimated that around 1.6 million people submit personal data for benefits claims in the UK. For instance, a person seeking housing benefit must provide identifying information, financial records, and evidence of their eligibility or continued need. Additionally, the use of RBV can occur without a claimant’s knowledge or consent. This means that the person does not know they are subject to the algorithm’s scoring, nor which data are used to determine their eligibility.

Given the potential for biased and discriminatory results, and the kinds of industries and processes they are applied to, risk-based verification systems should always be subject to Equality Impact Assessments and audits to ensure that the data considered does not over-represent already marginalized groups. Additionally, mandatory disclosure to claimants should be enforced, and a process of appeal should be instituted to give people agency over automated decision-making that can harm social safety nets. Lastly, it is critical that the data points considered in the assessment be made public; if they are not, the RBV should be suspended.

Michele Gilman, professor of law at the University of Baltimore School of Law, stated: ‘Automated fraud detection is too often built on the assumptions that computers are magic and fraud among the poor is endemic’. Subjecting benefit claimants to varying degrees of over-representative datafication exposes them to processes they know nothing about and cannot appeal, and increases surveillance. When a subset of citizens is repeatedly and increasingly datafied, their personal data is at higher risk of misuse, their informed consent is often negated, and they are fed into negative feedback loops in other algorithms. The United Nations Special Rapporteur on extreme poverty and human rights has raised the alarm about how the automation of social need dehumanizes and inflicts misery on the working poor.

You can find more information about Xantura and many other algorithmic systems in the OASI Register, and you can read more about algorithmic systems and their potential social impacts on the OASI pages. And if you know of an algorithm with potential social impact that we haven’t yet included, you can let us know.