Insurance companies are supposed to protect us in many situations: they cover our property, our health, our trips, our vehicles, and even our lives. But do they do so equally for everyone who buys a policy?
These companies use algorithms for different purposes. One of them is to assign each customer a score based on their credit profile and driving record.
When it comes to car insurers, Consumer Reports notes that in the U.S. “your score is used to measure your creditworthiness—the likelihood that you’ll pay back a loan or credit-card debt. But you might not know that car insurers are also rifling through your credit files to do something completely different: to predict the odds that you’ll file a claim. And if they think that your credit isn’t up to their highest standard, they will charge you more, even if you have never had an accident”.
In the past, it was easy to understand how insurance companies calculated risks and set the prices of their policies. The correlation was simple to see: a history of illness meant a more expensive health policy, and a long record of fines meant a more expensive car policy. Once big data and artificial intelligence come into play, however, these calculations become far more complicated and opaque.
The lack of equality runs throughout society, and the challenge of ending it extends to artificial intelligence: discrimination based on origin, gender, age or socioeconomic status is also present in these systems. Since no regulation obliges these companies to be transparent with their clients about which algorithms they are subjected to, we have no information on how these systems are being used or how they influence decisions on prices or coverage.
The goal must be to achieve equality and remove bias, but this is a complicated task, since those biases are also present in the people who develop this kind of technology. Beyond that, it must be taken into account that insurance has always relied on some form of bias when calculating risks and prices in order to remain financially viable; the challenge, therefore, is to calculate risk fairly, without taking into account data that may discriminate against vulnerable groups.
One discriminatory data point may be the ZIP code, since knowing it reveals other information, such as the neighborhood's unemployment rate or whether its population is mainly made up of a minority. In 2017, ProPublica released a report showing that “minority neighborhoods pay higher car insurance premiums than white areas with the same risk”. They affirmed: “Despite laws in almost every state banning discriminatory rate-setting, some minority neighborhoods pay higher auto insurance premiums than do white areas with similar payouts on claims. This disparity may amount to a subtler form of redlining, a term that traditionally refers to the denial of services or products to minority areas. And, since minorities tend to lag behind whites in income, they may be hard-pressed to afford the higher payments”. For anyone who knows how an algorithm works, it is easy to see how these tools can amplify this kind of discrimination: algorithms are trained on historical data, which tends to be biased, as the sketch below illustrates.
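The following is a minimal, entirely synthetic sketch of that proxy effect: the pricing model is never shown the protected attribute, yet it reproduces the historical disparity because the ZIP-derived feature is correlated with it. The variable names and numbers are invented for illustration only.

```python
# Hypothetical illustration: a pricing model never sees the protected attribute,
# yet reproduces group disparities because a ZIP-derived feature acts as a proxy.
# All data is synthetic and the feature names are made up for this example.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute (e.g. minority-neighborhood indicator) -- never given to the model.
group = rng.integers(0, 2, size=n)

# A ZIP-code-based index that correlates strongly with the protected attribute.
zip_index = group + rng.normal(0, 0.3, size=n)

# In this synthetic world, actual accident risk is identical across groups.
true_risk = rng.normal(1.0, 0.2, size=n)

# Historical premiums carried a biased surcharge on minority ZIP codes.
historical_premium = 500 * true_risk + 150 * group + rng.normal(0, 20, size=n)

# Train only on "neutral" features: risk and the ZIP-derived index.
X = np.column_stack([true_risk, zip_index])
model = LinearRegression().fit(X, historical_premium)
predicted = model.predict(X)

# The learned prices still differ by group, even though group was never a feature.
print("mean predicted premium, group 0:", round(predicted[group == 0].mean(), 1))
print("mean predicted premium, group 1:", round(predicted[group == 1].mean(), 1))
```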
For example, we know that when algorithms are involved, women can be granted between 10 and 20 times less credit than men, simply because they are women and because of how the data represent them. How can technology behave in a gender-discriminatory way? Because the historical data tend to show men as the main providers of financial support for their families, while women were pushed out of work environments and relegated to care tasks.
The same thing happens when we talk about insurance. Who was historically unable to pay insurance fees? Vulnerable groups with a lower socioeconomic status, and it seems that algorithms are still discriminating against them.
A poorly designed algorithm, or one that has not gone through an algorithmic audit to remove possible biases before deployment, can mean, as we have seen, a loss of clients for an insurance company, as well as a risk to its reputation. One problem in this situation is that these systems are often contracted without knowing how they were designed or what lies behind their code.
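As a rough idea of what one check in such an audit might look like, the sketch below computes a disparate impact ratio: the rate of favourable outcomes for a protected group divided by the rate for the reference group. The 0.8 threshold follows the common "four-fifths" rule of thumb; the outcome and group data here are entirely hypothetical.

```python
# Minimal sketch of one pre-deployment audit check: the disparate impact ratio.
import numpy as np

def disparate_impact(favourable: np.ndarray, group: np.ndarray) -> float:
    """Favourable-outcome rate for the protected group (1) over the reference group (0)."""
    rate_protected = favourable[group == 1].mean()
    rate_reference = favourable[group == 0].mean()
    return rate_protected / rate_reference

# Example: 1 means the applicant was offered the standard (cheaper) premium.
favourable = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
group      = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

ratio = disparate_impact(favourable, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: the protected group's outcomes fall below the four-fifths rule")
```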
Having responsible and non-discriminatory AI systems is a competitive advantage, and upcoming regulations, such as the European AI Act, are expected to push in that direction.