
Why and how media curation by algorithm contributes to disseminating misinformation

These days, many of us learn about what’s going on in the world, in our countries and even around us in our towns by reading, watching or listening to online media: from online newspapers, to online video, to online radio and podcasts. And many of us receive those articles and media clips through social media applications like Facebook and Twitter, through online media platforms like YouTube and Spotify, or as links in our Google Search results.

Something that all those online platforms and applications have in common is that they use algorithms to automate the ways in which they find and curate content and offer results and recommendations to us, the users.

Another thing those online platforms tend to have in common is that they offer their services seemingly for free: in most cases, the user doesn’t pay money to get content search results and recommendations. Instead, online platforms monetise the time and attention that users spend on their sites by collecting and selling data about those users to advertisers. That has turned out to be a very lucrative business model, both for media platforms and for advertisers, who are thus able to target very precise audiences with highly personalised advertising.

Because of that, online platforms have an incentive to promote content that will keep users hooked on their platforms for longer, and that will make them come back again and again for more. It also happens that human nature tends to be predictable along some general lines: most of us are attracted by, and find more interesting, content that’s emotional, sensationalistic, negative and simplistic. As a result of both facts, that kind of content ends up being promoted and offered to users much more than content that’s fact-based, moderate and nuanced about complex issues.
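
To make that incentive concrete, here is a deliberately simplified, hypothetical sketch of an engagement-driven ranker. No platform’s actual code is public, and every name and number below is made up for illustration:

```python
# A hypothetical, simplified engagement-driven ranker; real platform
# systems are far more complex and are not public.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # model's guess at time spent on the item
    predicted_shares: float         # model's guess at how often it is shared

def engagement_score(item: Item) -> float:
    # The objective rewards time-on-platform and virality; nothing here
    # asks whether the content is true.
    return item.predicted_watch_seconds + 30.0 * item.predicted_shares

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Nuanced analysis of a policy trade-off", 45.0, 0.2),
    Item("SHOCKING claim you won't believe", 120.0, 3.5),
])
print([item.title for item in feed])  # the sensationalist item ranks first
```

Truthfulness simply isn’t part of the objective being maximised, which is the root of the problem described above.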

In other words: the way that most online media platforms work, by algorithmically curating the content they offer to users, tends to spread misinformation, which here we define as “false or inaccurate information, especially [but not only] that which is deliberately intended to deceive”. And that’s the reason why Eticas’s Observatory of Algorithms with Social Impact (OASI) identified “disseminating misinformation” as one of the most potentially significant negative social impacts of algorithms:

The use of algorithms may result in the production or distribution of online content that’s purposely untrue, wrong or partial, or that otherwise contributes to making people think or believe something that’s not true. That has been the case, for instance, regarding the climate crisis or the use of vaccines, topics about which there is scientific consensus.

If we look at the OASI Register of algorithms and filter the more than one hundred entries by “disseminating misinformation” as the type of social impact, we find that –at the time of writing– there are eight entries, which already paint a good picture of the diverse array of cases in which media platforms curated by algorithmic systems may spread misinformation into the public conversation in our societies.

One of them is the content recommendation algorithm of TikTok, a social media application in which users share short videos and which has become very popular worldwide in recent years, especially among teenagers and young adults. Many videos are light and entertaining, and may show TikTok users doing something funny or interesting during their daily lives. But a large number of videos are about news and current affairs, and also about topics like Covid-19 or the war in Ukraine. And because of the way its algorithm works, TikTok has been found to promote sensationalistic and extremist content that includes misinformation.

Another interesting case is that of YouTube, the online video platform owned by Google’s parent company Alphabet. At the time of writing, YouTube is estimated to be the second most visited website on the internet (behind Google), and the one where users spend the most time on average among the top 50 most visited websites globally; which makes it by far the most popular online video site worldwide. Because of all that, it’s highly problematic that YouTube’s content recommendation algorithm has been consistently and repeatedly found to favour and promote extreme content, and to lead users step by step from more moderate videos to radical ones that are known to spread false information.
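
YouTube’s actual recommender is proprietary, but a toy simulation can show how that step-by-step drift works if we assume, as the research above suggests, that slightly more extreme versions of a video score slightly higher on engagement. Everything below is a made-up model, not YouTube’s system:

```python
# A hypothetical drift model: each recommendation is assumed to skew a
# little more extreme than the current video, because extremity is
# assumed to correlate with engagement. Not YouTube's actual algorithm.
def next_video(extremity: float) -> float:
    """Extremity of the top recommendation, on a 0.0-1.0 scale."""
    return min(1.0, extremity * 1.3 + 0.05)

extremity = 0.1  # start from a fairly moderate video
for step in range(1, 9):
    extremity = next_video(extremity)
    print(f"click {step}: extremity {extremity:.2f}")
# Within half a dozen clicks the chain saturates near the extreme end.
```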

Something similar may happen to users of podcast platforms like Spotify, whose content curation and recommendation algorithm is known as BaRT. As described in the entry about BaRT in the OASI Register, a recent case showcased the dangers brought about by algorithmic curation and recommendation: the Spotify application kept recommending the podcast The Joe Rogan Experience, which was known for delivering misinformation about Covid-19, because its high number of followers made it a good source of advertising revenue for Spotify.

Due to the reasons described above, all social media platforms, as well as any other online service where users –people, institutions, companies and other actors– can share content and reach other users, need to be able to moderate that content, so that they don’t allow illegal or otherwise undesired items to proliferate.

However, moderating is easier said than done, in large part because of the sheer volume of content being shared through many platforms, and because of the speed at which content is produced, distributed, shared and consumed. That makes automated algorithmic moderation sound like the obvious solution to many companies. And while tech giants like Facebook and Twitter can afford to develop their own algorithms and also hire many human moderators, smaller companies may need to rely on third-party moderating algorithms, which is the case of another entry in the OASI Register. The Finnish company Utopia Analytics developed an algorithm to automate the task of moderating content, which was bought and used by companies that needed to outsource that service. The algorithm may well be as well intentioned as other similar ones, and Utopia Analytics and its customers have been more transparent than most; even so, it proves to be insufficient when implemented in the real world, which means it too could be spreading misinformation.
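
Utopia Analytics has not published its model, so the following is only a toy illustration of why naive automated moderation falls short: simple rules are easy to evade and blind to context. The blocklist phrases are invented for the example:

```python
# A toy moderation filter, invented for illustration; it is not Utopia
# Analytics' method. It shows a structural weakness of rule-based
# moderation: trivial misspellings slip through, and context is ignored.
BLOCKLIST = {"miracle cure", "proven hoax"}  # illustrative phrases only

def moderate(post: str) -> str:
    text = post.lower()
    if any(phrase in text for phrase in BLOCKLIST):
        return "held for review"
    return "published"

print(moderate("This miracle cure ends the pandemic!"))     # held for review
print(moderate("This m1racle kure ends the pandemic!"))     # published anyway
print(moderate("Beware of anyone selling a miracle cure"))  # a debunking post
                                                            # gets held too
```

Machine-learning moderators improve on fixed rules, but they inherit subtler versions of the same failure modes, which is why even well-built systems prove insufficient in the real world.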

But it’s not only social media: even something as common as searching the internet by using Google, which indeed handles more than 90% of all internet searches, may fall prey to misinformation. Google Search is the object of two entries in the OASI Register: one about the original PageRank algorithm developed in 1997 by Google co-founders Larry Page and Sergey Brin, and another, more general one about the whole Google Search service. Here, the issue is in essence the same: because of the way rating and ranking algorithms work, when used to search the internet they may end up offering systematically biased results that discriminate against particular social groups, and creating echo chambers by making some alternative sources of information effectively invisible; all of which may lead to disseminating misinformation.
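
Unlike most systems discussed here, the original PageRank algorithm was published, so its core idea can be sketched directly: a page’s importance is, roughly, the chance that a “random surfer” who keeps following links ends up on it. Below is a minimal power-iteration version; today’s Google Search layers a large number of additional signals on top of link analysis:

```python
# A minimal implementation of the original PageRank idea: rank flows
# along hyperlinks, damped by the probability of jumping to a random page.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```

Note what is being measured: how heavily a page is linked to, not whether what it says is accurate. Popularity-based ranking is exactly what allows well-connected but misleading sources to rise to the top.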

While misinformation is a negative phenomenon per se, there are some instances in which it can be especially harmful, as is the case with public health, and as was shown during the Covid-19 pandemic. There have been many documented examples of social media and other online media platforms spreading misinformation about Covid-19. And the OASI Register contains one quite particular case: IATos, an algorithmic system designed to detect whether someone has Covid-19 by analysing an audio recording of that person coughing. Developed by the municipality of Buenos Aires in Argentina, IATos worked by analysing audio messages sent over WhatsApp and responding to the user with a recommendation to get tested for Covid-19 or not. When researchers looked into the algorithm, they found it unreliable, which means it may have been contributing to spreading false beliefs about Covid-19 symptoms and about how to get a proper diagnosis.
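
IATos’s model has not been published, so it can’t be reproduced here; but a small synthetic experiment shows what “unreliable” means in practice for this kind of classifier. If the features extracted from the audio carry no real signal about the disease, the system’s answers are no better than a coin flip; the data below is random by construction:

```python
# A synthetic illustration, not IATos itself: train a classifier on
# random "audio features" and random labels, and measure how well it does.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))    # stand-in for per-recording audio features
y = rng.integers(0, 2, size=200)  # stand-in labels: Covid-19 yes/no

model = RandomForestClassifier(random_state=0)
accuracy = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")  # close to 0.5: chance level
```

A system like this would still confidently answer every user, which is precisely how an unreliable classifier ends up spreading false beliefs at scale.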

Finally, another particular algorithm in the OASI Register is Nutri-Score, a nutrition labelling initiative that ranks food and beverages based on their nutritional composition. The algorithm, which is very simple and fully transparent and auditable, generates a score for every food product: from A (healthy) to E (unhealthy). The issue with Nutri-Score is different from all the other cases in this article. Here, the problem is that Nutri-Score is an algorithm designed to produce a very simple value judgement of how good or bad some food is for people’s health. But human nutrition is such a complex phenomenon, and what may be a good, bad or average diet for any given person depends on so many individual factors, that attempting to define whether one single food or drink product is “good” or “bad” in general is basically meaningless. And that’s why Nutri-Score, too, may be inadvertently spreading false information among the general public.
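
Because Nutri-Score’s method is public, its overall shape can be sketched: “negative” points (for energy, sugars, saturated fat and salt) minus “positive” points (for fibre, protein and fruit or vegetable content), mapped to a letter. The thresholds below follow the general published scheme but are simplified for illustration:

```python
# A simplified sketch of the Nutri-Score idea; the official algorithm
# uses detailed per-100g point tables, and the thresholds here are
# illustrative rather than the regulation's exact values.
def nutri_score_letter(negative_points: int, positive_points: int) -> str:
    score = negative_points - positive_points
    if score <= -1:
        return "A"
    if score <= 2:
        return "B"
    if score <= 10:
        return "C"
    if score <= 18:
        return "D"
    return "E"

# The whole complexity of how a product fits into an individual's diet
# is collapsed into a single letter.
print(nutri_score_letter(negative_points=12, positive_points=4))  # "C"
```

The simplicity that makes the algorithm transparent and auditable is the very thing that makes its verdicts misleading: one letter cannot capture what a food means within any particular person’s diet.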

Hence, disseminating misinformation is not only a problem connected to content recommendation algorithms that promote sensationalistic or politically extreme articles. Any attempt at oversimplifying the complex human world by using automated algorithms runs the risk of systematically spreading false information.

Because of those (and other) reasons, Eticas will keep advocating for algorithms to be explainable, so that people affected by their use can understand how the algorithms work, so that we can have informed public conversations about them, and so that we can hold those responsible to account. And Eticas Foundation’s OASI project is an effort in that direction: you can read more about algorithms and their social impact on the OASI pages, and in the Register you can browse an ever-growing list of algorithms sorted by different categories. And if you know about an algorithmic system that’s not in the Register and you think it should be added, please let us know.

Written by Jose Miguel Calatayud