How are algorithm biases affecting our Social Media experience?
Tanushree Vaish


Machine learning and AI are relatively new additions to the repertoire of terms and technological developments we need to be aware of in order to stay on top of all things social media. Nearly 48% of the world's population is on some form of social media, and given how important a part of our daily lives it occupies, it is better to know the ins and outs of these platforms and how they function.


For the uninitiated, machine learning is a method of data analysis that automates the building of analytical models by following an algorithm, or, to put it simply, a set of calculations. It is a branch of artificial intelligence built on the idea that systems can learn from data, identify patterns in that data, and make decisions with little or, in some cases, no human intervention.


Before the advent of algorithms, humans manually sifted through even big data; now that work is partially or fully undertaken by machines whose scale and statistical rigour promise unprecedented efficiency and speed. Algorithms analyse tremendous amounts of macro- and micro-data to influence decisions affecting people in a range of tasks: making movie and book recommendations, advertising the products someone is most likely to engage with, and even helping banks determine the creditworthiness of individuals.


Consider the amount of data we humans produce every day in 2021. Roughly 90% of the world's data has been created in the last two years, and some 2.5 quintillion bytes of data are created every single day. The total amount of data created, captured, copied, and consumed globally was estimated at around 64.2 zettabytes in 2020, and the volume of data is expected to double every two years. Artificial intelligence and machine learning, through algorithms, make it much easier to make sense of this humongous quantity of data.


What are algorithm biases?

Humans are known to have biases: we are not only error-prone but can also be inconsistent and subjective. However, even though automating data analysis may be more efficient, that does not mean algorithms are necessarily better. In machine learning, algorithms rely on data sets known as training data, which specify the correct outputs for some set of people or objects. From that training data, the algorithm learns a model that can be applied to other people or objects to predict which output would best suit them.
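To make the idea concrete, here is a minimal sketch of that training-data-to-prediction loop: a toy nearest-neighbour "model" that labels a new user by copying the label of the most similar user in its training data. The feature names, numbers, and genre labels are all invented for illustration.

```python
# A minimal sketch of supervised learning: labelled training data in,
# predictions for unseen examples out. All data here is made up.

def nearest_neighbour(train, query):
    """Predict the label of `query` by copying the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical training data: (hours_watched, clicks) -> preferred genre
training_data = [
    ((10, 2), "documentary"),
    ((1, 15), "comedy"),
    ((9, 3), "documentary"),
    ((2, 12), "comedy"),
]

print(nearest_neighbour(training_data, (8, 4)))   # -> documentary
print(nearest_neighbour(training_data, (1, 14)))  # -> comedy
```

Notice that the model has no opinion of its own: whatever patterns, gaps, or skews exist in the training data are exactly what it reproduces.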


These systems, however, can be biased based on who builds them, how they are developed, and how they are ultimately used. This is what is commonly known as algorithmic bias. It is tough to figure out exactly how a system acquires an algorithmic bias, especially since this technology is often the product of a corporate black box.


Bias in algorithms can emanate from inaccurate, unrepresentative, or incomplete training data, or from dependence on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions with a collective, disparate impact on certain groups of people, even without any intention on the programmer's part to discriminate. We often do not know how a particular artificial intelligence or algorithm was designed, what data was used to build it, how it works, or what its specific markers are. Yet this technology is already making major decisions about our lives: deciding which political advertisements you see, how police officers are deployed in your neighbourhood, how your application for your dream job is screened, and even predicting your home's risk of fire.
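A toy example shows how "flawed information that reflects historical inequalities" propagates. If a model simply learns the most common historical outcome per group, it faithfully reproduces the skew in its training data. The groups, counts, and decision rule below are entirely invented for illustration.

```python
from collections import Counter

# Hypothetical historical decisions: (group, approved). The skew in this
# made-up data stands in for flawed, historically unequal training data.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train_majority_rule(records):
    """Learn the most common outcome per group -- and, with it, the bias."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_rule(history)
print(model)  # {'A': True, 'B': False}
```

No one told this "model" to discriminate; it simply learned that group A was historically approved and group B was not, and will now apply that rule to otherwise identical applicants.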


How does algorithm bias affect Social Media?

These algorithms can, very simply, be called personalisation technologies: they are designed to display the content a user is likely to find most engaging and relevant. But in doing so, they may end up reinforcing the cognitive and social biases of users, the agenda-driven bias of a third party putting out triggering or inciting content, or both, making users more vulnerable to manipulation and discrimination. A simple example is how the very detailed advertising tools built into many social media platforms enable the dissemination of false or polarised information, exploiting confirmation bias by tailoring messages to people who are already inclined to believe them.
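The targeting step in that example can be sketched in a few lines: pick out only the users whose profiles suggest they already agree with a claim. The user profiles, the `agreement_score` field, and the threshold are all hypothetical stand-ins for the much richer signals real ad platforms use.

```python
# A toy targeting sketch: deliver a claim only to users who likely already
# agree with it, exploiting confirmation bias. All data here is invented.

users = {
    "ana":  {"agreement_score": 0.9},
    "ben":  {"agreement_score": 0.2},
    "cara": {"agreement_score": 0.7},
}

def target_audience(users, threshold=0.6):
    """Return the users predicted to be receptive to the message."""
    return sorted(name for name, profile in users.items()
                  if profile["agreement_score"] >= threshold)

print(target_audience(users))  # ['ana', 'cara']
```

Skeptical users like "ben" never see the message at all, so it spreads among believers without ever being challenged.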


This so-called "filter bubble" effect may isolate people from diverse perspectives, strengthening confirmation bias. Algorithmic bias also tends to reinforce a homogeneity bias: social media platforms work primarily with the whole platform in mind, not a single user, so the data they collect is categorised using a few specific markers that apply to all users alike. A lot of what a user sees on their feed also depends on what is trending or viral at that moment, reinforcing what can be called a popularity bias. These algorithmic biases, more often than not, are manipulated and controlled to a large extent by bot accounts. While most bots are considered harmless, some hide their real nature and can be used for malicious activities such as boosting and spreading misinformation or drowning out useful and accurate information through trolling.
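Popularity bias is easy to see in a toy feed ranker that scores posts by engagement alone: whatever is already popular gets ranked highest, earns more exposure, and becomes more popular still. The posts, counts, and scoring weights below are invented, and real ranking systems use far more signals than this.

```python
# A toy feed ranker that scores posts purely by engagement -- a minimal
# sketch of popularity bias. All posts and numbers are made up.

posts = [
    {"id": "viral-meme", "likes": 9000, "shares": 1200},
    {"id": "local-news", "likes": 40,   "shares": 5},
    {"id": "fact-check", "likes": 120,  "shares": 10},
]

def rank_feed(posts):
    """Order posts by a simple engagement score (shares weighted double)."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"],
                  reverse=True)

for post in rank_feed(posts):
    print(post["id"])  # viral-meme, fact-check, local-news
```

This also shows why bots matter: a bot farm only needs to inflate likes and shares to push a post up a ranking like this one.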

Evidence of this type of manipulation was found during the 2016 U.S. presidential campaign, when researchers found many bots exploiting the cognitive, confirmation, and popularity biases of their victims, as well as Twitter's algorithmic biases.


Traversing the entire ambit of social media can be scary, and it can sometimes feel like a loss of agency, as if you are only a cog in a grand scheme that is complex and prone to manipulation. It is therefore of utmost importance to discover how these different biases interact with each other. To do so, we will have to consider not only technological solutions but also the cognitive and social aspects of the problem.


