The arrival of social media has changed how people consume, share and create information. It has given everyone with access to the Internet, anywhere in the world, a platform to share their views, a place to engage with a never-ending flood of information, access to personalised content, and spaces to debate from a distance with those who hold different views.
In the past, beyond their personal networks, people got information from a limited number of sources: printed materials, such as books, newspapers and magazines, the radio and television. This meant a small group shaped the information people consumed and made it difficult for those without access to those organisations and corporations to challenge dominant narratives. However, it also meant that everyone had access to similar information, which made it easier to construct a collective story and a shared sense of reality.
Before the Internet and globalised communication systems, people were also more closely connected to those around them who held different perspectives – they belonged to the same local or national community – so, despite their differences, people could also see their similarities.
This provided more of a social glue: disagreements tended to be managed more politely and people were not defined solely by their views.
The Internet and social media have brought in a new era: anyone can create content for others and there are countless online information platforms. This has made the sharing of information more democratic, has allowed anyone to express their views to people everywhere, and has meant that people can seek out information that aligns with their interests.
In this new information space, what people think has become an important part of their identity, of how they perceive themselves and how others perceive them.
These changes have also allowed misinformation to flourish in new ways. While some of the information available on these platforms comes from mainstream news outlets, which tend to follow codes of practice that prioritise accuracy, truth and impartiality, other content comes from bloggers, influencers, amateur journalists and alternative outlets, which do not necessarily abide by the same codes.
Moreover, unlike most traditional news outlets, social media companies do not charge people for the content they consume. Instead, they make money through targeted advertising and, therefore, have a vested interest in keeping people engaged on their platforms for as long as possible.
Everything an individual does online leaves a digital footprint in the form of data. Tech companies use that data to build psychological profiles of their users and their interests, so that they can target them with relevant content and adverts using algorithms. The more time you spend online, the more behavioural data you generate, the more effective the targeting becomes, and the more money tech companies can make from the adverts you see. Targeted advertising goes beyond pushing people towards buying certain products; it can also push people towards certain behaviours and political viewpoints – a political advert might encourage someone whose psychological data profile suggests they are undecided on who to vote for in an election to vote a certain way, or not to vote at all.
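The targeting loop described above can be illustrated with a toy sketch. All the data structures, names and weights here are hypothetical, invented for illustration – no real platform's system is being reproduced – but the principle is the same: an interest profile is built from a user's behaviour, and adverts are scored against it.

```python
# Toy illustration of interest-profile ad targeting.
# All names, tags and weights are hypothetical, not any real platform's system.

from collections import Counter

def build_profile(clicked_topics):
    """Build a simple interest profile: topic -> relative frequency of clicks."""
    counts = Counter(clicked_topics)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def score_advert(profile, advert_tags):
    """Score an advert by how strongly its tags overlap the user's profile."""
    return sum(profile.get(tag, 0.0) for tag in advert_tags)

def pick_advert(profile, adverts):
    """Select the advert with the highest profile match."""
    return max(adverts, key=lambda ad: score_advert(profile, ad["tags"]))

clicks = ["football", "football", "politics", "cooking", "football"]
profile = build_profile(clicks)

adverts = [
    {"name": "boots", "tags": ["football", "sport"]},
    {"name": "blender", "tags": ["cooking"]},
    {"name": "campaign", "tags": ["politics", "election"]},
]
best = pick_advert(profile, adverts)
print(best["name"])  # the football-related advert matches most of the clicks
```

The same scoring logic works whether the "advert" sells boots or promotes a political message, which is the point the paragraph above makes: the mechanism is indifferent to what is being targeted.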
To filter through the mass of online content and to try to keep people engaged online longer, tech companies use algorithms that promote clickbait and shocking content (people are more likely to click on/engage with content that generates an emotional response, particularly outrage), and that make it easy for people to find content that aligns with their interests.
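A minimal sketch of the kind of ranking just described might look as follows. The scoring formula, field names and weights are invented for illustration: each post receives a score combining a predicted emotional response with similarity to the user's interests, and the feed is sorted by that score.

```python
# Toy engagement-maximising feed ranker.
# Fields and weights are illustrative assumptions, not a real platform's formula.

def rank_feed(posts, user_interests, outrage_weight=2.0, interest_weight=1.0):
    """Sort posts so emotionally charged, interest-matching content rises to the top."""
    def score(post):
        emotional = post["outrage"]  # predicted emotional response, 0..1
        matching = len(set(post["topics"]) & user_interests) / max(len(post["topics"]), 1)
        return outrage_weight * emotional + interest_weight * matching
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "outrage": 0.1, "topics": ["gardening"]},
    {"id": 2, "outrage": 0.9, "topics": ["politics"]},
    {"id": 3, "outrage": 0.3, "topics": ["politics", "economy"]},
]
feed = rank_feed(posts, user_interests={"politics"})
print([p["id"] for p in feed])  # the high-outrage, interest-matching post ranks first
```

Note that weighting the emotional signal more heavily than the interest match, as this sketch does, is exactly what produces the social costs discussed next: calm but relevant content is consistently outranked by provocative content.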
But these approaches can come at a social cost. Promoting content that triggers emotional responses can increase the likelihood of people seeing harmful content (long-term engagement on social media has been shown to be particularly harmful to the mental health of girls)
and make online spaces divisive and abusive, which impacts how people feel about, and engage with, those who have different views offline, fuelling polarisation.
Pushing people towards content that aligns with their interests means that they are not exposed to a diverse range of views and ideas: this can lead to people accessing completely different information from others, possibly becoming trapped in echo chambers, and to the emergence of competing versions of reality (the widespread popularity of the flat earth conspiracy theory highlights this risk). These opposing narratives can also undermine social cohesion.
As Bliuc and Chidley note,
Mutually exclusive narratives create conditions for the formation of ideologically opposed camps. These narratives create ideal conditions for intergroup conflict and polarisation in the form of psychological and ideological distancing because they are based on mutually exclusive versions of social reality which are connected to norms and behaviours that aim to achieve competing group goals.
People’s engagement with social media platforms can, therefore, shape their understanding of the world, make them reject ideas that differ from or challenge their sense of reality, and make them more suspicious of those who hold different views – all of which can contribute to societal polarisation.
Social media algorithms can therefore shape user interests; influence user behaviour through notifications, recommendations and suggestions; and affect the content people post: creators are incentivised to post shocking content because it is more likely to be engaged with and shared.
Tech giants have also made their platforms incredibly addictive in a bid to keep people online. Scrolling on social media platforms, and getting likes, views and notifications, releases dopamine, a brain chemical that makes us feel good.
The stimulation that apps offer can get people addicted to seeking a dopamine release, but, after that release, there is a ‘comedown’, which can drive us to seek more dopamine. Moreover, the more we binge on these platforms, the less we feel the effects of dopamine and the more we need to scroll and/or post to get the same ‘fix’.
For over a decade, social media companies have not been regulated in the same way as traditional sources of information. In the UK, if the print media, television or radio share something false or harmful, they can be fined and/or required to apologise. The UK’s Online Safety Act 2023 requires social media companies to take more responsibility for the content they show young people, or risk being fined,
but there is still a lack of transparency on how their algorithms work:
many argue that algorithmic secrecy must end to prevent user manipulation.
Others argue that, given their capacity to boost content and shape behaviour, algorithms should be designed to counter polarisation and division.
One way could be for them to spread unifying narratives that help bring people from different groups together.
As Bliuc and Chidley note,
Narratives achieving intergroup unification are based on consensus that goes beyond group boundaries, so they come to be shared across social categories and group memberships. Because they highlight what unifies us as humans, they are connected to identification with superordinate, non-polarising social categories such as ‘humanity’ and speak to unifying emotions, such as compassion and care for those vulnerable. Because, in most cases, these narratives are based on universal principles and pro-social beliefs connected to the survival of us as a species, they speak to the most basic human values around cooperation. As a result, these narratives incorporate aspects of social reality on which people across groups, social categories, and political fault-lines can all agree on [...].
Both the call for transparency and the call for using social media to spread unifying narratives have their appeal. However, because the latter raises its own ethical concerns about manipulating people, full transparency and tighter algorithmic regulation may ultimately be preferable.