A few months ago I decided to open an Instagram account. I am not entirely sure why I felt compelled to do so; perhaps it was a combination of intrigue and a desire to understand how the platform differs from Twitter and Facebook. Shortly after joining, Instagram began to curate a feed for me that comprised, amongst other things, clips of Andrew Tate talking about women in a disparaging fashion and conservative religious clerics cursing non-believers. A similar thing happened when I joined Facebook around 15 years ago: I was suddenly recommended content linked to extremist groups, encouraging me to be outraged about various issues. Facebook was clearly making assumptions about what a male of my age with my name would find interesting since, at the time, I had never searched for such content on any online platform.
Social media companies share a common business model: they need to maximise the number of users and the amount of time those users spend on their platforms. As such, they are incentivised to retain user attention by recommending and promoting the type of content that allows them to do just that. In the political context, this translates to bombarding users with content that reinforces existing biases to the point where obsessions develop. It also means promoting content that is ‘attention grabbing’, and that kind of content is often polarising, abrasive and a gateway to ever more extreme points of view. In other words, content that sends you down a rabbit hole of lurid conspiracies and paranoid political ideas is good for business.
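To make the incentive concrete, here is a deliberately simplified sketch of how an engagement-driven ranker works. This is not any platform’s actual code; the item names, numbers and predicted-engagement fields are all invented for illustration. The point is that the objective function knows nothing about truth or harm; it only predicts attention, so whatever holds attention rises to the top.

```python
# Toy illustration (hypothetical data): an engagement-maximising ranker.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # model's guess at how long users will watch
    predicted_click_rate: float     # model's guess at how likely users are to click

def engagement_score(item: Item) -> float:
    # The only objective: expected attention captured per impression.
    # Nothing here penalises polarisation or rewards accuracy.
    return item.predicted_click_rate * item.predicted_watch_seconds

feed = [
    Item("Calm, balanced policy explainer", 40.0, 0.02),
    Item("Outrage clip confirming your biases", 90.0, 0.12),
    Item("Opposing viewpoint, carefully argued", 35.0, 0.01),
]

# Rank purely by predicted engagement: the polarising clip wins every time.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):6.2f}  {item.title}")
```

In this toy model, any moderation would have to be bolted on after the ranking step, in direct tension with the metric the system is built to maximise, which is precisely the conflict described above.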
User attention is, of course, monetised through advertising, so more users spending more time on the platform means more advertising revenue. As the saying goes, ‘if you don’t pay for the product, you are the product’. This means content removal and moderation, or any action that limits user numbers or attention, is a direct threat to the business model. It also means Big Tech favours solutions that involve users spending even more time on the platform. The new digital public square is designed to prevent us from ever leaving it; like Hotel California, you can log out any time you like, but you can never leave.
When it comes to social media, the medium is the problem. That may sound like a Luddite position, but different mediums encourage different human behaviours, responses and traits. Books and newspapers stimulate different parts of the brain than television and radio do: the former require active engagement with words, sparking the imagination, whereas the latter are more passive and encourage absorption with less critical thinking. With social media, you are pushed to repeatedly consume tailored content that conforms to your existing biases and awakens your deepest fears, whilst not always having sight of content with the opposing view.
The medium by its very nature encourages polarisation, a lack of critical thinking, tribalism and a more emotional response to news and political messaging. As such, it is ideally suited to the cultivation of extremism, and that can have devastating consequences. In 2022, Amnesty International published a report alleging that Facebook’s algorithm substantially contributed to atrocities committed by the Myanmar military against the Rohingya people. The platform actively promoted content that encouraged hatred towards this beleaguered minority, and thousands were killed, tortured, raped and displaced in a campaign of genocide.
Big Tech companies have business models that encourage political extremism on one hand and a need to maintain a positive brand on the other. This tricky balancing act is further complicated by political biases amongst staff, which prevent them from acting as neutral arbiters of content, as well as by pressure from governments to clean up their act. Regulation, however, is a double-edged sword: when it is heavy, it discourages the emergence of new platforms and strengthens the monopoly of Big Tech; on the flip side, it can also mean fines and the loss of users, content and, thus, advertising revenue.
To deflect governmental pressure and temper public outrage, Big Tech has in recent years embarked on a series of measures designed to transform its image with regard to online extremism. It must be stressed from the outset that identifying and lessening the impact of extremist content online is not easy, and the difficulty is compounded by factors such as the lack of a clear definition of extremism, changing social attitudes about what is considered extreme and, of course, free speech. Furthermore, it is difficult to measure the impact of these activities, since researchers are not in a position to identify and interview users who have been reached by counter-extremism efforts.
However, putting aside these limitations, in my view many of the existing measures are ineffective and, in some cases, even counter-productive. For example, a project known as ‘The Redirect Method’ claims to serve ads promoting counter-extremist messaging to YouTube users who are searching for extremist content. Whilst such a project may have some limited utility, it is more likely to provoke a negative backlash in most users, especially when they know they are being targeted for their views. Researchers explain this phenomenon through ‘reactance theory’, and one 2020 study found that support for Jihadist extremism amongst those vulnerable to it actually increases when they are exposed to ‘counter-content’.
A 2018 study, conducted by the Mozilla Foundation, found that the YouTube algorithm was four times more likely to recommend further extremist content than counter-content to those performing ISIS-related searches. Since the algorithm that promotes content the platform thinks the user will like is never switched off, any counter-content is drowned out by more extremist content. Commenting on the study, Dr Hany Farid (a professor at UC Berkeley) said: “Algorithmic amplification is the root cause of the unprecedented dissemination of hate speech, misinformation, conspiracy theories, and harmful content online. Platforms have learned that divisive content attracts the highest number of users and as such, the real power lies with these recommendation algorithms.”
YouTube Shorts is even more dubious in this regard: a 2022 study conducted by the Institute for Strategic Dialogue found that it was actively promoting and recommending misogynistic and extreme anti-feminist content to accounts purporting to belong to teenage boys. It made no exception for underage accounts and was much quicker to promote extreme content than YouTube itself, with little evidence of any counter-content. This suggests the Red Pill/Manosphere movement is being actively nurtured by Big Tech.
Facebook’s attempts to crack down on QAnon-related conspiracy theorists have also shown abysmal results, according to a New York Times report. In 2020, after Facebook implemented a set of new rules specifically designed to crack down on accounts and pages supportive of QAnon messaging, content linked to the conspiracy movement actually increased. Researchers even found evidence that the Facebook algorithm was actively promoting websites hosting QAnon content to users. Worryingly, the same researchers cite this algorithmic promotion as one of the key catalysts for the global growth of QAnon.
Facebook’s rules around hate speech and incitement to violence do not seem to be very effective either. In a 2021 investigation, Global Witness submitted incendiary ads to Facebook, targeted across the sectarian divide in Northern Ireland. Despite clearly breaching Facebook’s own rules on hate speech and inciting violence, the ads were approved to run, and remained approved until the researchers withdrew them. Northern Ireland has seen a spike in sectarian tensions and violence in recent years.
Big Tech platforms have become a little better at removing content that is obviously illegal, since such content can (a) result in hefty fines and (b) lead to advertisers withdrawing their business, as happened in 2019 when the likes of HSBC and Vodafone pulled ads from Google because they were being displayed alongside extremist content. However, content does not need to be illegal to have a radicalising effect. With algorithms that promote political extremism and regulation that cannot even prevent incendiary advertising, the radicalisation of users is likely to continue unabated.
Ultimately, large corporations are by their very nature psychopathic, in that they are most successful when they act without empathy, remorse or pity in pursuit of their goals. As such, real-world outcomes are secondary to the quest for profit, and any hurdles that stand in the way must be dealt with in a manner that minimises disruption to the business model. Expecting Big Tech to meaningfully challenge online extremism is like asking McDonald’s to tackle childhood obesity: the social harm is a by-product of the core business model. Fast food restaurants are never going to stop selling junk food, and Big Tech platforms are never going to switch off algorithms that promote polarising and extreme content, so what can be done?
In my view, the solution lies in offline activities. We need to use schools and youth clubs to teach young people critical consumption skills, to engage people in activities that expose them to a wider array of perspectives in the real world, and to encourage everyone to spend less time online overall. We also need to help parents get better at regulating what their children do online and to play a more active role in discussing the content they are exposed to. We need to nurture a culture in which the dangers of mindless scrolling are well known and widely discussed; we need to know what we are dealing with. Ultimately, we need to build more resilience to this brave new digital world in which our attention is being unscrupulously exploited for commercial gain by entities that are largely oblivious to the social consequences.