Did the Pope endorse Trump? Did the late Queen vote for Brexit? Do 5G towers cause Covid-19? These are all recent examples of disinformation, a phenomenon that is now proliferating on social media and causing a moral panic. Some argue that the rise of disinformation is a major threat to democracy and political stability. Others argue that it is a political weapon used to shut down dissent. In this article, I would like to examine the nature of disinformation, assess whether it is really on the rise, and consider what can be done about it.
Firstly, it is important to clarify some terminology. Disinformation is best defined as false information that has been deliberately put into the public realm in order to mislead or to achieve a certain goal. It differs from misinformation, which is false information spread without the intention to mislead, and mal-information, which refers to truthful information put out in order to inflict harm. Disinformation is also referred to as ‘fake news’ and is closely related to ‘political spin’, the practice of presenting information with a particular slant in order to shape public opinion.
Secondly, it is important to point out that disinformation is not new; it has been with us since the dawn of civilisation. Political figures and governments have always sought to influence and control public narratives and put out information that furthers their interests. In the mid-1700s, when the Jacobite rebellion was at its height in Britain, seditious publishers printed reports that King George II was ill in an attempt to destabilise the establishment. These reports were eventually republished by others, making it difficult to tell fact from fiction. The Nazis, arguably, came to power on the back of disinformation by claiming that communists were planning a violent uprising in the aftermath of the Reichstag fire, with the subsequent emergency laws paving the way for Hitler’s rise. More recently, the Iraq War and Putin’s invasion of Ukraine were, arguably, launched on the back of disinformation campaigns.
However, the advent of social media has democratised disinformation and allowed for the mass production and proliferation of it on an unprecedented scale. Moreover, the speed at which disinformation now spreads can quickly outpace any efforts to counter it, thus the damage is often done before it can be fact-checked. Social media platforms alone generate an enormous amount of content every second, making verification and moderation practically impossible. Claims not only take time to be fact checked but sometimes can’t be checked at all, especially when they are built on a foundation of unverifiable claims and half-truths. Thus, it is very difficult to stop.
Identifying disinformation accurately is a herculean task since the line between opinion, satire and outright falsehood can be blurred. The challenge is to distinguish genuine freedom of expression and differing viewpoints from maliciously misleading information, and that is not easy, especially when virtually every political faction has peddled disinformation at some point. Artificial intelligence and machine learning algorithms are being utilised to assist in this process, but they too are not infallible and often inadvertently flag legitimate content as false or miss sophisticated disinformation campaigns.
The internet also transcends geographical boundaries, making it challenging for any single government to address disinformation effectively. Disinformation campaigns can originate in one country and target another, complicating enforcement and jurisdictional issues. Cooperation between governments and tech platforms across borders is essential, but achieving this can be hindered by differing legal frameworks and political interests.
Anonymity is another complicating factor. When it is afforded on the internet, it can enable malicious actors to disseminate disinformation without accountability. When it is denied, whistle-blowers and those who need to keep their identity concealed in order to speak out on an important issue lose their protection. Therefore, unmasking users or denying privacy rights does not straightforwardly aid the battle against disinformation, yet anonymity does pave the way for the proliferation of false and deliberately misleading information.
Social media fact-checking and censorship efforts are also likely to reflect the biases and interests of the platforms in question. A recent study found that Google has a strong left-wing bias in its news aggregation. Twitter, prior to the Elon Musk takeover, was accused of having a left-wing bias in its moderation policies and of being more censorious towards conservative accounts. Since Musk’s takeover, it has been accused of having a more right-wing bias and of amplifying conservative content. Mark Zuckerberg, in his now famous testimony to Congress, stated that ‘Silicon Valley is an extremely left-leaning place’, a statement that many interpreted as an admission of an inevitable and intractable political bias in his company.
Meta, along with other platforms, has also been accused of collaborating with government agencies in order to moderate content in line with the demands of the White House. This has become such a political hot potato in the US that a federal judge has now restricted the Biden administration from talking to social media platforms. The impact of this judgement is yet to be felt, but the damage has already been done in that Big Tech moderation efforts will always lack credibility. There are simply so many different interests at play that their policies will always appear opaque and driven by a commercial or political agenda.
The rise of artificial intelligence (AI) powered chatbots is also likely to encourage the spread of disinformation. AI bot accounts on social media could churn out propaganda on a daily basis and interact with human users in order to persuade them towards a particular narrative. Deepfakes could also be used to create content that passes as ‘real’ and can be circulated around the world before anyone has time to authenticate it. Therefore, the line between real and fake will become increasingly blurred and difficult to establish. Even if we could get Big Tech players to crack down on this, new social media platforms are emerging all the time and the monopoly of the old guard is weakening.
Furthermore, studies have shown that vulnerability to disinformation is positively correlated with political polarisation, access to a wide variety of news sources and a reliance on social media for news. In other words, the factors that increase vulnerability to disinformation are not only ubiquitous but modern and on the rise in most countries around the world. Thus, we are likely entering an age of more disinformation, not less.
Tackling disinformation is, therefore, very problematic given the kind of world we are creating and the ethical and philosophical questions that lie at the heart of this debate. Who gets to define it? How do we identify it in time? How do we remove it from the public sphere? Can we avoid accusations of bias? Can we trust those who engage in it themselves to tackle it on our behalf? There are no simple solutions to these questions so maybe we should take a different approach.
Humans are, in essence, pattern- and utility-seeking mammals, in that we gravitate towards that which seems to make sense and helps us to navigate our social environment rather than that which is true. In fact, entire civilisations have been built on the back of assertions that were not true but served a purpose. The Roman Empire was built on the foundational belief that the gods favoured the Romans due to their moral virtues.
Disinformation also offers a social utility in that it gives people a means to further their own political or social agenda. It is, thus, very natural and only deemed problematic in our eyes when it goes against our interests. As such, attempts to tackle disinformation become little more than a power struggle, and all such struggles are fought with a combination of fact and fiction. Ideally, we all want to live in a world in which we are only fed accurate information and not Orwellian doublespeak; I just don’t think that world is possible given human nature.
However, a number of studies have shown that those who are part of online echo chambers are more vulnerable to disinformation campaigns. This is because they are more likely to be blind to their own biases, since these are being reinforced by an online community on a regular basis. Having our biases confirmed is very rewarding, and so we are drawn to spurious headlines that do just that. In this way, disinformation finds an easy access point.
If we want to tackle disinformation, we can start by building our own intellectual resilience and fact-checking our own beliefs. Disinformation is effective because many of us live in online filter bubbles in which we are not exposed to a wide range of information and sources. We attach ourselves to a politically homogenous, and carefully curated, news feed and are then surprised when we are misled or lied to, but that is what happens when you put ideology and bias before reality and facts. If we are not objective and open-minded ourselves, we cannot expect to easily identify attempts to mislead us.
If we truly want to live in a world that is free from disinformation, we need to start by removing it from our minds first.