There was a curious, and not so subtle, detail at the recent inauguration of US President Donald Trump that certainly got my attention. As he was being sworn in, a row of Big Tech leaders stood right behind him overseeing proceedings. Mark Zuckerberg, Jeff Bezos, Sundar Pichai, Elon Musk and Tim Cook were all there to show their support and to make it clear which side they were on. This raises a couple of questions – why were they so keen to show support for the incoming president and, given that they are all businessmen, what is in it for them?
For context, it is important to note that pressure to regulate Big Tech platforms more effectively has been building over the past few years. In late 2022, the European Union passed the Digital Services Act (DSA), which requires Big Tech platforms to remove illegal content and disinformation within a stipulated timeframe. It also seeks to limit targeted advertising, with platforms having to clearly label ads and provide information about how targeted content is directed at users.
Britain has passed similar legislation, the Online Safety Act 2023, which comes into force in March of this year. This legislation requires social media platforms to verify the age of users, remove illegal content within a given timeframe and provide more transparency about the types of content they allow. Australia has passed similar laws, going one step further and setting the minimum age for social media accounts at 16. These laws also carry hefty penalties for non-compliance; under the DSA, a company can be fined up to 6% of its annual turnover.
Needless to say, Big Tech leaders are not happy with this emerging regulatory landscape, which they view as a threat to their business model. After all, any measures that reduce user engagement, or limit user numbers, mean less advertising revenue and, thus, lower profits. Algorithmic transparency, in particular, is a huge threat to the dubious business model of many Big Tech platforms, which rely on targeted content placement to accelerate user engagement – even if that content is harmful.
During his first presidency, Trump was a fierce advocate for Big Tech’s interests, despite his later public clashes with social media companies. Under the guise of protecting American businesses, his administration repeatedly used trade threats to pressure countries into softening their online harms regulations. The US Trade Representative (USTR) frequently intervened in legislative debates in Europe, Australia, and elsewhere, arguing that proposed online safety laws unfairly targeted American firms like Facebook, Google, and Twitter.
Since Trump took power, the UK has announced that it is willing to water down the Online Safety Act 2023 in order to secure a trade deal with the US. Vice-President J.D. Vance, speaking at an AI summit in Paris a couple of days ago, said: “The US innovators of all sizes already know what it’s like to deal with onerous international rules. Many of our most productive tech companies are forced to deal with the EU’s Digital Services Act and the massive regulations it created about taking down content and policing so-called misinformation.” As global leaders listened with concern, it became abundantly clear why Big Tech has been so keen to back the Trump administration.
Numerous studies, including those commissioned by Big Tech themselves, have found that social media algorithms push users toward more extreme content over time. Facebook’s own internal research, leaked by whistle-blower Frances Haugen, revealed that its recommendation system promoted divisive and inflammatory posts because they drove higher engagement. A 2018 internal Facebook report found that 64% of users who joined extremist groups online did so because those groups were recommended by the algorithm. Likewise, YouTube’s algorithm has been found to steer users toward conspiracy theories and radical content, even when they start with neutral searches.
Moreover, this digital environment is particularly harmful to children, who are exposed to content designed to exploit their attention spans and emotions. Studies have linked excessive social media use to rising levels of anxiety, depression, and self-harm among young people. A 2022 study found that YouTube Shorts was actively promoting and recommending misogynistic content to accounts purporting to belong to teenage boys. This suggests the Red Pill/Manosphere movement is also being actively nurtured by Big Tech.
Beyond extremism, the broader issue is that our digital public sphere is controlled by private, profit-driven corporations. Social media platforms have become the modern equivalent of the town square, yet they are governed not by democratic principles but by commercial interests backed by the most powerful government in the world. This has dangerous consequences for social cohesion. When public discourse is shaped by algorithms that prioritize profit over truth or social well-being, it leads to fragmented societies where polarisation thrives, and trust between citizens erodes.
With regulatory efforts continuously undermined by Big Tech’s lobbying, and now by Trump’s trade pressure tactics, we are heading toward an even more unregulated digital space. The failure to rein in these platforms means that extremist content and harmful online environments will continue to thrive, unchecked. If governments remain unable to enforce meaningful regulations, especially those targeting algorithmic amplification, social media will only become more dangerous. Without decisive action, we risk further entrenching a digital landscape where harm is not just permitted but actively amplified. This will lead to the emergence of a digital Wild West that will further damage our already fragile social fabric.