Understanding propaganda in the age of automation and algorithms

In his exploration of how propaganda has evolved in the digital age, Samuel Woolley, a leading authority on the subject, highlights a disconcerting trend. He observes: “Today we deal with cyborg social media accounts run partly by people, partly by automated computer code—and now we seem to be approaching another evolution in propaganda, with sophisticated AI-enabled bots beginning to play a role in the manipulation of political information streams.”

The bottom line is that propaganda used to be a tool for gaining control, employed primarily by states through mainstream media such as news channels, radio, and newspapers. The perpetrators of propaganda were relatively well known. This traditional understanding of propaganda has changed, and not for the better. The evolved nature of propaganda can be attributed to the rise of automation and algorithms, both of which have left indelible marks on nearly every aspect of modern-day life.

In the wake of automation and algorithms, propaganda has become more sophisticated, murky, and complicated. It has become what many scholars describe as computational propaganda: a form of propaganda that relies on algorithms, automation, and human curation to purposefully distribute misleading information over social media platforms. Three keywords emerge from this description: social media, automation, and algorithms.

Let us take up the first keyword: social media. Computational propaganda relies on social media platforms as its primary medium of dissemination. Unlike traditional forms of propaganda, which often utilise mainstream media, computational propaganda thrives in the digital ecosystem provided by these social networking sites. With their massive user bases and capacity to spread information at breakneck speed, these platforms are an attractive choice for propagating misleading or manipulative content.

Social media sites are also the preferred choice for computational propaganda because of their automation capabilities and algorithms. Automation leverages technology to perform specific tasks with minimal human supervision. A bot is an illustrative example of a software tool designed to execute autonomous actions. In the context of computational propaganda, bots are automated social media accounts designed to mimic human behaviour, disseminating disinformation and engaging in online conversations with minimal human oversight. These digital agents have become powerful tools for spreading computational propaganda, particularly on platforms like X (formerly Twitter), which is highly receptive to bots and automation.
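To make this concrete, the following is a minimal, purely illustrative Python sketch of the automation pattern described above. It calls no real platform API: the post function is a hypothetical stand-in that merely prints, and the randomised delays and wording tweaks are assumptions about how such a bot might try to mimic human posting behaviour.

```python
import random
import time

def post(account: str, message: str) -> None:
    """Hypothetical stand-in for a platform API call; it only prints."""
    print(f"[{account}] {message}")

# Canned talking points the bot cycles through.
TALKING_POINTS = [
    "Candidate X will fix the economy!",
    "Don't trust the mainstream media on this.",
    "Everyone I know is voting for Candidate X.",
]

def run_bot(account: str, posts: int) -> None:
    """Post scripted messages with human-like, irregular behaviour."""
    for _ in range(posts):
        message = random.choice(TALKING_POINTS)
        # Small wording variations help evade naive duplicate detection.
        if random.random() < 0.5:
            message += " #election"
        post(account, message)
        # Randomised delays mimic a human's uneven activity pattern.
        time.sleep(random.uniform(0.1, 0.5))

run_bot("bot_account_01", posts=5)
```

The point of the sketch is how little code is involved: a single script like this, replicated across hundreds of accounts, amounts to the “minimal human supervision” the definition refers to.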

Another critical aspect of computational propaganda involves exploiting social media platforms’ algorithms. Algorithms, such as X’s trending algorithm, determine which information becomes popular among users and then promote that information. Digital propagandists exploit these algorithms by artificially boosting content through networks of bots.
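As a rough illustration of why this works, here is a small Python simulation. The trending logic shown, counting how many distinct accounts used each hashtag in the current window, is a deliberately naive assumption rather than any platform’s actual algorithm; it simply demonstrates how a burst of coordinated bot posts can outweigh organic activity.

```python
from collections import Counter

# Simulated posts as (account, hashtag) pairs. The numbers are invented
# for illustration: 40 genuine users versus a 500-account botnet.
organic_posts = [(f"user_{i}", "#LocalNews") for i in range(40)]
bot_posts = [(f"bot_{i}", "#PushedNarrative") for i in range(500)]

def trending(posts, top_n=3):
    """Rank hashtags by the number of distinct accounts mentioning them."""
    accounts_per_tag = {}
    for account, tag in posts:
        accounts_per_tag.setdefault(tag, set()).add(account)
    scores = Counter({tag: len(accs) for tag, accs in accounts_per_tag.items()})
    return scores.most_common(top_n)

# The manufactured hashtag tops the list purely on manufactured volume.
print(trending(organic_posts + bot_posts))
# -> [('#PushedNarrative', 500), ('#LocalNews', 40)]
```

Under this simplified scoring, the coordinated network needs no persuasive content at all; sheer volume of accounts is enough to surface its hashtag.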

One might ask why any of this matters. The answer lies in the profound implications of computational propaganda. The manipulation of information on social media platforms, which boast billions of users globally, can significantly undermine democratic processes, including by influencing elections. It can erode public trust in institutions, fuel societal polarisation, and even incite violence. A prime example is the Cambridge Analytica scandal, in which the firm allegedly harvested data from millions of Facebook users without their consent and used it to target them with personalised political propaganda during the 2016 US presidential election in an attempt to sway its outcome.

The scandal is a stark reminder of the potential harm caused by computational propaganda and underscores the need to counter it. A significant challenge, however, lies in the reluctance of social media companies, the primary stakeholders, to fully address the issue. Their business models thrive on high volumes of engagement and content, regardless of veracity or quality, creating a conflict of interest when it comes to combating misinformation. But this is not to say that nothing can be done.

To counter this evolving threat, states need to empower individuals with strong media literacy skills, for instance through educational initiatives such as short courses or workshops. Such training would enable citizens to critically assess information sources and differentiate between fact and opinion.

Moreover, governments and tech companies must collaborate on stricter regulations for data collection and usage, ensuring greater transparency in online political advertising and holding platforms accountable for their algorithms’ impact. Failure to comply with these regulations should carry tangible consequences, including temporary bans in extreme cases. Additionally, since computational propaganda is a global issue, international cooperation is essential to establishing norms and coordinating efforts against this challenge. By implementing these strategies, we can build a more resilient information ecosystem that protects against manipulation and safeguards democratic values.

Azhar Zeeshan
The writer is a researcher at the Centre for Aerospace and Security Studies (CASS), Lahore, and can be reached at [email protected]
