AI and Disinformation in the Russia-Ukraine War

The first TikTok war

The war between Ukraine and Russia has shown how AI and social media can be weaponized for disinformation. On platforms such as Facebook, TikTok, and X, AI-powered bots and deepfake applications are used both to spread disinformation at the international level and, increasingly, to counter it. This piece analyzes how such AI-powered deepfakes and bots operate within social media platforms and how they shape the construction of narratives.

AI tools can perpetuate disinformation, but how these technologies can be used to fight it is still being studied. In a world led by AI and social media, information wars are the new contest between states. As the Russia-Ukraine war shows, the modern battlefield is no longer only land and water: social media platforms like TikTok and Facebook have become the new “mass media” and are used extensively to push false propaganda. Individuals and nations alike are now contending with AI-synthesized content, which makes it increasingly difficult to decipher the truth of events.

Russia and Ukraine have each used social media strategically to shape narratives about the war. Russia frames the war as a fight against NATO expansion and the “de-Nazification” of Ukraine. Ukraine, in turn, casts itself as a country under siege, battling for its sovereignty. This contest is not limited to the primary actors: countries such as China and Belarus have also taken part in disinformation campaigns that downplay Russia’s role and push an anti-Western narrative.

The war between Russia and Ukraine, which began in 2022, has often been described as the world’s first “TikTok war,” owing to the sheer volume of war-related content the platform’s algorithm pushes into users’ feeds. The amount of war-related content circulating on social media is overwhelming: during the very first week of the war, TikTok videos tagged #Russia and #Ukraine reached 37.2 billion and 8.5 billion views worldwide, respectively. The wide availability of opposing narratives makes it increasingly complex for social media platforms to mitigate the spread of disinformation.

TikTok, in particular, has struggled with moderation, as controlling content on the platform has proven difficult, and misinformation prevails as a result. Within minutes of creating an account, new users can be fed demonstrable falsehoods about the war. Facebook has likewise failed to contain misinformation and disinformation: 80% of posts pushing the conspiracy theory that the United States funded bioweapons labs in Ukraine carried no warning label.

Russia has used bots to promote false stories, as became apparent when Twitter deleted 75,000 fake profiles that disseminated Russian disinformation. Numerous deepfake videos and fabricated speeches of Vladimir Putin and Volodymyr Zelenskyy have circulated to mislead audiences and shift perceptions of the war.

Anton Demokhin, Ukraine’s deputy foreign minister, believes that Russian AI-driven deception campaigns are becoming a more severe threat worldwide. AI has been used to manipulate public perception across the globe, including attempts to influence elections in the United States. Facebook, Twitter, and other social media platforms have developed AI systems, such as Facebook’s SimSearchNet, to detect and remove misleading material.
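
To illustrate the general idea behind such image-matching systems, the sketch below uses a simple perceptual hash to flag re-uploads of an already debunked image. It is a minimal illustration only, not SimSearchNet’s actual approach; the file names, labels, and threshold are hypothetical.

```python
# Illustrative sketch only: a toy perceptual-hash matcher in the spirit of
# image-matching systems such as SimSearchNet. Production systems rely on
# learned embeddings and far more robust matching; the file names below are
# hypothetical placeholders.
from typing import Dict, Optional

from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size grayscale thumbnail and set one bit per
    pixel depending on whether it is brighter than the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def match_known_fake(candidate_path: str,
                     known_fakes: Dict[str, int],
                     threshold: int = 10) -> Optional[str]:
    """Return the label of a previously debunked image if the candidate's
    hash is within `threshold` bits of it, otherwise None."""
    h = average_hash(candidate_path)
    for label, fake_hash in known_fakes.items():
        if hamming(h, fake_hash) <= threshold:
            return label
    return None


# Hypothetical usage (file names are placeholders):
# known = {"debunked_surrender_still": average_hash("debunked.jpg")}
# print(match_known_fake("new_upload.jpg", known))
```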

Nonetheless, AI moderation still struggles to catch novel disinformation patterns before they reach a broad audience. A concerted effort on March 2, 2022, to push the hashtags #IstandwithPutin and #IstandwithRussia generated 23 million tweets targeting audiences in South Africa, Ghana, and Nigeria. More than 1,500 inauthentic accounts amplified such content on Twitter, Facebook, and LinkedIn, and researchers have tracked 240 Russia-Ukraine disinformation sites operating in various languages, including English, French, German, and Italian.
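
To give a flavour of how such coordinated pushes can be surfaced, the following sketch flags clusters of near-identical posts made by many recently created accounts inside a short window. The heuristic, field names, and thresholds are illustrative assumptions, not the method actually used by any platform or research group.

```python
# Assumption-laden sketch of one heuristic for spotting coordinated hashtag
# amplification: many recently created accounts posting near-identical text
# within a short window. Field names and thresholds are invented for
# illustration.
from collections import defaultdict
from datetime import timedelta
from typing import Dict, List


def flag_coordinated_bursts(posts: List[Dict],
                            window: timedelta = timedelta(minutes=10),
                            min_accounts: int = 50,
                            new_account_age: timedelta = timedelta(days=30)) -> List[Dict]:
    """Group posts by identical (normalized) text and flag clusters posted by
    many distinct accounts inside a short window, mostly from new accounts."""
    clusters = defaultdict(list)
    for post in posts:  # each post: text, timestamp, account_id, account_created
        clusters[post["text"].strip().lower()].append(post)

    flagged = []
    for text, group in clusters.items():
        group.sort(key=lambda p: p["timestamp"])
        accounts = {p["account_id"] for p in group}
        span = group[-1]["timestamp"] - group[0]["timestamp"]
        newly_created = sum(
            1 for p in group
            if p["timestamp"] - p["account_created"] < new_account_age
        )
        if (len(accounts) >= min_accounts
                and span <= window
                and newly_created / len(group) > 0.5):
            flagged.append({"text": text,
                            "distinct_accounts": len(accounts),
                            "burst_span": span})
    return flagged
```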

Government institutions and social media companies have tried to implement policies to combat false narratives and disinformation. Some countries have blocked social media sites to curb the flow of disinformation, but people use VPNs to get around these measures. After the invasion of Ukraine, Russia saw over 400,000 VPN downloads each day, highlighting how little control there is over competing narratives.

New hurdles keep emerging from the AI revolution and this second wave of digital disinformation. Governments such as China’s and Russia’s have sought to control the narrative by building domestic social media platforms that allow for heavy censorship. AI and social media have dictated new forms of conflict, and disinformation is now a critical constituent of geopolitical power struggles.

The Russia-Ukraine war is a prime illustration of how AI-driven content manipulation, bots, and deepfakes can drastically alter people’s perceptions across the globe. While AI makes misinformation much harder to control, it also provides ways to detect and verify information. Ongoing cooperation between governments, technology companies, and civil society is needed to avert a slide into an Orwellian dystopia.

Muhammad Danial Ihsan
The writer can be reached at [email protected]
