Deepfake Technology: A growing threat in Pakistan

The weaponization of fake nude and blasphemous videos of celebrities

Artificial Intelligence has revolutionized numerous industries, yet its darker applications pose significant challenges. One such alarming innovation is deepfake technology, which employs advanced AI algorithms to manipulate images, audio, and video, producing highly realistic but entirely fabricated content.

Deepfake tools have been exploited by malicious actors to craft deceptive and harmful material, often targeting celebrities, politicians, and public figures. The problem has become global, and in Pakistan the situation is particularly concerning. Such content gravely endangers personal reputations, careers, and even lives.

Globally, deepfake technology is recognized as one of the most severe threats to digital privacy and online security. Studies indicate that 96 percent of deepfake content online is pornographic, primarily targeting women. In Pakistan, the psychological, emotional, and social repercussions of deepfake exploitation are profound. Victims, particularly celebrities and high-profile individuals, often face public outrage, severe emotional distress, and, in extreme cases, physical violence. Law enforcement agencies and legal experts in the country are raising the alarm about the escalating weaponization of deepfake content. This crisis demands immediate legal reforms, public awareness campaigns, and technological innovation.

Deepfake content is becoming increasingly prevalent worldwide, driven by platforms and tools designed to simplify its creation. Over 96 percent of deepfake videos circulating online are pornographic, with celebrities the most common victims. By 2023, deepfake creation communities had over 609,000 members, each contributing to the proliferation of harmful content. Tools like DeepNude, which generate explicit content from basic facial images, have gained significant traction worldwide. As a result, deepfake technology has become a weapon for cybercriminals, disgruntled ex-partners, and others seeking to exploit people for personal or political gain.

Creating a deepfake video is surprisingly simple. A 60-second fake video can be generated from as little as a single image of the target, and takes only about 25 minutes to produce. The technology has become so accessible that individuals with minimal technical expertise can create convincing deepfakes using free or low-cost software. This democratization of the technology has fuelled its rapid spread.

The impact of deepfake technology in Pakistan has been particularly damaging, with a surge in fake explicit content and blasphemous videos circulating online. In 2023, the Federal Investigation Agency (FIA) received 1,180 complaints related to deepfakes and non-consensual intimate imagery. These figures likely represent only a fraction of the actual cases, as societal taboos prevent many victims from reporting them.

Women, especially those in the public eye, are disproportionately affected. Prominent female celebrities, influencers, and social media personalities are often the primary targets. In one high-profile case, a well-known actress in Pakistan found herself at the centre of a scandal when a deepfake video featuring her was shared across multiple social media platforms. The video caused public outrage and led to her temporarily withdrawing from the media spotlight. Such incidents not only harm the victims emotionally but also disrupt their careers and personal lives.

The use of deepfake technology for blasphemous content is another disturbing trend. Any content deemed offensive to religious sensibilities can spark public outrage and even violence. Deepfake videos that depict religious figures in compromising or offensive situations have become a tool for creating social unrest. These videos are often shared on social media platforms. The ease with which such content can be created and shared poses a direct threat to social harmony and peace.

The threat of deepfake abuse is not limited to Pakistan. High-profile incidents have occurred worldwide, with celebrities, politicians, and even ordinary individuals hit by malicious deepfake campaigns. One of the most notable international cases occurred in 2019, when a deepfake video of Indian politician Maneka Gandhi was circulated during the election period. The video depicted her making inflammatory remarks, leading to widespread public backlash.

In the USA, actress Scarlett Johansson has been vocal about her experiences with deepfake pornography, in which her likeness was used in explicit videos without her consent. Despite the emotional toll these incidents took on her, Johansson, like many others, was left without any legal recourse. Her case brought to light the significant legal gaps that exist in combating deepfake abuse and highlighted the need for stronger laws.

Another high-profile case occurred in 2018, when a manipulated video of former US President Barack Obama was released, showing him making derogatory remarks. While the video was part of a public awareness campaign designed to demonstrate the power of deepfake technology, it underscored the danger. The Obama video was particularly concerning because it revealed how easily deepfakes could be used to deceive and manipulate the public.

In 2020, South Korea faced a deepfake scandal involving prominent K-pop idols. Videos allegedly depicting idols in compromising situations spread rapidly across social media, sparking outrage among fans and the general public. Investigations later revealed that the videos were entirely fabricated, created using AI-based software.

The UK had a case in 2021, when a deepfake video of a leading opposition politician emerged during a heated parliamentary debate. The video, which falsely showed the politician making disparaging remarks about his constituents, went viral before being debunked. The damage to his reputation lingered even after the video was exposed as fake, reflecting the lasting consequences of deepfakes.

In another alarming instance, Dutch journalist Maria Genova reported in 2022 that her face had been superimposed onto explicit videos without her consent. These videos circulated widely. The journalist’s public struggle brought attention to the rising prevalence of deepfake attacks targeting professionals, particularly women.

China has faced its own controversies, particularly with fake videos being used in scams. In 2022, a deepfake of a tech company CEO was used to trick employees into transferring large sums to fraudulent accounts.

Pakistan faces several significant challenges. One of the most pressing issues is the lack of resources and technological infrastructure. The FIA Cybercrime Wing, while dedicated to investigating online crimes, is woefully underfunded and understaffed. Law enforcement agencies lack the necessary AI-driven detection tools to help identify the creators of deepfakes. As a result, many perpetrators operate with relative impunity.

Another major challenge is the social stigma. In Pakistan’s conservative society, such issues are often seen as taboo, and victims, especially women, are often blamed. This societal attitude prevents many victims from coming forward, fearing they will be shamed for being targeted.

Moreover, the lack of digital literacy and public awareness makes it difficult for individuals to recognize and protect themselves from malicious content. Many people are unaware of how deepfakes are created or how to verify the authenticity of videos and images online. This ignorance only serves to empower perpetrators, as they know that their fake content can easily go viral before it is debunked or removed.

To tackle the growing problem in Pakistan, a multifaceted approach is necessary. First, there needs to be a concerted effort to strengthen the legal framework. Pakistan’s Prevention of Electronic Crimes Act (PECA) 2016 provides a legal basis for addressing online crimes, but it does not specifically address deepfake technology. Lawmakers should amend PECA to explicitly criminalize the creation, distribution, and consumption of deepfake content, particularly when it involves non-consensual explicit material, defamation, or religious blasphemy. Additionally, Pakistan should establish clear legal guidelines for the prosecution of offenders.

Public awareness campaigns are also crucial. The public must be educated about the dangers of deepfake technology and how to recognize and report it. Digital literacy programmes should be introduced in schools, colleges, and communities. These campaigns should also focus on changing societal attitudes toward victims, particularly women, to ensure that they feel empowered to report incidents.

International collaboration is also key. Pakistan should work closely with INTERPOL and the UN to share best practices, access advanced tools for detecting deepfakes, and collaborate on international legal frameworks. Additionally, Pakistan should invest in AI-powered tools that can identify manipulated content.

Deepfake technology represents an alarming and rapidly evolving threat to digital privacy, social stability, and individual security worldwide. This technology undermines trust in media, disrupts lives, and poses significant risks to personal dignity and public safety. In Pakistan, the misuse of deepfake technology has been particularly devastating, with fabricated explicit and blasphemous videos targeting celebrities, influencers, and even religious figures. These malicious acts not only tarnish the reputation of individuals but also sow discord within communities and heighten societal tensions in an already sensitive cultural context.

Effectively addressing this menace requires a multifaceted approach. Strengthening Pakistan’s legal frameworks is essential to explicitly criminalize deepfake-related offenses and provide robust mechanisms for justice and deterrence. Concurrently, the development and deployment of advanced detection tools powered by AI can enable timely identification and removal of harmful content before it spreads. Education also plays a critical role; promoting digital literacy among citizens will equip them to discern authentic media from manipulated content. Victim support mechanisms must also be prioritized.

Global collaboration is paramount. By fostering international partnerships and sharing best practices, Pakistan can gain access to cutting-edge resources and strategies to combat this global issue. Technological innovation, combined with coordinated international efforts, has the potential to turn the tide against deepfake misuse, enabling countries to collectively safeguard their citizens from the harmful consequences of digital manipulation.

While the challenges posed by deepfake technology are significant, they are not insurmountable. Through proactive legal reforms, societal education, technological advancement, and unwavering support for victims, Pakistan can lead by example. By addressing deepfakes at their root, the country has an opportunity to preserve the integrity of its digital landscape and protect the fundamental rights and dignity of its people.

Ayaz Hussain Abbasi
The writer is a freelance columnist
