The family of 15-year-old British national Shamima Begum woke up one morning to find that she had run away to Syria along with two classmates from Bethnal Green Academy. Shamima was an avid user of X (then Twitter). She was exposed to radical content there, got acquainted with hardliners, and left London to join the terror group the Islamic State of Iraq and the Levant (ISIL). The rest is history.
Online extremism has long been a handy recruitment tool for terror enterprises. Terror mongers no longer need to knock on doors; cyber clicks do the trick instead.
To a great extent, the rise of ISIS can be ascribed to extremism perpetuated online. Abu Bakr al-Baghdadi sold the dream of a self-styled caliphate through Dabiq, the group's online magazine, and other targeted platforms, and gained immense traction. According to UN estimates, almost 40,000 foreigners from 110 countries joined ISIS ranks in Syria and Iraq during its heyday. Radicalized followers joined in hordes, conquered large swathes of territory, caught the world off guard, and wreaked maximum havoc before being beaten back militarily.
Digital spaces have also been exploited to shape public opinion. Another instance of drip-feed radicalization is QAnon. What began as a chatroom discussion flooded almost every social media platform with unfounded claims that Donald Trump was a crusader against a Satan-worshipping elite pedophile clique. All this happened in a developed country, and many Americans still believe the unbelievable conspiracy theories perpetuated by Q-drops. White supremacists, xenophobes, and hate groups routinely propagate their messages online in the garb of freedom of speech. In the same vein, games like the notorious Blue Whale challenge or PUBG have proved their potency to push users over the brink.
A cursory look at the online landscape reveals how vulnerable it is to exploitation by extremists. Social media platforms are dotted with ethnic, religious, political, and geographic fissures. The practical manifestations of online extremism regularly come back to haunt us. Locally, whether in the Jaranwala incident or the mob lynching of a Sri Lankan citizen in Sialkot, social media platforms were used to inflame sentiments.
In Pakistan, preventing and countering online extremism is a tough nut to crack for the authorities. Almost all tech giants sit outside the physical and legal jurisdiction of the country, leaving them virtually beyond the writ of the state. Our monitoring efforts are sporadic, uncoordinated, and ad hoc.
Online extremists are now eyeing Artificial Intelligence tools to wreak maximum havoc. As one of the most vulnerable countries, we need to act before this Frankenstein's monster starts luring our youth.
The already murky world of online extremism has taken an ugly turn since Artificial Intelligence entered the scene. Organic online hate networks have certain limitations: they cannot mass-produce near-authentic content, and state jurisdictional laws periodically throw a spanner in the works, reining in the originators, disseminators, or followers of extremist content. These limitations, however, are being swept aside by all-encompassing Artificial Intelligence. AI is a potent tool for swaying opinions covertly. It can generate false narratives at scale to contaminate consumers, and bots can spew convincingly human-like content, flooding comment sections and chatroom discussions with the desired chatter.
AI also helps craft echo chambers: algorithm-driven feeds that seal users inside a particular digital environment, serving content based on engagement patterns rather than recency or frequency. Tech-savvy extremists can game these algorithms to hunt for their audience. In the same vein, reams have been written about deepfakes; the recent audio and video leak sagas owe something to this high-end digital forgery.
Of late, at the macro level, our lead agency NACTA has been pushing meta-narratives to counter online extremism. Nevertheless, this already struggling approach will be further diluted by the mass of data churned out by Artificial Intelligence. Organic counter-narratives are likely to be buried deep under the mechanical garbage produced by generative AI bots.
Only the right mix of education, collaboration, and regulation can stem the looming rot.
Firstly, digital literacy and critical thinking are a must. Even sensitized guardians who want to keep tabs on their children are found wanting because they are unfamiliar with content-moderation tools. Only mindful users can steer clear of sham platforms where algorithmic manipulation is at play. Government and civil society both have parts to play here: awareness seminars, workshops, and online courses can educate the masses about making the right online choices.
Secondly, given Pakistan's disadvantaged position, negotiating with the social media giants on our own terms is an uphill task. Nonetheless, we can nudge them to collaborate for the greater good. Without international cooperation, reining in online extremism is a pipe dream. Tech giants can help governments by flagging, blocking, or filtering suspicious content. We need to galvanize them through proper advocacy, backed by indigenous case studies, on international fora.
Thirdly, proper regulation is also required to stem the extremist tide. We have no extremism-specific law in place. Instead, piecemeal special laws such as the outdated Telegraph Act, 1885, the Anti-Terrorism Act, 1997, or the one-size-fits-all PECA, 2016 are invoked to fill the gaps. Only a strategic approach to preventing and countering online extremism can mitigate the situation.