Optimizing Pakistan’s AI governance

No time to waste

In an era marked by the boundless influence of AI, Pakistan should not only embrace the technology but also draw lessons from nations with well-established AI regulations, strengthening its own regulatory framework to ensure an ethically sound and prosperous AI future. Extracting, storing, and exchanging vast varieties of data through automated systems, raising productivity through machine learning and natural language processing, and generating write-ups, speeches, and images through AI chatbots: this is what AI already offers us today. From defence to education, every global industry is being reshaped by the technological revolution of Artificial Intelligence (AI), often termed the Fourth Industrial Revolution. Pakistan is considered to have substantial employable human capital, with roughly 64% of its population below the age of 30. Such a large youth population, however, also increases the government's responsibility to upskill young people for changing job requirements. In May 2023, after years of modest effort, Pakistan's Ministry of IT and Telecom introduced a draft of the country's first-ever National AI Policy under the Digital Pakistan Vision of 2019. The policy lays out a roadmap for creating an enabling, responsible, and well-regulated environment for the rapid adoption of Artificial Intelligence across the country, so as to harness its economic and societal benefits.

Like any rapid technological change, however, AI comes with potential downsides. Advanced computer systems powered by Artificial Intelligence are generally believed to be capable of carrying out activities that typically require human intelligence, such as understanding and translating speech, recognizing visuals, and making decisions. In May 2023, hundreds of prominent technology leaders and AI researchers issued a collective warning that AI could pose an existential threat to humanity, urging that mitigating the risk of extinction from AI be treated as a global priority. Such warnings may sound hypothetical, but they cannot be ignored: the Bulletin of the Atomic Scientists' metaphorical Doomsday Clock now stands closer to global catastrophe than ever before. According to the Bulletin's 2023 Doomsday Clock statement, the significant risks posed by Artificial Intelligence and other disruptive technologies stand alongside the threats of nuclear war and climate change. The likelihood of AI amplifying the spread of misinformation, displacing jobs, entrenching discrimination and bias, and enabling the misuse of personal data has driven governments around the world to regulate it.

Several jurisdictions, such as the EU, Singapore, and the United Kingdom, have developed comprehensive frameworks for regulating AI. Others without dedicated regulatory legislation, including China, the United States, Japan, Australia, and Brazil, have published guidelines and policy papers signalling their future moves toward AI regulation, and Pakistan is no exception. One of the key pillars of Pakistan's National AI Policy is to create an enabling and trusted environment through a hybrid intelligence ecosystem that values human ingenuity alongside AI's creative capabilities. At the heart of the policy lies the establishment, by 2024, of an AI Regulatory Directorate (ARD), a body directed towards harmonizing AI advancements with human values and societal needs. The ARD's key responsibilities include monitoring AI developments that undermine personal data protection, regulating AI-based innovations, formulating data-sharing frameworks, and ensuring that AI algorithms align with cultural, societal, religious, and international norms and guidelines. Of particular significance, this regulatory body will facilitate Non-Disclosure Agreements (NDAs) for data collection and application access, safeguarding personal data and ensuring digital privacy.

Generative AI tools that produce human-like images, text, and speech can amplify the spread of disinformation and fake news and enable breaches of personal data privacy. The ARD is poised to counter these threats: first, by issuing regulatory guidelines for the ethical use of generative AI; second, by promoting indigenous research in collaboration with platforms such as OpenAI; and third, by working with the Higher Education Commission to curb the unethical use of generative AI in academia. The policy also envisages a regulatory sandbox in which innovative AI technologies can be trialled under controlled conditions in real-world scenarios with actual users. Sandboxing will not only help regulators identify the risks of an AI technology before it enters the market but will also guide the development of comprehensive guidelines and adaptable legislation to govern such systems. The ARD further seeks synergistic engagement with stakeholders such as technocrats, industry experts, and think tanks to strategize AI adoption and regulation.

As Pakistan’s National AI Policy embraces international collaboration to adopt innovative AI technologies from developed countries, it should also adopt the best regulatory practices of those nations. The EU’s AI Act, which complements the General Data Protection Regulation (GDPR), is likely to become a benchmark for AI legislation, with other states expected to follow suit. The Act requires organizations to assign a risk category to every AI system based on its potential harm to consumers; prior to deployment, innovators must conduct risk assessments and cost-benefit analyses of their systems. Systems designated as posing unacceptable risk, such as facial recognition in public places and social scoring, are prohibited within the EU because they menace fundamental human rights, while those classified as high-risk, such as tools used in recruitment and the health sector, are subject to registration and compliance evaluation before implementation. Singapore was among the forerunners, establishing its Advisory Council on the Ethical Use of AI and Data in 2018 and publishing a National Artificial Intelligence Strategy in 2019; its guidelines require organizations to disclose their AI methodologies to users and bar the use of personal data for creating avatars or personas. China offers one of the most assertive regulatory frameworks, the Administration of Deep Synthesis of Internet-based Information Services, which focuses on penalizing deceptive synthetically generated content and deepfakes, whether images or videos, and extends to AI chatbots such as OpenAI’s ChatGPT. These regulations also require AI-generated content to be clearly labeled to avoid deception.

Amid rapid AI advancements, Pakistan’s National AI Policy and the planned AI Regulatory Directorate offer a proactive approach to benefiting from transformative AI technologies while mitigating their potential harms. Ironically, however, the proposed directorate is to function under the National Commission for Personal Data Protection (NCPDP), which itself is yet to be established because its parent legislation, the Personal Data Protection Act (PDPA), is still awaiting approval in Parliament. This raises serious doubts about whether the timelines proposed in the policy document are achievable. Pakistan must accelerate its efforts to regulate rapidly evolving AI tools so as to harness their benefits while protecting the fundamental rights of its citizens. Last but not least, it must keep its doors open to consultation and collaboration with jurisdictions that already have sound legal frameworks for AI regulation, such as the EU, China, and Singapore.

Malik Ahmed Rafay
The writer is a freelance columnist
