“When I came to you with those calculations, we thought we might start a chain reaction that would destroy the entire world. I believe we did.” The quote is from the multiple-Oscar-winning film Oppenheimer, in which the lead character, Dr. J. Robert Oppenheimer, is speaking to Einstein about the invention of the nuclear bomb. Although the quote is about nuclear weapons, some might say it describes AI-powered lethal autonomous weapons systems (LAWS) even more accurately in modern times.
According to DoD Directive 3000.09, an autonomous weapon system is defined as “A weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.”
The definition is accurate, but it can also be read as a broad interpretation of autonomous weapons systems (AWS). Taken literally, it would make pit traps the first AWS and classify land mines as autonomous weapons as well; these are not the systems that come to mind when people think of AWS. In modern usage, the term mostly describes AI-controlled unmanned weapon systems, and rapid digitalization has made these modern autonomous weapons possible. Since the inception of warfare, humanity has always strived to stay one step ahead of its enemy, whether by adopting new projectile weapons or by using fire to startle an opposing force. Centuries of this cycle have brought us to the present day, where it is no longer hypothetical to fear a horde of drones acquiring their targets and taking the shot with no human operator.
AWS offer many benefits, such as precision, flexibility in combat, the prevention of loss of life, and cost-effectiveness. Another aspect of AWS is their uncomplicated nature compared with human-operated weapons: an AI would, in theory, never defy an order unless programmed to do so. If we explained the current autonomy of weapons to a general from the 16th century, he would probably think humans had built “the perfect soldier.” The disadvantages of AWS include unintended consequences, proliferation to non-state actors, and, most importantly, the ethical implications of placing human lives in the hands of a machine. LAWS could also have long-term effects on the strategic cultures of states; we would be looking at a world in which state relations are shaped by AI.
There is an ongoing debate at the international level over whether the use of autonomous weapons systems should continue. If it continues, what level of human oversight is absolutely necessary? If it is discontinued, what would the implications be? The discussion essentially comes down to the concept of “keeping a human in the loop.” There is as yet no consensus on the status of these weapons or their use in modern warfare. We see the technology being used today in varying capacities, whether it is Ukrainian forces intercepting Russian communications, Turkish forces allegedly launching a fully autonomous drone attack in Libya in March 2020, or the active use of loitering munitions in the Russia-Ukraine and Israel-Palestine conflicts. In this debate, some states favour keeping a human in the loop; others, not so much. The advancement of AI-based LAWS has triggered an arms race once again, because the utility is simply too great for states to ignore. We therefore do not see major powers openly issue a directive against these weapons, even though the UN Secretary-General has tried to find common ground between them. A game of chicken is under way between states, one that will supposedly determine the victor of this race.
The future of LAWS is now at a more critical point than ever. The powers that be need to set their priorities: either live in a world where a simple programming error could lead to a world war, and where sovereignty, morality, ethics and the international landscape are shaped by decisions made by machines, or take calculated action while the fallout of this technology can still be controlled. There is a serious need for the international community to come to an agreement through binding treaties that balance innovation with human oversight. Actionable frameworks need to be put in place to ensure that the future of policymaking is not algorithm-driven, lest we wake up to find negotiations being conducted by robots in navy blue suits and bright red ties. Only through foresighted dialogue can states reach a fruitful and acceptable arrangement, for the greatest danger of LAWS, their inflexible decision-making, is also the very quality promoted as their benefit.