Laws of Armed Conflict and Autonomous Weapons Systems

Killer robots exist, but do not fear: they are bound by the laws of armed conflict. Humans may violate these rules, but autonomous drones will surely succeed where fallible humans have not, right?

3 min read

Killer Robots

Lethal autonomous weapons systems are colloquially called “killer robots.” This phrase undoubtedly conjures images of a post-apocalyptic past in which artificial intelligence, in the form of an android (Chip from Not Quite Human) or a cyborg (the Terminator from The Terminator), turns on its human creators and destroys 1980s suburban America. BS is here to calm your anxieties. I do not peddle fear. I rarely discuss dangers posed by the proliferation of new technologies that are inadequately regulated. The pressing concern regarding “killer robots” is whether the current laws of armed conflict are sufficient to regulate their use.

What Does “Autonomous” Mean? 

“An ‘autonomous system’ is a machine, whether hardware or software, that, once activated, performs some task or function on its own.”[6] One concern of opponents of autonomous weapons systems is that drones will decide which targets to kill without human intervention. In 2021, the Pentagon emphasized that humans will always control artificially intelligent weapons, a concession likely made to assuage the anxieties of human rights organizations about their use.[7]

Where Does the Technology Presently Stand? 

Some autonomous drones are programmed with artificial intelligence and can learn and improve their functionality over time. Eventually, artificially intelligent weapons will routinely enter arenas considered too dangerous for humans. Proponents of the technology note that these systems make decisions faster than humans and operate without fear.[1] Opponents worry that autonomous weapons may violate human rights law. The U.S., China, and Russia are racing to develop autonomous weapons while simultaneously participating in treaty discussions to limit their use; many other countries want them banned outright.[2] Despite uncertainty over the future of this technology, the current rules of war are sufficient to address some of the concerns posed by autonomous weapons.

Current International Frameworks: The Principles of Distinction and Minimization of Harm  

Individuals who believe new legal standards are necessary to regulate “killer robots” note that treaties have been written to supplement existing legal frameworks when other weapons posed humanitarian threats. Cluster munitions, antipersonnel mines, blinding lasers, chemical weapons, and biological weapons were all addressed outside of the Geneva Conventions.[3] Those weapons, however, are distinct from drones because they cannot distinguish among targets and are not designed for limited killing. Under Articles 48, 51(2), and 52(2) of Additional Protocol I to the Geneva Conventions, parties to a conflict must distinguish between civilians and combatants.[4] Autonomous drone technologies, however, are designed to target with precision. Under Article 35 of Protocol I, weapons must not be designed to cause “superfluous injury or unnecessary suffering,” nor should they “cause widespread, long-term and severe damage to the natural environment.”[5] Some argue that because drones are designed to strike specific targets, their use may minimize human suffering and further advance efforts to make warfare more humane.

Legal Accountability For Lawless Drone Activity 

Legal accountability for autonomous weapons that violate international law is addressed by the principle of command responsibility, which is found in Additional Protocol I and in the statute of the International Criminal Tribunal for the former Yugoslavia (ICTY). Under this principle, commanders are criminally responsible for failing to prevent or repress war crimes committed by their subordinates.[8] If an autonomous weapon were released and violated the rules of war, the unit responsible for deploying the weapon would be held accountable. The principle requires that the commander “have known” or “had reason to know” that the crime would take place. It can be argued that a military that decides to use this technology knows, or has reason to know, that the weapon’s behavior may violate the rules of war. Autonomous weapons make decisions in the field, but they must be deployed by someone who is ultimately responsible.

BS Conclusion 

The international community’s decision to work together to make war more humane is relatively recent. Whether these weapons further that objective is an ongoing debate. It is unlikely that the U.S., China, or Russia will ever be bound by a treaty that significantly limits the use of autonomous weapons. These weapons are an inevitable part of future warfare. Humans have not done a particularly good job of conducting wars humanely; perhaps it is time to let the machines have a shot.

BS


[1] David Martin, “New Generation of Drones Set to Revolutionize Warfare,” CBS News, January 8, 2017, http://www.cbsnews.com/news/60-minutes-autonomous-drones-set-to-revolutionize-military-technology/

[2] The Washington Post, July 7, 2021, https://www.washingtonpost.com/technology/2021/07/07/ai-weapons-us-military/

[3] Human Rights Watch and IHRC, “Advancing the Debate on Killer Robots: 12 Key Arguments for a Preemptive Ban on Fully Autonomous Weapons” (May 2014), p. 4

[4] ICRC, Customary IHL Database, Rule 1, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_cha_chapter1_rule1

[5] Additional Protocol I, Article 35, ICRC IHL Database, https://ihl-databases.icrc.org/ihl/WebART/470-750044?OpenDocument

[6] Paul Scharre and Michael C. Horowitz, “An Introduction to Autonomy in Weapon Systems,” CNAS (2015), p. 5

[7] The Washington Post, July 7, 2021, https://www.washingtonpost.com/technology/2021/07/07/ai-weapons-us-military/

[8] ICRC, Customary IHL Database, Rule 153, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_cha_chapter43_rule153