Sara Snakenbroek

Food for thought: Lethal autonomous weapons systems (LAWS) and international humanitarian law


Humans have constantly invented new technologies to make the waging of war more efficient. [1] Technological progress has increased both the physical distance between combatants and the scale of the damage inflicted, ultimately leading to the deployment of armed drones on the battlefield. [2] Scientific research now aims to further military advancement by creating lethal autonomous weapons systems (LAWS). Peter Asaro, an expert in the field of artificially intelligent devices, defines LAWS as:

‘any system that is capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision-making’. [3]


This unprecedented category of weapons takes humans out of the equation, raising questions about the legality of such weapons under international humanitarian law.



Photo by Charl Durand on Pexels

There are disagreements not only over the advantages of such artificially intelligent devices but also over whether their development is unavoidable. American roboticist and robo-ethicist Ronald Arkin argues that the development of LAWS is inevitable, even in the case of an international legal ban, which would remain challenging to enforce. [4] Other academics, such as law professors Kenneth Anderson and Matthew Waxman, refute this claim entirely, emphasising its lack of evidence. [5]


Others have highlighted the technical, political, ethical and legal concerns of taking humans out of the loop in lethal decision-making. Paul Scharre, a specialist in autonomous weapons technology and a director at the Center for a New American Security, points to the difficulty LAWS would have in apprehending unforeseen conditions such as the surrounding presence of civilians, cyberattacks or technological malfunctions. [6] Contextual adaptation and common sense are human aptitudes that remain impossible to program artificially.


Likewise, Asaro describes the danger emanating from LAWS: by reducing the costs of war, notably in human lives, they would lower the decisional threshold for going to war. [7] Additionally, he stresses the international political instability LAWS could bring by escalating conflicts through actions taken outside of human control. [8] Human Rights Watch, an international NGO, advocates a ban on LAWS due to their inability to meet international legal standards. [9] While the international community is considering acting on a ban, several States, such as the US, the UK, Russia, China and Israel, strongly oppose any prohibition of LAWS because of their military benefits. [10]


Arkin lists these advantages as:


‘a reduction in friendly casualties; force multiplication; expanding the battlespace; extending the war fighter’s reach; the ability to respond faster given the pressure of an ever-increasing battlefield tempo; and greater precision due to persistent stare’. [11]


He further stresses LAWS’ capacity for self-sacrifice, which favours a ‘first-do-no-harm’ policy over a ‘shoot first, ask questions later’ strategy. [12] LAWS would be able to avoid human errors caused by emotions such as stress and fear, rendering warfare more ethical. [13] Arkin also highlights the technological superiority of LAWS in terms of robotic sensors (‘electro-optics, synthetic aperture or wall penetrating radars, acoustics, and seismic sensing’). [14] After enumerating these military benefits, Arkin nevertheless recognises that autonomous weapons systems and human soldiers should work together to enhance ethical conduct, rather than leaving lethal decision-making entirely outside human supervision. [15]


Having established the international community’s divisions over whether LAWS should be prohibited, the existing international humanitarian legal framework will now be analysed to assess the legality of LAWS.

Article 36 of Additional Protocol 1 to the Geneva Conventions [16] regulates the legality of the development and deployment of new weapons. It stipulates that, in order to be lawful, a new category of weapons must be reviewed and found not to be prohibited by Protocol 1 or any other rule of international law. [17] According to Anderson and Waxman, two rules must be considered when reviewing new weapons under Article 36:


  1. the rule against inherently indiscriminate weapons, [18] and

  2. the rule against weapons causing unnecessary suffering or superfluous injury. [19]


A weapon is deemed indiscriminate if it cannot specifically target combatants and is likely to hit civilians. [20] As regards LAWS’ autonomous character, if the autonomous system is programmed with exact and reliable information on the target, the weapon cannot be considered indiscriminate and may even be considered more discriminating than traditional weaponry. [21] The second rule applies where a weapon causes unnecessary complications in treating the injuries it inflicts. [22] LAWS would likely comply with this rule, as their autonomous programming is precisely designed not to cause unnecessary suffering or superfluous injury. It appears, then, that LAWS could be deemed lawful under Article 36, provided they are accurately programmed.


The Martens Clause, which first appeared in the preamble of the Hague Convention, [23] closely relates to Article 36. It protects both civilians and combatants through the principles of humanity and the dictates of public conscience in circumstances not covered by Protocol 1. [24] Whereas some States tend to interpret this clause as merely restating customary international law, a broader interpretation is favoured by human rights organisations supporting the prohibition of LAWS. [25] Moderate interpretations read the Martens Clause as an interpretative tool which does not in itself hold the authority to prohibit weapons of a certain kind. [26] However, as Arkin states, whether LAWS violate the Martens Clause will be difficult to prove until lawyers agree on its meaning and scope. [27]


Three main principles of international humanitarian law will now be analysed in relation to the use of LAWS. Article 57 of Protocol 1 states that those planning an attack must ‘take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental civilian casualties and damages’. [28] Jakob Kellenberger, former president of the International Committee of the Red Cross, argues that LAWS could achieve greater precaution before engaging in an attack due to their observation capacities and ability to choose the right moment to strike, thereby minimising civilian casualties. [29]


By contrast, Robin Geiss, Professor of international law and security, affirms that real precaution would be to deploy LAWS only on battlefields free of civilians. [30] Article 57 also states that ‘those who decide upon an attack shall refrain from deciding to launch any attack which may be expected to cause incidental loss of civilian life’. [31] LAWS’ capacity to change course depends strictly on the sophistication of their technology. It is therefore highly uncertain whether LAWS would comply with the principle of precaution.


The principle of proportionality, laid out in several articles of Protocol 1, states that casualties, injury or damage must not be ‘excessive in relation to the concrete and direct military advantage anticipated’. [32] Arkin believes in LAWS’ compliance with the principle of proportionality due to their self-sacrificing capacity: they would engage in striking action only when necessary and once the risks have been calculated, making them more reliable than humans. [33] Geiss agrees that, assuming appropriate software development, LAWS could comply with this principle to even higher standards, since they could be programmed to incapacitate targets rather than use lethal force where the presence of civilians is reported on the battlefield or in similar situations. [34]


However, numerous scholars disagree. Anderson and Waxman stress the difficulty of programming the notion of ‘excessive’, which entails attaching numeric values to objects and human beings. [35] Petman likewise demonstrates the difficulty of programming LAWS to apprehend all possible circumstances and to react to unanticipated situations. [36] While Arkin, an expert in robotics, believes human judgement can be coded into LAWS, law professors Anderson, Waxman and Petman deny that LAWS have the capacity to comply with these legal principles. This dispute matters because it raises the question of whether the opinions of robotics specialists or those of legal experts should weigh more heavily in the debate on LAWS.


Finally, under the principle of distinction, LAWS must have the capacity to distinguish between combatants and civilians, [37] the latter never being a lawful object of attack. [38] Asaro clarifies that individuals are classified as combatants when their actions are likely to harmfully affect military operations. [39] However, he questions whether LAWS could make such a distinction in the absence of human judgement, as well as whether they morally ought to. [40] Michael Schmitt, a legal scholar specialising in international humanitarian law, underlines the difficulty of distinguishing civilians from combatants where a target appears civilian in nature but serves a military purpose. [41]


Such issues arise, for instance, with extremist armed groups whose fighters do not necessarily wear military uniforms, in an attempt to blend into the civilian population. [42] While some AI researchers, such as Andrew Zisserman, argue that LAWS could achieve lower error rates (2.3%) than the human average (5.1%) in visual object and target identification, [43] the moral question of whether lethal decision-making should be left in the hands of LAWS persists.


Both Dr Sara Kendall, professor of international law, and Robert Sparrow, professor of philosophy and ethics, raise further issues of accountability [44] and legal responsibility [45] for these types of errors. Concerns have been raised that it would be unfair to hold human soldiers accountable for unpredictable robotic actions. [46] As with the principle of proportionality, whether LAWS comply with the principle of distinction remains uncertain.


Thoughts


One may suggest that LAWS operations leading to lethal decisions must be supervised by trained professionals. Arkin’s view of machines and humans working together to avoid both human and robotic errors [47] appears sensible in a political context where an enforceable international ban remains doubtful. Consequently, the international community must attempt to form specific regulations on the use of LAWS while maintaining at least a minimum of human supervision.


Dr Sara Kendall rightly warns of the risk of LAWS exceeding their own creators’ law, [48] reflecting the fact that international humanitarian law was designed to apply to humans [49] and not to artificially intelligent robots; only an adapted legal framework can tackle LAWS’ ethical and legal implications.


Geiss’ argument supports such a conclusion: the principles regulating the use of LAWS under existing international humanitarian law remain vague and abstract. A new internationally legally binding instrument on the ethical use of LAWS (stricter limitations, reversibility of the autonomous mode, human supervision, recording of actions) is therefore indispensable to avoid their uncontrollable deployment.





Endnotes


[1] Ronald Arkin, ‘Lethal Autonomous Systems and the Plight of the Non-Combatant’ AISB Quarterly No. 137 (Georgia Institute of Technology, 2013) 1.

[2] Markus Wagner, ‘Taking Humans Out of the Loop: Implications for International Humanitarian Law’ Journal of Law, Information and Science (2011) 1.

[3] Peter Asaro, ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making’ International Review of the Red Cross Vol. 94 No. 886 (2012) 690.

[4] Arkin (n1) 6.

[5] Kenneth Anderson and Matthew C. Waxman, ‘Law and Ethics for Robot Soldiers’ Policy Review 28 (2012) 13.

[6] Paul Scharre, ‘Why Unmanned’ Joint Force Quarterly Issue 61 2nd Quarter (2011) 92.

[7] Asaro (n3) 692.

[8] Ibid.

[9] Jarna M. Petman, ‘Autonomous Weapons Systems and International Humanitarian Law: Out of the Loop?’ The Eric Castren Institute of International Law and Human Rights Research Reports 24 (2017) 24.

[10] Alexandra Brzozowski, 'No Progress in UN Talks on Regulating Lethal Autonomous Weapons' (www.euractiv.com, 2019) accessed 26 February 2020.

[11] Arkin (n1) 1.

[12] Ibid 3.

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Ibid 6.

[17] Ibid.

[18] Ibid; Art. 51(4).

[19] Ibid; Art. 35(2).

[20] Scharre (n6) 10.

[21] Ibid 11.

[22] Ibid.

[23] Hague Convention II with Respect to the Laws and Customs of War on Land (1899).

[24] Arkin (n1) 6; Art. 1(2).

[25] Robert Sparrow, ‘Ethics as a Source of Law: The Martens Clause and Autonomous Weapons’ (Humanitarian Law & Policy Blog, 2017).

[26] Ibid.

[27] Arkin (n1) 7.

[28] Ibid; Art. 57(2)(a)(ii).

[29] Jakob Kellenberger, ‘Keynote Address’, International Humanitarian Law and New Weapon Technologies, 34th Round Table on Current Issues of International Humanitarian Law (2011) 6.

[30] Robin Geiss, ‘The International-Law Dimension of Autonomous Weapons Systems’, Friedrich-Ebert-Stiftung International Policy Analysis (2015) 16.

[31] Arkin (n1) 6; Art. 57(2)(a)(iii).

[32] Ibid; Art. 51(5)(b), and Art. 57(2)(a)(iii).

[33] Geiss (n30) 15.

[34] Ibid 17.

[35] Scharre (n6) 13.

[36] Brzozowski (n10) 37.

[37] Arkin (n1) 6; Art. 48.

[38] Ibid; Art. 51(2).

[39] Asaro (n3) 697.

[40] Ibid 699.

[41] Michael N. Schmitt, ‘Military Necessity and Humanity in International Humanitarian Law: Preserving the Delicate Balance’ 50 Virginia Journal of International Law 795 (2010) 5.

[42] All Party Parliamentary Group (APPG) on Drones Inquiry Report, The UK’s Use of Armed Drones: Working with Partners (2018) 38.

[43] Andrew Zisserman, ‘Self-Supervised Learning’, French Institute for Research in Computer Science and Automation (2018) 3.

[44] Sara Kendall, ‘Law’s Ends: On Algorithmic Warfare and Humanitarian Violence’ in Max Liljefors, Gregor Noll and Daniel Steuer (eds) War and Algorithm (Rowman and Littlefield, 2019) 4.

[45] Robert Sparrow, ‘Killer Robots’, Journal of Applied Philosophy, Vol. 24, No. 1. (2007) 62–77.

[46] Ibid.

[47] Arkin (n1) 3.

[48] Sparrow (n45) 3.
