Mistakes, Machines and Misinformation: Intelligence and the scope of the precautionary principle of IHL
Emma Marchant
In 2008 US personnel in Iraq arrested a Shia militant who, they discovered, had in his possession a laptop containing surveillance footage from one of their own Predator unmanned aerial vehicles (UAVs, or colloquially, drones). Over the following months more insurgents’ laptops were recovered, all containing recorded footage from these Predators. It transpired that a $26 (£20) piece of Russian software called ‘Skygrabber’ had been used to eavesdrop on the satellite downlinks. Whether this was an intentional campaign by Iran, or simply an Iraqi searching for his team’s football match, remains a mystery. In either case, the interception was only possible because the data was not encrypted when it was transmitted from the Predator.[1]
Whilst it does not appear that there were any casualties of this security breach, other than, I imagine, the pride of the relevant members of the US military, it does show the ease with which a systemic weakness was exploited without apparently raising any alarms. From the perspective of International Humanitarian Law (IHL) this raises challenges for the application of distinction, feasible precautions, and what I term the ‘intelligence standard’ of IHL.[2] The primary concern is that if systems such as the Predator’s should be interrupted or modified without operator awareness, or if that awareness is withheld from decision-makers, then the verification of targets in accordance with IHL may be jeopardised.
There is a long history of the use of deception and misinformation during warfare. World War II offers numerous examples: the fabrication of military orders by the British SOE, the Soviet Union’s extensive use of disinformation and false trenches in preparing its counter-offensive at Stalingrad, and Germany’s use of wooden structures to imitate tanks. Indeed, history is littered with such examples, not least the infamous wooden horse that gave the Greeks access to Troy and now lends its name to a class of cyber threats. The concept of cyber-warfare has already generated a growing volume of scholarship,[3] and my research does not focus on this as such, but aims to address the issues that may arise for the application of IHL from the ways technology could be used to provide misleading or false information.
I argue that the development of military technology has strengthened the link between the IHL prohibitions of perfidy and indiscriminate attack, and thus impinges upon the precautionary principle. The ability of adversaries to infiltrate the very systems which provide information to decision-makers increases the possibility of misidentifying targets. Furthermore, as technology advances and information is increasingly analysed by software and AI, it will become exponentially more difficult to ascertain the validity of the information relied upon. I suggest that reliance on these systems where there is a suspicion of infiltration may result in attacks that do not comply with the principle of distinction under IHL. Further, such attacks could readily be seen as unlawful for a failure to take all feasible precautions in the conduct of the attack, and could potentially be viewed as indiscriminate.
The purpose of IHL is to limit the conduct of warfare and to protect those not taking part in hostilities. It admits of military necessity but provides protections for civilians caught up in conflict, aiming to minimise unnecessary suffering. The main provisions of IHL concerning deception and misinformation are the prohibition of perfidy and the permission of ruses of war. The substantial issue is that the delineation between the two, which on paper may appear clear, struggles when faced with the realities of modern warfare. The principal difference is that conduct amounts to perfidy when it invites the confidence of the adversary, such that they act in the belief that IHL protections apply. An example would be feigning surrender to lure an enemy closer, and then exploiting this to attack. A combatant who is hors de combat is protected by IHL, and thus relying on this protection to mount a surprise attack would be considered perfidy.
By contrast, during the Gulf War, when Iraqi tanks approached with turrets pointed aft and rotated them only as a precursor to attack, this was considered a legitimate ruse. The reversal of turrets is not a recognised indication of surrender and was therefore not considered perfidious. Most significantly from a technological and intelligence-based standpoint, ruses of war can include “… misleading electronic, optical, acoustic or other means to implant illusory images in the mind of the enemy. It is permissible to alter data in the enemy’s computer databases …”[4] However, these electronic methods and means of altering data and communications must not cause the enemy to attack civilians or civilian objects under the mistaken belief that the target is lawful.[5]
My suggestion is that, given the developing nature of technology, we should look beyond perfidy and ruses of war to determine the legal principles governing the use of deception and misinformation during armed conflict. The precautionary principle of IHL requires that all feasible precautions are taken in the methods and means of attack, and thus obligates parties to minimise civilian casualties. I contend that it is through this principle that we can establish the intelligence standard which IHL requires for target verification.[6] Therefore, when deception and misinformation are found within the loop of knowledge and decision-making, the law should provide safeguards in order to minimise casualties. Militaries clearly have no desire for their intelligence systems to be infiltrated, and to a certain extent this may be self-regulating. However, I argue that these systems are also covered by the precautionary principle, to the extent that states must have done everything feasible to secure their information networks. As Clausewitz observed: “Many intelligence reports in war are contradictory; even more are false, and most are uncertain…”[7] Today, with the advances of technology, reports may not only be false or uncertain; they may have been manipulated or compromised by hostile forces in unexpected and novel ways. States need to do everything feasible to direct their attacks at military objectives and minimise civilian casualties; to do so they need to be able to rely on their technology, and these systems therefore need to be protected.
[1] Siobhan Gorman, Yochi J Dreazen and August Cole, ‘Insurgents Hack US Drones – $26 Software is used to Breach Key Weapons in Iraq; Iranian Backing Suspected’ Wall Street Journal (17 December 2009)
[2] Emma J Marchant, ‘Insufficient Knowledge in Kunduz: The Precautionary Principle and International Humanitarian Law’ (2020) 25:1 JCSL 53
[3] See for example, Michael N Schmitt (ed), Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (CUP 2017)
[4] Yoram Dinstein, The Conduct of Hostilities under the Law of International Armed Conflict (2nd edn, CUP 2010) 240
[5] Jean-François Quéguiner, ‘Precautions under the law governing the conduct of hostilities’ (2006) 88(864) IRRC 793, 799
[6] Emma J Marchant, ‘Know your Enemy: Implications of Technology for Intelligence Standards in Targeting under International Humanitarian Law’ (PhD Thesis, University of Birmingham, 2020)
[7] Carl von Clausewitz, On War (Michael Howard and Peter Paret eds and trs, Princeton University Press 1976) bk 1, ch 6