Author: Maaike Verbruggen
Dossier: ‘Are Arms controlling us, or can we take back control?’
Guest editor-in-chief: Hugo Klijn
Introduction
Artificial Intelligence (AI) is advancing and encroaching on all aspects of life, including warfare. AI can be used to detect patterns in the signals submarines collect, to spot early signs of equipment malfunction, and to drive convoys through combat zones without drivers. It can also be used by munitions to search for, select and engage targets without human supervision. Such weapons are called Lethal Autonomous Weapon Systems (LAWS), and they are a cause for concern for the diplomatic community.
What are LAWS?
Let’s first make something clear: this discussion is neither about machines replacing humans on the battlefield, nor about robots taking over control, Terminator-style. The issue is far vaguer, which also makes it harder to resolve.
At the heart of the matter is the increasing sophistication of target acquisition by weapon systems. Instead of humans determining exactly what, when and where a weapon system should hit, advances in technology allow a weapon system to do more of the search process itself. Munitions can have cameras or other sensors built in to search for targets that match a certain set of criteria. Examples include missiles that scan for ships whose specific three-dimensional silhouette marks them as a certain class; missiles with cameras that classify whether an object is a tank or a house; and missiles launched pre-emptively that strike as soon as they detect the radar signature of a mobile missile launcher that is only out in the open for a very short time. This can make weapons more precise, usable on battlefields where enemies might block the communication signals needed to order a strike, and faster to respond than humans. But it also produces weapon systems that can decide on their own who or what to target. This raises the question of how much autonomy weapons should have, and which decisions should remain under human control.
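To make this spectrum of autonomy concrete, the sketch below illustrates the difference between a system that merely recommends an engagement to a human operator and one that engages on its own. It is a purely illustrative toy example: the class names, target criteria and confidence threshold are invented and do not describe any real weapon system.

```python
# Purely illustrative sketch of autonomous target-selection logic.
# All names, criteria and thresholds are hypothetical; no real weapon
# system works this simply.
from dataclasses import dataclass


@dataclass
class Contact:
    label: str          # hypothetical classifier output, e.g. "tank" or "house"
    confidence: float   # classifier confidence between 0 and 1


def matches_criteria(contact: Contact) -> bool:
    """Check whether a sensed object matches the pre-set target criteria."""
    return contact.label in {"tank", "mobile_missile_launcher"} and contact.confidence > 0.9


def decide(contact: Contact, human_in_the_loop: bool) -> str:
    """Return an engagement decision, depending on the level of human oversight."""
    if not matches_criteria(contact):
        return "hold"
    if human_in_the_loop:
        # Semi-autonomous mode: the system selects, a human must approve.
        return "recommend engagement - await human approval"
    # Fully autonomous mode: the system selects and engages on its own.
    return "engage"


if __name__ == "__main__":
    print(decide(Contact("tank", 0.95), human_in_the_loop=True))
    print(decide(Contact("house", 0.99), human_in_the_loop=False))
```

Even in this toy form, the only difference between the two modes is a single flag, which is why the line between acceptable and problematic autonomy is so hard to draw in practice.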
This line is not easy to draw. One problem is that LAWS are not a clearly demarcated type of weapon system, nor is autonomy a static characteristic. Autonomy reflects the degree to which human action is needed to execute a specific function, and this can change depending on how a system is used. Many weapons have multiple modes, with varying levels of human oversight. There is no consensus on which uses are cause for concern. Is autonomous target selection and engagement less problematic in air defence than in torpedoes? Should we only be concerned about machines making the final decision to engage a target, or also about machines selecting a target and humans giving the all-clear? After all, humans often go on auto-pilot and uncritically trust machine recommendations, and this might be a gateway to more autonomy. And are we only talking about super-intelligent futuristic weapons, or also about the current generation of weapons under development, or even weapons already in use?
Problems with LAWS
The use of LAWS poses potential legal, ethical, security and safety challenges.
First, the use of weapons is bound by international law. An attack must cause as little harm to civilians as possible, and that harm must be proportional to the military gain; it must distinguish between combatants and civilians; and commanders must take all feasible precautions in attack. A weapon system fundamentally cannot assess whether an attack is legal. Weapon systems could in theory become capable of identifying targets more accurately than humans. In practice, however, distinguishing between civilian and military targets is highly contextual, making it less likely that weapons could make such judgements. Their use might thus be illegal, but opinions differ on whether that means certain types of weapons should be prohibited, or only specific uses of these weapons, depending on the context of the attack.
Second, the ethics are two-sided. On the one hand, governments must protect the soldiers they send into battle. If LAWS could reduce the risks those soldiers face, don’t governments have a moral obligation to protect those who risk their lives for their country? On the other hand, all humans have the right to dignity in death. Would it be undignified for machines to decide whether you live or die? And how would delegating decisions over life and death affect soldiers’ sense of moral responsibility? Could it lead to more cavalier attitudes? And would it be fair to hold soldiers accountable for the autonomous actions of their weapon systems?
Third, their use could increase the chances of war breaking out. Arguably the most important political cost of warfare for Western governments is the casualty count. If governments think LAWS could reduce the number of casualties, the threshold for going to war might be lowered. International tensions are already increasing, especially between the USA and China, and AI is one of the prime areas of competition. If relations deteriorate, countries might feel pressure to field these weapons as quickly as possible, even though many technical problems remain.
Fourth, these weapons pose serious safety problems. AI has advanced, but is frankly speaking still quite stupid. AI does not understand context or nuance, and it makes many mistakes, especially when the real world does not look like the data it was trained on. Systems can also interact in unexpected ways, and because modern weapons are tightly connected to other weapon systems, an accident could have serious consequences.
Governance of LAWS
It’s clear that LAWS raise serious questions, and they have been the subject of diplomatic discussion since 2013. In 2013 the issue was raised in the Human Rights Council, and since 2014 it has been discussed in the framework of the UN Convention on Certain Conventional Weapons (CCW). The CCW is an arms control agreement that bans or restricts weapons deemed to be excessively injurious or to have indiscriminate effects. It is an umbrella agreement with a main body and the ability to add protocols on specific weapons, such as non-detectable fragments and blinding laser weapons. The current talks aim at increasing understanding between countries on what exactly the problems are, and have not (yet) taken the form of formal negotiations on a possible outcome.
There are four options on the table. The first is a ban on LAWS. The difficulty here is how to define LAWS, as set out above. A second option is a positive obligation to maintain “Meaningful Human Control”. However, there is no consensus either on what makes human control meaningful. Would it require humans to make every decision and remain in full control over weapons at all times? We do not require this for many existing weapons. What is prudent? A third option is to strengthen the enforcement of existing law, especially Article 36 reviews. These are legal reviews of new weapons, means and methods of warfare, required by Additional Protocol I to the Geneva Conventions. Article 36 reviews could help ensure that no weapons are fielded that are incapable of complying with the principles of distinction, precaution and proportionality. However, only approximately 20 countries are known to have procedures in place to conduct these reviews. A fourth option is to focus on soft law measures, such as codes of conduct or a political declaration. These would be non-binding, but could still have a strong normative effect.
It’s not easy to find a solution. AI is considered a highly strategic technology, so countries do not want to ban it. Many countries have invested in defence research on AI, and some already possess weapons that could be classified as LAWS, depending on the exact definition. There is a risk of cheating, since there is not yet a good solution for the verification of software. Moreover, the future is uncertain. Should we wait and see how AI develops, so that we know more about its qualities, or will the genie then have escaped the bottle? Many countries are hesitant about LAWS, but also do not want to exclude the possibility of using them in the future if the technology turns out to be critical. Additionally, many countries have not yet made up their own minds and lack consensus among their domestic institutions, which all have different interests to juggle, such as interoperability within NATO, the well-being of soldiers, and upholding the rule of law. A final issue is practical: the UN CCW is in dire financial straits and there is limited money for meetings. It also operates by consensus, which makes progress slow.
Conclusion
The problem of LAWS is a nasty one. There are clear strategic benefits, but also serious humanitarian and strategic risks. The technology is advancing so rapidly, and the scope is so amorphous, that regulation is difficult. However, that does not mean the problem is insurmountable. There is a serious desire among all states parties to prove the ongoing relevance of the UN CCW, and a global awareness that AI has the potential to change warfare in a way the international community must address. Time to move forward.
Want to read more?
‘Framing Discussions on the Weaponization of Increasingly Autonomous Technologies’. UNIDIR Resources. Geneva: United Nations Institute for Disarmament Research, 2014.
‘Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions’. Versoix, Switzerland: International Committee of the Red Cross, 2016.
Boulanin, Vincent, and Maaike Verbruggen. ‘Mapping the Development of Autonomy in Weapon Systems’. Stockholm: Stockholm International Peace Research Institute, November 2017.
Maaike Verbruggen
- Doctoral Researcher at the Vrije Universiteit Brussel
- Formerly Research Assistant at the Stockholm International Peace Research Institute
- Interested in the intersection of arms control, emerging technologies and military innovation
- Twitter: @M__Verbruggen