Lethal Autonomous Weapon Systems: Technology and Ethical Questions

This paper presents a comprehensive analysis of Lethal Autonomous Weapon Systems (LAWS), examining their moral considerations, legal frameworks, and strategic implications. Drawing on interdisciplinary research and scholarly discourse, it explores arguments put forth by renowned scientists. It covers the advantages that LAWS and Artificial Intelligence bring to military development, including the potential reduction in human casualties. It also highlights the ethical concerns surrounding LAWS, focusing on the limitations of machine judgment, the potential for unintended consequences, and the importance of human control and accountability.

The world today faces challenges and changes that define the dawn of a new era, in which concepts such as co-existential interdependence and multilateral globalisation can no longer act as prime guarantors of peace, stability, and development. With the rising power and influence of global superpowers, shifts in the balance of power, and ever-increasing geopolitical skirmishes, paired with the onslaught of global issues such as climate change and border conflicts, the free and open international order now finds itself at risk. International relations today are characterised by the overlapping dynamics of cooperation and head-to-head confrontation. One dimension that has persisted throughout human history is conflict, security, and warfare. Nations have always acted with respect to their national interest, ensuring the welfare and security of their citizens. As the modern world rapidly advances technologically, the need for advanced security and military capabilities rises with it. Countries now commit a significant part of their GDP to the research and development of weaponry and security measures. As nations have advanced, the world has seen a rise in the presence of Artificial Intelligence in technology. Militaries across the world have committed to weaponizing this artificial intelligence, incorporating it into a myriad of machinery at the hands of roboticists and engineers. This has led to the birth of Lethal Autonomous Weapon Systems.


Lethal Autonomous Weapon Systems: What are they?

In an age where technological advancement has normalised automated labour and artificial intelligence, nations across the world have tried to harness this power to achieve their national interests and preserve their security and sovereignty. In that endeavour, research and development aimed at bolstering military capabilities has led highly advanced nations to integrate artificial intelligence into their defence frameworks, leading to the genesis of Lethal Autonomous Weapon Systems (LAWS). Also known as “killer robots”, LAWS are highly advanced weapons that use sensors and computer algorithms to identify targets and, once a target is identified, deploy the on-board weapon system to destroy it, without the system being controlled remotely by a human. Once activated, these systems can identify and engage targets without any further intervention by a human operator. They stand in stark contrast to systems under human supervision, also known as “human on the loop” weapon systems, in which the operator can monitor the weapon and stop it in its tracks. With LAWS, human decision making is limited to the preliminary stages, namely the construction and programming of the machine and its deployment. During missions, the human has no control over the machinery, its direction, or its targets once deployed, beyond the general ability to command the termination of the device (Congressional Research Services, 2023). The person activating the weapon does not know who or what the weapon will engage, nor the location or time of the strike.
The autonomous weapon system uses its highly developed in-built software to identify targets through accurate sensors; upon matching a target with the target profile within a specific environment, it activates the weapon system (International Committee of the Red Cross, 2022).


Arguments Supporting the LAWS

A number of arguments and debates seek to portray the efficiency and advantages of using LAWS. The dimension that features most prominently in any debate or discussion is the military benefit they provide. They can carry out military actions swiftly, precisely, and accurately. Autonomous systems can rapidly detect and engage targets thanks to sophisticated sensor technology and real-time data analysis, potentially leading to greater operational efficiency and mission success. They make decisions far more rapidly than human beings. Since humans are constrained by physical and cognitive limitations, LAWS, being free of such limitations, provide a more resilient and accurate alternative to human operators. Moreover, because the weapon systems have their commands coded into them and operate through built-in computer algorithms, they do not, unlike humans, require constant communication with their operators to function. This also greatly increases their viability in remote areas where communication may be hampered by factors such as long distances or loss of networking capabilities. Through this ability to conduct missions with minimal communication, swift decision-making, and robust firepower, LAWS are far simpler to use in long-distance missions or covert operations such as penetrating enemy territory, at the expense of fewer casualties, while removing the need for communication networks or supply lines (roboticsBiz, 2022).

Advanced sensors and AI algorithms in autonomous systems can improve situational awareness. They can gather and analyse massive volumes of data from many sources, providing a more complete view of the battlefield, prospective threats, and shifting dynamics. LAWS can work indefinitely without being affected by human fatigue. They can operate over lengthy periods of time, potentially offering continuous surveillance, intelligence gathering, and defence capabilities without stopping to rest, resulting in a higher degree of operational endurance. Furthermore, autonomous weapon systems act as significant force multipliers. They possess the potential to strike at a speed quicker than is achievable by a human, while carrying a lethal weapon capable of a fatal strike even when communication links have been severed. Because of their employment, fewer human combatants are expended on missions, and the few who are deployed operate with heightened efficacy. Since fewer humans are deployed, having been replaced by autonomous weapon systems, casualties are also reduced, especially during dangerous operations. These systems are also better suited to missions classified as “dull, dirty, or dangerous”. Here, a dull mission refers to long-duration combat or sorties; a mission in which a human would be exposed to potentially radiological material is termed a dirty mission; and a dangerous mission might be the disposal of explosive ordnance (Etzioni & Etzioni, 2017, pp. 72-73).

Since LAWS are governed by artificial intelligence and computer algorithms, the potential for human error decreases, barring errors in the programming of the robot itself. These machines also possess adaptability and precise tools for close combat, which reduces the need for larger firearms or explosives to conduct lethal tasks. This manoeuvrability allows them to use highly accurate attacking tools to land a finishing blow with greatly reduced risk of failure or of collateral damage in the form of civilian deaths or destruction of valuable infrastructure (Nasu & Korpela, 2022). In air battles, pilots experience significant exposure to gravitational forces and are quickly overcome by fatigue and exhaustion. Employing an autonomous weapon system or ‘robot pilot’ in aircraft would eliminate these physical setbacks. Furthermore, these robots could be programmed to undertake unpredictable actions, confusing and disorienting opposing parties. Unlike humans, who possess a powerful self-preservation instinct, these autonomous machines need not be programmed with such an instinct, enabling them to function without being hampered by it. Emotions such as fear or shock, which can cloud the human brain, pose no hindrance to machinery. LAWS also offer lower maintenance and upkeep costs in the long run, despite high initial research and development costs. Moreover, autonomous weapon systems reduce spending on personnel. The US Department of Defense stated that it had to spend $850,000 annually to equip and maintain a single soldier during the war in Afghanistan, whereas in 2013 a single small armed autonomous robot cost $230,000 (Etzioni & Etzioni, 2017, pp. 73-74).


Arguments Against the LAWS

The prospect of delegating the power to choose between the life and death of a human to a robotic being has been a recurring concern about autonomous weapons, manifested most clearly in the worry that killer robots would be able to choose their own targets for elimination (Etzioni & Etzioni, 2017, p. 75). One of the prime critiques of LAWS has been put forward by the highly regarded computer scientist and roboticist Noel Sharkey. He has been strongly averse to the idea of robots he helped build being used for warfare and lethal missions. Having researched publicly available military plans and roadmaps to understand the intended use cases for autonomous weaponry, Sharkey grew deeply sceptical of those plans. He suspected that political and military decision makers had not deeply investigated or researched the systems they wished to deploy, and as a result were strikingly unaware of the machinery's limitations, their understanding of Artificial Intelligence resembling science fiction and myth. The core of Sharkey's argument lies in the inability of the Artificial Intelligence within LAWS to differentiate between hostile combatants and innocent civilians, a task that is often difficult even for human beings, leading to severe violations of the Principle of Distinction, one of the most significant rules of armed conflict. He reiterates that bestowing on Artificial Intelligence the power to identify targets will likely lead to civilian casualties and further collateral damage. On these grounds, Sharkey continues to argue for a prohibition on weapon autonomy and advocates a legally binding obligation to retain human control over weapon systems (Sauer, 2021, p. 240).

Another significant criticism of the use of LAWS is the problem of accountability when autonomous weapon systems are deployed. The foremost proponent of this criticism has been the ethicist and moral philosopher Robert Sparrow, who framed his arguments in both legal and ethical terms. He drew upon the well-known Just War Theory, in particular the aspect of jus in bello, or international humanitarian law, one dimension of which holds that some person must be held responsible for civilian deaths. Sparrow focused on this aspect and argued that one of the most important elements of fighting a just war is that accountability rests with a person responsible for the deaths that occur (Etzioni & Etzioni, 2017, p. 75). Since the robots function autonomously, with no further human control after activation and operating for long periods of time, the locus of responsibility for any killing remains unclear. Because LAWS make it unfeasible to identify responsibility for killings and assign accountability, they do not fully adhere to the requirements of international humanitarian law, or jus in bello, and hence Sparrow argues that they should not be employed in war. The problem arises when AI-powered lethal machines reach their own conclusions, making it difficult to tell whether a bad decision resulted from errors in the programme or from the independent reasoning of the machine. Punishing the machine would be meaningless, but the commanding officer presiding over the operation, and by extension the actions of the machine, could not be held accountable either, since they had no control over the machine.

A human being's decision to engage a target with force showcases a transparent chain of accountability, clearly outlining the responsibilities and roles of the person administering the lethal blow as well as the commander who gave the order. With LAWS, no such clarity about accountability for casualties exists, and the party that should be blamed or held responsible remains unclear. On these grounds, Robert Sparrow rejected the deployment of autonomous weapon systems on battlefields (Sauer, 2021, p. 240).


Conclusion

The rise of Lethal Autonomous Weapon Systems presents a complicated and difficult junction of technology, ethics, and international security. While LAWS have the potential to improve operational efficiency and decrease danger to human personnel, their use raises substantial ethical, legal, and strategic concerns. Their lack of human judgement, contextual awareness, and responsibility endangers the principles of discrimination, proportionality, and human rights.

The foremost ethical concern with LAWS lies in their lack of human judgment and contextual understanding. Autonomous systems may struggle to navigate complex ethical decisions, assess the proportionality of force, or accurately discriminate between combatants and non-combatants. The potential for unintended and inappropriate actions, resulting in civilian casualties and violations of international humanitarian law, is a pressing issue that demands careful consideration. The possibility of unforeseen repercussions, misapplication, and the dehumanisation of conflict cannot be overstated. The lack of human monitoring and control over lethal actions calls into question the basic underpinnings of ethical decision-making and the moral duties that come with using force.

Furthermore, accountability is a significant challenge. When autonomous systems make mistakes or inflict harm, assigning blame becomes difficult. The lack of clear lines of responsibility raises concerns about the capacity to hold persons or institutions accountable for actions taken by these systems, especially when unexpected consequences or potential violations of international law occur. In addition, the rising dependence on LAWS has the potential to dehumanise combat. Removing human troops from the direct repercussions of war may reduce moral and psychological barriers to violence, potentially leading to conflict escalation and disregard for the worth of human life.
The likelihood of an arms race and the proliferation of autonomous weapon systems exacerbates these concerns, increasing the possibility of abuse, unintentional escalation, and human rights violations. Continuous study and development are required to address the limitations and issues associated with LAWS. The emphasis should be on building solid safeguards, incorporating human judgement into decision-making, improving understanding of the ethical issues, and evaluating the long-term ramifications of weapon systems' increasing autonomy.

 

References

  1. Congressional Research Services. (2023, May 15). Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems. Retrieved June 11, 2023, from Congressional Research Services: https://crsreports.congress.gov/product/pdf/IF/IF11150
  2. Etzioni, A., & Etzioni, O. (2017, May-June). Pros and Cons of Autonomous Weapon Systems. Military Review, 72-81.
  3. International Committee of the Red Cross. (2022, July 26). What you need to know about autonomous weapons. Retrieved from International Committee of the Red Cross: https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons
  4. Nasu, H., & Korpela, C. (2022, June 21). Stop the “Stop the Killer Robot” Debate: Why We Need Artificial Intelligence in Future Battlefields. Retrieved from Council on Foreign Relations: https://www.cfr.org/blog/stop-stop-killer-robot-debate-why-we-need-artificial-intelligence-future-battlefields
  5. roboticsBiz. (2022, October 24). Pros And Cons Of Autonomous Weapons Systems. Retrieved from roboticsBiz: https://roboticsbiz.com/pros-and-cons-of-autonomous-weapons-systems/
  6. Sauer, F. (2021). Lethal autonomous weapon systems. The Routledge Social Science Handbook of AI, 237-250.


Pic Courtesy: Product School at unsplash.com

(The views expressed are those of the author and do not represent views of CESCUBE.)