Balancing Innovation and Responsibility: Legal, Ethical, and Strategic Implications of Autonomous Weapons
The rapid evolution of Autonomous Weapon Systems (AWS) and Artificial Intelligence (AI) has transformed the character of modern warfare while raising profound legal, ethical, and strategic questions. From early semi-autonomous weapons to fully autonomous systems capable of independent targeting, AWS have redefined operational precision, decision-making, and the global arms race. Yet their deployment challenges the accountability structures of International Humanitarian Law, blurs the distinction between combatants and civilians, and intensifies debates on moral responsibility when machines make life-or-death decisions. While proponents argue that AWS can reduce human error and collateral damage, critics highlight the risks of dehumanization, automation bias, and the erosion of ethical oversight. Strategically, AWS accelerate the security dilemma, drive procurement races, and complicate deterrence, particularly in volatile regions. This article contends that balancing innovation with responsibility requires legal innovation, ethical governance frameworks, and multilateral cooperation to ensure transparency, accountability, and restraint. Ultimately, the trajectory of AI-enabled warfare will hinge on humanity’s ability to embed moral agency within technological advancement and to foster cooperative regimes that prevent destabilizing escalation.
The roots of the Autonomous Weapon System can be traced back to World War II, when weapons first operated with limited human intervention. Since then, defense technology has progressively incorporated automation, robotics, and artificial intelligence, with advancement accelerating in the late 20th century and reaching its zenith in the post-2000 era.[1] Roughly one and a half decades ago, Autonomous Weapon Systems such as loitering munitions and unmanned combat vehicles were operationally deployed, marking a key milestone in the modernization of defense systems. This consistent evolution of the Autonomous Weapon System (AWS) and AI has reimagined strategic culture and laid the foundation of modern warfare. These technologies have surpassed human capabilities in decision-making, precision, and operational tactics. It is often asserted that greater achievements entail greater responsibilities, and these advancements have challenged existing legal systems, ethical frameworks, and strategic relations among nations. This article therefore examines the development of AWS and AI in the defense sector and analyzes their legal, ethical, and strategic implications in the era of strategic autonomy.
Introduction: Definition, Utilization, and Types of AWS and AI in the Defense Sector
The United Nations Institute for Disarmament Research (UNIDIR) defines AWS as “a weapons system that, once activated, can select and engage targets without further human intervention”;[2] however, the degree of human involvement varies. AWS are diverse and kaleidoscopic in nature: a semi-autonomous system requires human intervention to confirm an action, while the most debated configuration, the fully autonomous system, can, after an initial command, act independently to neutralize a target.
The advent of Artificial Intelligence brought machine learning, real-time sensing, and refined algorithms, enabling AWS to act more efficiently and to strike targets with greater accuracy. For instance, Israel’s AWS include the Harpy and Harop loitering munitions, which are notable for targeting radar installations using an autonomously operated long-endurance system.[3] Similarly, the US Navy’s Sea Hunter, an autonomous unmanned surface vessel, is designed for anti-submarine missions, with an endurance of up to 90 days without refueling that makes it well suited to surveillance and combat support.[4]
However, the configuration of AWS has always been contested, because systems vary along several dimensions: level of autonomy (human-in-the-loop, human-on-the-loop, or human-out-of-the-loop), operational domain (air, sea, and land), and mission purpose (strike, surveillance, and defense). These characteristics carry significant implications for their strategic utility, legal scrutiny, and ethical acceptability.[5]
International Developments in AWS and AI in the Defense Sector
In pursuit of modern technology, major global powers are competing for strategic dominance. The United States, for example, launched the Loyal Wingman drone, which enhances manned combat capacity through autonomous wingman support,[6] alongside maritime innovations such as the Sea Hunter vessel to maintain maritime security. In the ongoing Russia-Ukraine war, Russian-made AI-enabled loitering munitions and swarming drones have been deployed on the battlefield despite ethical controversies.[7] China’s AI innovation, meanwhile, has produced UAVs such as the CH-7 and cyber-offensive capabilities, boosting its regional dominance and ultimately challenging US influence.[8] South Korea, for its part, has developed the SGR-A1 autonomous sentry robot for defense along the unsettled Korean Demilitarized Zone.[9]
Despite these technological advancements in the defense sector, international governance remains fractured. The United Nations Group of Governmental Experts (UN GGE) has failed to reach consensus on the regulation of AWS owing to divergent security perspectives among the global powers.[10] These divergent views make the rules-based order more fragile: they not only heighten the risk of proliferation but also erode the prospect of harmonized security measures to administer autonomous warfare carefully.
Legal Implications: Bridging the Accountability and Compliance Gap
Because an Autonomous Weapon System (AWS) can operate at three levels of autonomy, it opens a major accountability gap, commonly characterized by the difficulty of assigning legal liability for autonomous decisions. According to Anna Rosalie Greipl, AI-led decision-making undermines the traditional role of human intent within the International Humanitarian Law (IHL) framework; in particular, it impedes compliance with the principle of distinction between combatants and civilians, which is meant to prevent collateral damage.[11]
The absence of human control over AWS can lead to unlawful targeting incidents and may dilute IHL’s accountability mechanisms. Meanwhile, scholars argue that established legal principles, such as command responsibility, can be applied when AWS are used on the battlefield or during a mission. This would reintroduce a measure of human involvement while upholding accountability and full compliance with IHL.[12] However, such measures face practical hurdles due to the lack of an enforcement mechanism, creating a gray zone in their application.
Nevertheless, global players should open discussions on an enforcement mechanism and the implementation of international rules, aiming at a minimum to restrict deployment in populated regions while ensuring compliance monitoring. In the meantime, the accountability gap should be addressed through legal innovation: integrating AI interpretability into AWS mechanisms to ensure transparency, auditability, and legal scrutiny for judicial redress.[13]
Ethical Implications: Balancing Dehumanization Concerns with National Security
Deployment of AWS and their integration with AI raise many ethical dilemmas, most profoundly the tension between human security and national security. Some may say the two are intertwined, but it is necessary to understand that humans are what make a nation an entity; securing human security is therefore the utmost duty of the nation and its government. Precise target elimination should thus be aligned with accountability and moral responsibility. A deeper concern arises when human operators are replaced by opaque algorithms: machines end up taking control over human life, which raises both ethical objections and the problem of automation bias.[14]
Conversely, there is growing recognition of AWS’s potential to improve ethical conduct by enhancing precise target elimination and reducing unintended casualties, which are common in human-operated systems that are subject to error and dependent on the operator’s mental state.[15] Machines guided by ethical conduct should therefore be paired with human involvement, so that the machine acts as a force multiplier under human-led moral agency rather than as a substitute.
Strategic Implications: Managing the Dynamics of Warfare and Escalation Risk
Strategically, the rapid growth and development of AI-empowered AWS will drive heavy procurement of these weapon systems. The resulting arms race among nations can be understood through the theory of the security dilemma. According to the American scholar Robert Jervis, in an anarchical international system nations pursue defense cooperation for their own security, which ultimately magnifies the procurement process and may spiral into hostility.[16] In South Asia, states’ vulnerability and limited autonomous capabilities complicate existing nuclear deterrence and exacerbate the possibility of conflict.
However, the speed and unpredictability of AWS may prompt nations to activate communication channels and confidence-building measures to avert catastrophic events.[17] A nation may likewise pursue strategic autonomy rather than relying on a single partner. Moreover, joint multinational military exercises aimed at AWS interoperability and coordination at the high-command level can foster trust and reduce escalation risk. In recent conflicts such as Russia-Ukraine and Azerbaijan-Armenia, AWS have already played a crucial role in warfare; the future trajectory of warfare is thus likely to be protracted and lethal.
Conclusion
The emergence of AWS and AI in the defense sector has revolutionized future warfare with legal, ethical, and strategic complexity. Given the automation of armaments and the distorted accountability it produces, an eclectic blend of ethical human intervention and automation may offer a “controlled autonomy” solution. Ethical governance cannot remain rigid; it must evolve with changing circumstances and engage diverse stakeholders to embed a moral compass throughout the AWS lifecycle. Humanity must be given the utmost priority, so that innovation is perceived as a deterrence mechanism rather than a coercive instrument against humanity. Strategically, AI and human judgment should be bound together so that international peace and cooperation can take hold before escalation occurs. Binding multilateral treaties are a prerequisite for regulating proliferation and securing export controls, fostering global confidence and transparency among nations.
Despite advances in Artificial Intelligence, it is human decisions that determine the course of action, whether peacebuilding or confrontation through leveraged technological advantage. Under such uncertain circumstances, government, academia, and civil society bear an unequivocal responsibility to ensure transparency and accountability. Ultimately, the remote control lies with us: an ethically governed system will decide whether the future trajectory is warfare or peace and stability among nations.
References
[1] International Committee of the Red Cross (ICRC), “Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects,” 2014, https://www.icrc.org/en/document/report-icrc-meeting-autonomous-weapon-systems-26-28-march-2014.
[2] Alice Spazian, Arthur Holland, and Alisha Anand, “On Lethal Autonomous Weapons: Mapping Our Research to the Discussions of the GGE on LAWS,” UNIDIR, 2021, https://unidir.org/publication/unidir-on-lethal-autonomous-weapons/.
[3] Israel Aerospace Industries (IAI), “Harop,” November 23, 2023, accessed September 3, 2025, https://www.iai.co.il/p/harop.
[4] Naval Sea Systems Command (NAVSEA), “The Force Behind the Fleet,” June 14, 2024, accessed September 4, 2025, https://www.navsea.navy.mil/Media/News/Article/3808190/us-navy-showcases-sea-hunter-unmanned-surface-vehicle-at-la-fleet-week.
[5] International Committee of the Red Cross (ICRC), Position on Autonomous Weapon System, 2021, https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems.
[6] Anil Khosla, “Loyal Wingman Concept: Redefining Air Combat,” IIRF Yearbook 2025 (Indian Institutional Ranking Framework, 2025), https://55nda.com/blogs/anil-khosla/tag/mumt/.
[7] Gregory C. Allen, interview by Kateryna Bondar, “The Russia-Ukraine Drone War,” May 28, 2025, https://www.csis.org/analysis/russia-ukraine-drone-war-innovation-frontlines-and-beyond.
[8] Ritu Sharma, “Mother of All Drones – China Unveils Gigantic Drone Capable of Carrying Swarm of UAVs in Its Belly,” Eurasian Times, November 19, 2024, accessed September 4, 2025, https://www.eurasiantimes.com/mother-of-all-drones-china-unveil.
[9] Alexander Velez-Green, “The South Korean Sentry—A ‘Killer Robot’ to Prevent War,” The Foreign Policy Essay, CNAS, 2015, https://www.cnas.org/publications/commentary/the-foreign-policy-essay-the-south-korean-sentry-a-killer-robot-to-prevent-war.
[10] Alexander Blanchard, Vincent Boulanin, Laura Bruun, and Netta Goussac, Dilemmas in the Policy Debate on Autonomous Weapon Systems (Stockholm: SIPRI, 2025), https://www.sipri.org/commentary/topical-backgrounder/2025/dilemmas-policy-debate-autonomous-weapon-systems.
[11] Anna Rosalie Greipl, Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts: Current Developments and Potential Implications (Geneva: Geneva Academy of International Humanitarian Law and Human Rights, 2024), 12, https://www.geneva-academy.ch/joomlatools-files/docman-files/Artificial%20Intelligence%20And%20Related%20Technologies%20In%20Military%20Decision-Making.pdf.
[12] Yordan Gunawan, Muhamad Haris Aulawi, Rizaldy Anggriawan, and Tri Anggoro Putro, “Command Responsibility of Autonomous Weapons under International Humanitarian Law,” Defense AI and Arms Control Network, 2022, https://www.researchgate.net/publication/365505097_Command_responsibility_of_autonomousweapons_under_international_humanitarian_law.
[13] Marco Marsili, “Lethal Autonomous Weapon Systems: Ethical Dilemmas and Legal Compliance in the Era of Military Disruptive Technologies,” International Journal of Robotics and Automation Technology, 2024, https://zealpress.com/jms/index.php/ijrat/article/view/599.
[14] Peter Asaro, “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making,” International Review of the Red Cross, 2012, https://international-review.icrc.org/articles/banning-autonomous-weapon-systems-human-rights-automation-and-dehumanization-lethal.
[15] Mengkorn Pum, “Autonomous Weapons Systems: Ethical Concerns and International Regulation in the Use of AI in Military Applications,” 2024, https://www.researchgate.net/publication/387383028_Autonomous_Weapons_Systems_Ethical_Concerns_and_International_Regulation_in_the_Use_of_AI_in_Military_Applications.
[16] Robert Jervis, “Cooperation Under the Security Dilemma,” World Politics 30 (1978): 167–214, https://www.sfu.ca/~kawasaki/Jervis%20Cooperation.pdf.
[17] Ioana Puscas, “Confidence-Building Measures for Artificial Intelligence,” UNIDIR, 2022, https://unidir.org/files/2022-12/Confidence-Building_Final.pdf.
(The views expressed are those of the author and do not represent the views of CESCUBE.)