The Need for Ethics in the Use of Artificial Intelligence in Warfare

Our understanding of artificial intelligence in the 21st century is largely composed of the information we acquire from popular media. There is a fundamental problem with that perspective, because most popular media depicts, and ends with, a dystopian reality. That is troubling when we see that the majority of Western countries are investing in the inclusion of AI in military weapons. Military development and artificial intelligence form a very controversial pairing, as we understand the toll every military weapon takes on everyone. Every active conflict and war has shown that the use of any weapon can exact a devastating toll on human lives. Part of that loss stems from human error, primarily the error of the human mind operating the weapons. Artificial intelligence is presented as the antithesis of the irrational miscalculations made by human beings. But there is a problem here as well: while the irrational element in human beings leads to loss of life, AI also eliminates the sentimental element that moves human beings to save the lives of other humans during an ongoing conflict. Ethics in the military usage of AI is primarily a discussion about how we are going to incorporate AI into the functioning of our lives, and especially into the military, which is responsible for protection from any external threat that can endanger our rights and our natural ability to live peacefully. This article discusses the need for ethics in the operation of AI and examines the mechanisms involved in its adequate functioning.

“What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not”- Lieutenant General Jack Shanahan, Director of the Joint Artificial Intelligence Center (JAIC), US Department of Defense.

“Looking at ethical codes, some legal experts argue that ethics themselves are too subjective to govern the use of AI”- MIT Technology Review.

“Death by algorithm means people are treated as interchangeable entities… A machine, bloodless and without morality or mortality, cannot fathom the significance of the killing or maiming of a human being”- UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions.

Introduction: -

The conversation around ethical military artificial intelligence (AI) initiatives is marked in equal measure by anticipation of its future advantages and by suspicion. AI has been depicted in various ways regarding its instrumentalization in popular media and in technological developments with far-reaching effects at multilateral levels. AI is often hyped as a business saviour and derided as an impediment to the future. The question of AI ethics also arises for its military uses. The reality is that AI is already a growing element of military strategy for many countries, while the European Union and countries such as China have been engaging with the issue of AI ethics for some time[1]. This rapid acknowledgement and adoption of AI in all aspects of military and defense development raises the question of how militaries will turn towards big tech firms. The ethical guidelines for AI are quite technical, as this technological development involves a dual interface of human and technology. This conundrum brings forth the strategic positioning of AI and how it shapes the future of battlefields. Deployment of AI in automated battlefield diagnostics, defensive cybersecurity and hardware maintenance assistance leads current AI-based developments[2]. These instruments bring about not only the surveillance of strategic information; what value judgement is made of the information collected, and how that information is acted upon, will critically determine the mission success of AI in military deployment. The ethics of artificial intelligence depends on how collective interest is calculated by artificial intelligence and whether that collective interest is measured on deontological or utilitarian foundations[3]. The question of the need for ethics in artificial intelligence thus becomes the question of which ethical viewpoint is the correct one by which to judge the metrics of ethics in artificial intelligence. This question is also essential because we cannot settle for a polarized view of artificial intelligence, even though much of that view has been built on suspicion. As per the Future of Life Institute, AI will continue to hold economic, social, medical, security and environmental promise. This promise can translate into increasingly democratized services, faster design and production times, balanced energy usage, real-time environmental monitoring of pollution and quality, and enhanced cybersecurity. Although the narrative prevails that AI can adapt to military change, the ethical challenges of building capable intelligent systems rest not only on such ethical restrictions; these systems of moral code can also allow AI to venture into multilateral dynamics. The question is why this need exists and how it is intended to be actualized (to use Hegelian terminology) in the status quo.

Ethical Dilemmas: -

Human intelligence generally follows a sequence known as the perception-cognition-action information processing loop: individuals perceive something in the world around them, think about what to do, and then, once they have weighed up the options, decide to act upon it[4]. AI is programmed to do something similar: a computer senses the world around it and then processes the incoming information through optimization and verification algorithms, with a choice of action made in a fashion resembling the human mode of thinking and processing information to make moral choices[5]. The 1979 Three Mile Island nuclear power incident and the 1986 Challenger space shuttle disaster illustrate how complex automated systems can fail; incorporating machine learning into weapons systems, or into the development of weapons systems, can bring about similarly unintended attacks and incidents[6]. Mitigation strategies and technological requirements for AI weapons systems must anticipate foreseeable unintentional escalation, which has clear analogues in the context of military development[7]. An AI-based military framework focusing on ethical, safety and legal considerations can assist the rapid adoption of mature technologies across a wide scope of usage, with low-risk applications adopted first and greater precaution applied to less mature applications that might lead to more significant adverse consequences. Ethical use of an AI-based weapons system should create a situation where the AI does not go out of control and where human intervention remains possible. The question then stands: on what ethical grounds should this be based?
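
To make the loop concrete, here is a minimal sketch of the perception-cognition-action cycle with a human-in-the-loop gate on lethal action, the standard discussed later in this article. It is purely illustrative: every name in it (Action, perceive, cognize, act, human_approves) is a hypothetical placeholder invented for this example, not any fielded system’s API.

```python
# Illustrative perception-cognition-action loop with a human-in-the-loop gate.
# All names are hypothetical placeholders, not a real military or vendor API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    confidence: float  # model's confidence that this action is appropriate
    is_lethal: bool    # lethal actions are routed through a human below

def perceive(raw_feed):
    """Perception: filter raw sensor data into usable observations."""
    return [obs for obs in raw_feed if obs is not None]

def cognize(observations):
    """Cognition: rank candidate actions and pick the highest-confidence one.
    In a real system the candidates would be derived from the observations;
    here they are hard-coded for illustration."""
    candidates = [
        Action("track target", confidence=0.9, is_lethal=False),
        Action("engage target", confidence=0.6, is_lethal=True),
    ]
    return max(candidates, key=lambda a: a.confidence)

def act(action, human_approves):
    """Action: execute, but a lethal choice cannot proceed without an
    affirmative human decision; the machine never pulls the trigger alone."""
    if action.is_lethal and not human_approves(action):
        return "aborted: human operator withheld approval"
    return f"executed: {action.description}"

observations = perceive(["radar blip", None, "thermal signature"])
chosen = cognize(observations)
print(act(chosen, human_approves=lambda a: False))  # lethal branch stays blocked
```

The structural point is that the machine may sense and rank options, but the lethal branch cannot execute without an affirmative human decision, which is where the ethical requirement of human intervention attaches.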

Detailed research into the workings of AI, and efforts to ensure proper implementation of AI ethics principles, focus on three arguments for developing AI ethics. The school of deontology follows the establishment of ‘moral law’[8]: obedience to a strict set of moral rules that dictate the righteousness inherent in an action, without concern for the consequences of that action. The morals and duties of deontologists are bound to the culture and moral relativism of the country. For example, Project Maven, the programme in which Google participated, takes a deontological approach[9]. Deontologists believe that an established set of rules and ethical regulations will guide the AI in a war and ensure the requirements of an ethical establishment[10]. The principal argument is that a particular authority or state will be responsible for a pre-emptive understanding of ethics. The second ethical position is provided by utilitarianism. Where a deontologist takes a pre-emptive ethical stance, utilitarianism considers the consequences of actions more than the ethics of the action itself. The concern is creating the greatest balance of good over evil. A consequentialist approach seeks a greater number of humans living in safety rather than the maintenance of the current system[11]. The actions decide how the system is to be regulated, rather than the system itself. A utilitarian would evaluate each option, in this case to use or not to use AI, and judge each outcome by how much well-being it contains; the precise targeting system of a fully autonomous AI could be used to exercise lethal force only in very specific instances, with less collateral damage than a human operator. The third viewpoint is presented by contract theory, as popularized by Thomas Hobbes, who postulates that no person is naturally so strong that they could be free from fear of another person, and no one so weak that they could not present a threat. Contract theorists approach whether battlefield AI is ethical from two directions[12]. Firstly, nations, as part of their contract with citizens, should do their best to defend their security and rights. Secondly, contract theorists would argue that AI policy must be aligned with the general desire of the populace. A focus on responsible and equitable design helps to ensure that AI is developed in a way that aligns with the current will of the populace, and if the populace feels that it is no longer being used in such a manner, the AI’s governability will allow it to be shut down. The will of the people regulates and decides the ethical outcome and the framework by which the AI should work, and in the case of military developments the masses will have an active contribution. The ethical dilemma that remains is how the populace will contribute towards these ethical frameworks.
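
The contrast between the three schools can be caricatured as three different decision rules applied to the same set of candidate actions. The sketch below is a deliberately simplified illustration, not a claim that any of these philosophies reduces to code; the action names, well-being scores and public-support figures are invented for the example.

```python
# Caricatured decision rules for the three schools discussed above.
# Purely illustrative; the data and thresholds are invented.

def deontological_choice(actions, forbidden):
    """Deontology: strike out any action that violates a pre-set moral law,
    regardless of its consequences."""
    return [a for a in actions if a["name"] not in forbidden]

def utilitarian_choice(actions):
    """Utilitarianism: pick the action whose outcome contains the most
    aggregate well-being (here collapsed into a single score)."""
    return max(actions, key=lambda a: a["well_being"])

def contractarian_choice(actions, public_support, threshold=0.5):
    """Contract theory: permit only actions the populace currently endorses;
    withdrawal of support acts as a shutdown switch."""
    return [a for a in actions if public_support.get(a["name"], 0.0) > threshold]

actions = [
    {"name": "strike", "well_being": -10},
    {"name": "jam_communications", "well_being": 2},
    {"name": "hold_fire", "well_being": 5},
]
print(deontological_choice(actions, forbidden={"strike"}))  # rule-based veto
print(utilitarian_choice(actions))                          # best outcome wins
print(contractarian_choice(actions, public_support={"hold_fire": 0.8}))
```

None of these rules answers the dilemma above; each simply relocates it, to whoever writes the moral law, defines the well-being score, or measures public support.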

The Office of the Under Secretary of Defense for Policy and the Office of Net Assessment of the United States of America have signalled that investing in the understanding of new approaches to competition and deterrence in an age of AI is now more essential than ever[13]. AI’s competitive and escalatory dynamics, its proliferation and its strategic implications can draw nation-states into a major competition over the military use of AI, as is being seen with China, the United States of America (USA) and Russia. In terms of existing legislation, the United Nations Convention on Certain Conventional Weapons (UN CCW), which came into force in 1983, is intended to prohibit or restrict the use of weapons that are “excessively injurious or have indiscriminate effects”[14]. The Universal Declaration of Human Rights (UDHR) works as the mother doctrine for the UN CCW. The UDHR begins by stating that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace.” As per this argument, only humans, who can make moral judgements and possess inherent dignity, are morally entitled to take the lives of others; non-human systems do not have the necessary moral qualities. In 2014, the UN CCW held the first multilateral meeting focused on challenges stemming from lethal autonomous weapons systems (LAWS), followed by further meetings in 2015 and 2016[15]. In 2017 and 2018, the UN CCW convened in a more formalized structure known as a Group of Governmental Experts (GGE)[16]. The Campaign to Stop Killer Robots claims that 26 countries have supported a ban on LAWS under the CCW process. The Law of Armed Conflict (LOAC), also known as International Humanitarian Law (IHL), is intended to regulate the conduct of hostilities and minimize humanitarian harm to civilians[17]. LOAC is codified in formal treaties such as the four Geneva Conventions, the Additional Protocols and weapon-specific treaties such as the Ottawa landmine ban. It includes the requirement of a legal review of weapons to ensure LOAC compliance, which is interpreted under the Martens Clause, a principle in Additional Protocol I that refers to protections that extend beyond codified law and are based on the “principles of humanity and the dictates of public conscience”[18]. The latest EU policy document suggests that “trustworthy AI” should be lawful, ethical and technically robust, and spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being and accountability[19].

Ideal ethical principles for AI: -

“Human beings should exercise appropriate levels of judgement and remain responsible for the development, use and outcome of AI systems”- the Defense Innovation Board of the USA has made this standard clear for the conversation on the military usage of AI[20]. The primary target for every state and organization is that, effectively, a human should pull the trigger, even if the AI identifies the target and aims the gun. As the initial approach to ethical regulation will be based on the organization or the state itself, the organization researching issues in technology and defense needs to be imbued with those value systems. As per the Electronic Frontier Foundation (EFF) white paper on the ethical development of artificial intelligence, military development of AI needs to be carefully curated and thoughtfully approached. Deployment of AI, and the ethical side of its coding, is very subjective and governed by different schools of thought. The contract-theory approach outlined above, which involves the populace, supporting civilian leadership of AI research, supporting international and institutional agreements on predictability and robustness, and encouraging open and engaged research between organizations and states, can open avenues by which AI could be shut down under regulations that society and different states consider ethical. Human rights organizations consistently argue that the use of AI weapons, such as currently deployed armed drones, will lead to an increase in civilian deaths and unlawful killings. If the current usage of UAVs is already leading to so many unlawful deaths, then the likelihood of AI-conducted killings is also high. And if AI weapons set a precedent in the market, AI will lead to an international arms race. Ethical requirements will demand research into the coding of AI, and awareness of AI among software acquirers is an important element for contextualizing AI within warfare capabilities. Peace and global stability rest on fine lines, and if militaries are deployed with unchecked AI, then the social contract that vests trust in the state and the military for security will be diluted[21]. If ethics are ignored, then the sanctity of securing humanity is in jeopardy. Ethical development will remove arbitrariness from the deployment of weapons mechanisms. The need for AI involvement cannot be ignored, nor can AI simply be banned; that would halt substantial development in security relations on the assumption that it will always turn out negative. To solve this ethical conundrum, organizations need to follow certain standards in developing this framework. These standards involve being responsible, exercising accurate and broad levels of judgement; equitable, avoiding unintended bias; traceable, with an appropriate understanding of the engineering behind the development process, operational methods and transparency; and reliable, with well-defined domains of usage and the maintenance of safety and security. The language of AI deployment itself needs to be neutral as to whether the military deployment in question is a human, an automated non-AI system or an AI controlling the command center. The principle of ethics in AI is intended to create a system free of bias and of uncontrollable military usage by any state or organization.

Notes  

[1] P. Maxwell (20th April 2020). “Artificial Intelligence is the Future of Warfare (Just Not in the Way You Think)”. Modern War Institute at West Point.

[2] Ibid.

[3] Ibid.

[4] M. MacCarthy (15th November 2019). “Assessing ethical AI principles in defense”. Brookings Institution.

[5] Ibid.

[6] Wasilow, Sherry & Thorpe, Joelle (2019). “Artificial Intelligence, Robotics, Ethics, and the Military: A Canadian Perspective”. AI Magazine, 40, 37-48.

[7] R. Neagle (3rd March 2020). “AI and ethics of killer robots”. Information Age.

[8] “Ethics of Artificial Intelligence and Robotics”. Stanford Encyclopedia of Philosophy.

[9] R. Reeves (20th January 2020). “The Ethical Upside to Artificial Intelligence”. War on the Rocks.

[10] “Ethics of Artificial Intelligence and Robotics”. Stanford Encyclopedia of Philosophy.

[11] Ibid.

[12] Ibid.

[13] M.L. Cummings (January 2017). “Artificial Intelligence and the Future of Warfare”. Chatham House, The Royal Institute of International Affairs, International Security Department and US and Americas Programme.

[14] Ibid.

[15] Ibid.

[16] Ibid.

[17] Coker, Christopher (2019). “Artificial Intelligence and the Future of War”. Scandinavian Journal of Military Studies, 2, 55-60.

[18] Wasilow, Sherry & Thorpe, Joelle. (2019), ibid.

[19] L. Clarke (28th January 2021). “Can ethical commitments contain the risks of military AI?”. Tech Monitor.

[20] Z. Fryer-Biggs (17th February 2021). “Can Computer Algorithms Learn to Fight Wars Ethically?”. Washington Post.

[21] J. Dettmer (7th June 2021). “Possible First Use of AI-Armed Drones Triggers Alarm Bells”. VOA News.


Pic courtesy: Alex Knight at unsplash.com

(The views expressed are those of the author and do not represent views of CESCUBE.)