“Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems
In paragraph two of its resolution 78/241 on lethal autonomous weapons systems (LAWS), the U.N. General Assembly requested the Secretary-General to solicit the views of Member States and Observer States on such systems. Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.
In implementation of this directive, on February 1, 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to paragraph two of resolution 78/241 and inviting their formal input. Corresponding communications—notes verbales and letters—were also disseminated to the entities identified in paragraph three of the resolution, requesting their contributions on the matter. For the first time, this Article analyzes the positions of Member and Observer States on LAWS submitted to the Secretary-General in 2024, pursuant to U.N. General Assembly Resolution 78/241 calling for the views of Member States and Observer States on lethal autonomous weapons systems, inter alia, “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force.” The Article focuses on Member States’ positions in relation to human-centric approaches to LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of algorithmic bias. The last several years of data analysis have shown that data bias and algorithmic bias can produce unintended consequences that pose the risk of unlawful discrimination. From housing to finance, mortgage lending to creditworthiness, and college applications to job recruitment, the use of artificial intelligence (AI) has produced unintended consequences that fall most heavily on women and minorities. Relying on potentially biased inputs, the “black box” of a machine can magnify these biases in its outputs or decisions. Furthermore, machine learning can even enable algorithms to learn to discriminate.
AI mistakes are often patterned, reflecting patterns in training data, algorithms, or the AI’s fundamental design. The Article asks whether Yale Law School professor Oona Hathaway’s recent arguments on individual and state responsibility for the patterns of “Mistakes” in War may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human input. Who, then, will be responsible for the “mistakes” in war?
Although much has been written about algorithmic bias, an “algorithmic divide” can also create an AI-driven weapons asymmetry between nation states, depending on who has access to AI. In the final analysis, the Article argues that the transformative potential of AI must be harnessed not in conflict but in conflict resolution.
In 2017 testimony before the U.S. Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, “because we take our values to war […] I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life[.]” The laws of war are rapidly approaching a critical crossroads in war’s relationship with technology.
I. Introduction
Artificial intelligence (AI) is set to transform military operations across various domains, from autonomous weapon systems and decision-support tools to cyber warfare capabilities. Advances in Lethal Autonomous Weapons Systems (LAWS) are driven by progress in AI, robotics, sensor technologies, and real-time data processing, which enables systems to independently identify, track, and engage targets.
These advancements promise greater precision, reduced human casualties, and improved strategic decision-making.4 However, they also raise critical questions about the future of warfare and the development of LAWS, whose potential to violate international humanitarian law in practice has prompted a series of moral and legal concerns. As the pace of technological innovation accelerates, it will demand more rapid and responsive legal and policy frameworks to address the disruptive consequences of these emerging tools.5 Some argue that AI’s ability to process and interpret complex datasets offers the potential for more precise targeting.6 In certain scenarios, AI systems—particularly some autonomous weapons—can distinguish between military and civilian targets more effectively than humans, thereby reducing the risk of unintended civilian casualties and enhancing compliance with the principle of distinction under international humanitarian law (IHL)—the law that governs the conduct of armed conflict.7 Additionally, AI can play a crucial role in minimizing civilian harm through improved precautionary measures.8 By generating real-time maps of civilian infrastructure and conducting continuous risk assessments, AI enables military forces to take all feasible steps to prevent or reduce incidental civilian harm. This application of AI strengthens and reinforces the precautionary obligations mandated by international law.
At the same time, the use of AI in warfare presents significant challenges to IHL.9 This Article seeks to critically analyze the official statements submitted by U.N. Member and Observer States and compiled in the U.N. Secretary-General’s 2024 report on lethal autonomous weapons systems. The report provides a summary of Member State positions pursuant to U.N. General Assembly (UNGA) Resolution 78/241, including definitions, challenges, and legal concerns, as well as proposed next steps and the Secretary-General’s concluding observations.10
The analysis draws on an original study analyzing official State submissions compiled in the Secretary-General’s 2024 report.11 Using a structured qualitative coding matrix,12 the study systematically evaluates each State’s position across seven key categories: (1) human control; (2) assertion of the need for a new legally binding instrument; (3) support for legal regulations or governance measures short of a treaty; (4) calls for new prohibitions; (5) explicit references to ethical concerns; (6) preferences on the appropriate forum for continued discussion; and (7) mentions of bias, including algorithmic, racial, or gender-based bias. Each submission is coded according to standardized criteria designed to identify patterns of convergence and divergence in different state approaches to the legal, ethical, and regulatory challenges posed by autonomous weapons technologies. This matrix provides a comparative snapshot of global positions on LAWS’ governance as of mid-2024.
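To make the coding scheme concrete, the sketch below shows one way such a matrix might be represented and tallied in code. It is a minimal illustration only: the field names, sample entries, and helper function are hypothetical and do not reproduce the study’s actual dataset or tooling.

```python
# Illustrative sketch of a qualitative coding matrix for State submissions.
# All field names, category labels, and sample entries are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubmissionCoding:
    state: str
    human_control: bool              # (1) emphasizes meaningful human control
    new_binding_instrument: str      # (2) "yes", "conditional", or "no"
    supports_regulation: bool        # (3) measures or governance short of a treaty
    prohibitions: str                # (4) "categorical", "tiered", "none", or "silent"
    ethical_concerns: bool           # (5) explicit ethical references (e.g., Martens Clause)
    preferred_forum: Optional[str]   # (6) e.g., "CCW", "CCW-plus", "UNGA/OHCHR"
    mentions_bias: bool              # (7) algorithmic, racial, or gender-based bias

def share(codings: list[SubmissionCoding], predicate) -> float:
    """Percentage of coded submissions satisfying a given criterion."""
    return 100 * sum(predicate(c) for c in codings) / len(codings)

# Two hypothetical codings and one convergence statistic.
sample = [
    SubmissionCoding("State A", True, "yes", True, "tiered", True, "CCW-plus", False),
    SubmissionCoding("State B", True, "no", True, "none", True, "CCW", False),
]
pct_human_control = share(sample, lambda c: c.human_control)
print(f"{pct_human_control:.0f}% of sample submissions emphasize human control")
```

Aggregating such records, coded submission by submission against standardized criteria, is what yields the convergence and divergence statistics reported below.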
The findings of this research reveal both emerging normative convergence and persistent divergence among states regarding the governance of LAWS. An overwhelming majority—89% of State submissions—emphasize the imperative of maintaining meaningful human control over the use of force, reflecting widespread apprehension about delegating life-and-death decisions to autonomous technologies.13 A similarly high proportion (89%) endorse the development of legal or regulatory frameworks, including risk mitigation strategies, accountability mechanisms, and technical safeguards, which underscores a strong collective inclination toward enhanced governance structures.
Notably, 67% of states advocate for the adoption of a new legally binding international instrument, with an additional 11% expressing conditional support pending broader consensus. This indicates growing momentum towards the formalization of international legal norms in the domain of LAWS. However, positions on outright prohibitions remain markedly fragmented. Only 24% of states support comprehensive bans,14 whereas a significant majority, 64%, prefer a differentiated approach that combines narrowly tailored prohibitions with regulatory measures for systems that adhere to existing IHL. In contrast, only a small minority, 7%, categorically oppose any form of prohibition, which highlights the complexity and sensitivity of consensus-building on this issue.
Ethical considerations also feature prominently, with 89% of submissions referencing normative principles such as those encapsulated in the Martens Clause,15 which indicates a shared recognition of the ethical stakes involved in the deployment of LAWS. Nevertheless, attention to algorithmic and structural bias remains limited. Only 33% of states explicitly reference concerns related to algorithmic, racial, gender-based, or data-driven bias, which suggests that these critical issues have yet to gain sufficient traction in international disarmament discourse.
Divergence is also apparent in views on institutional fora for continued deliberation. While 29% of states advocate for exclusive reliance on the Convention on Certain Conventional Weapons (CCW), an equal proportion support the CCW while remaining open to complementary mechanisms. A further 31% favor a pluralistic approach, incorporating the CCW alongside other multilateral platforms such as the UNGA or the Office of the High Commissioner for Human Rights (OHCHR).
Collectively, these findings illustrate an evolving but uneven landscape. While there is growing consensus on the necessity of human oversight and the ethical foundations of LAWS governance, substantial divergence persists regarding the form and scope of legal instruments, the legitimacy of prohibitive measures, and the mechanisms for addressing systemic bias. These tensions underscore the need for sustained dialogue and norm development to bridge the regulatory gaps in the emerging governance architecture for autonomous weapons systems (AWS).
Oona Hathaway’s work examines the state of the law on “mistakes” in war. It finds that, while those responsible for mistakes are often not held accountable, there is significant evidence that certain mistakes can be treated as criminally culpable. Similarly, the pattern of “mistakes”—the predictable result of a system controlled by AI—can be deemed a serious breach of international humanitarian law. This Article shows how AI “mistakes” are often the result of predictable systemic biases rather than one-off events. For an IHL violation to be a crime, mens rea must be present; this factor—mens rea—is at issue when it comes to “mistakes” in war. From a legal standpoint, the central challenge lies in determining whether AI technology designers, developers, or end-users possessed the requisite intent to commit a criminal act through the system’s use.16
In assessing mens rea in the context of AI-driven systems, a critical consideration is the extent of human control and oversight over the system.17 Legal analysis frequently centers on whether the relevant actors could reasonably foresee the potential consequences of the AI’s conduct.18 Where an AI system has been programmed to undertake tasks that carry an inherent risk of legal infractions, the question arises whether liability extends to its creators or operators, and whether they can be held accountable for the resulting unlawful outcomes.19
As will be discussed in this Article, IHL requires that members of a military distinguish between civilians and combatants, prohibiting the intentional targeting of civilians and indiscriminate attacks under the principle of “distinction.” The law requires that any incidental loss of civilian life not be excessive in relation to the anticipated military objective, a principle known as “proportionality.” Lastly, the law imposes an obligation to take all feasible precautions to minimize incidental harm to civilian life and damage to civilian objects, known as the principle of “precaution.”
LAWS have implications not just for customary IHL but also for human rights, such as the right to remedy, which the International Covenant on Civil and Political Rights (ICCPR) guarantees. According to General Comment No. 31 of the U.N. Human Rights Committee, it is incumbent upon States to conduct investigations into allegations of misconduct and, in the event that evidence of specific violations is discovered, to take appropriate measures to hold the offenders accountable as mandated by the ICCPR.20 Failure to conduct an inquiry and bring the offender(s) to justice may itself amount to a distinct violation of the ICCPR.
The Basic Principles and Guidelines on the Right to a Remedy and Reparation, adopted by the UNGA in 2005, reaffirm the duty to conduct investigations and pursue legal remedies. These norms require States to hold liable those who are proven to have committed human rights violations.
The right to remedy serves two important purposes. First, it aims to deter future violations through measures that prevent their recurrence. Second, it provides retribution, giving victims the satisfaction of seeing someone held accountable for the harm they endured. Further, punishment conveys to wrongdoers that they have committed an offense, acknowledging the victim’s suffering and upholding their moral claims.21
However, when it comes to determining a remedy and establishing accountability for the actions of LAWS, the AI-driven nature of LAWS raises the problem of their lack of agency, whether moral or otherwise.22 In other words, even though these systems are autonomous, humans design, develop, and disseminate them, giving rise to an accountability gap concerning LAWS usage.23 This accountability gap must be the focus of States parties to the Convention.24 AI-based “mistakes” in military contexts are often the result of predictable systemic biases rather than isolated incidents.
The first part of the analysis (Part II) outlines the central challenge of determining agency, or mens rea, for breaches of international humanitarian law where AI is involved. At the heart of this challenge is what we will call “the accountability gap” that appears when automated weapons replace human actors in high-risk situations; the first half of this Article compares how different countries propose addressing this gap to ensure violators of IHL are held accountable and those impacted can enjoy their rights to remedy and reparations. The second part (Part III) demonstrates how systemic biases develop in AI technologies and how they potentially contribute to “mistakes,” exacerbating the accountability problem of LAWS governance. This section details the interdependence among the quality and representativeness of training data, AI models, and feedback loops, while also analyzing the impact of prejudice from individual programmers and wider AI-adjacent work cultures. Altogether, this Article presents a likely scenario as to how biases in the development of AI-driven LAWS could manifest in the miscategorization of civilians, unlawful targeting, and the amplification of existing patterns of marginalization.
II. A Legal Perspective: Autonomous Weapon Systems Under International Humanitarian Law
The following section lays out the legal challenge of determining agency, or mens rea, for IHL violations in the context of AI. This part examines the key areas of tension between AWS governance and IHL principles. It will explore what this Article calls “the accountability gap” in using LAWS—between machine autonomy and human legal culpability—especially the human control factor and the problem of attributing criminal liability for AI-driven actions.25 To assess corresponding policy gaps in holding States parties accountable, the Article will also map where countries at different stages of AWS development fall on either side of the debate over a new binding international instrument setting out specific prohibitions and effective human supervision.
To date, there is “no internationally agreed definition of autonomous weapons systems or lethal autonomous weapons systems.”26 Commonly known as LAWS, lethal autonomous weapon systems “[select] targets based on sensor [and algorithmic] data,”27 and they are “designed to…apply force…without human intervention[.]”28 Not only do these weapons operate without human input; “[m]oreover, allowing machines to take human life dehumanizes individuals, reducing them to data points processed by sensors and algorithms. This mechanization of violence undermines human dignity and ethical principles.”29
For the purposes of this discussion, an “autonomous weapon system” is defined as “[a]ny weapon system with autonomy in its critical functions [meaning it] can [independently] select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without [direct] human intervention.”30 Once launched or activated by a human operator, the system itself—through its sensors, software, and weaponry—assumes targeting functions that would otherwise be controlled by humans. This definition encompasses both existing autonomous weapons and potential future developments. It provides a broad framework for legal analysis, avoiding the need to immediately categorize specific systems as lawful or unlawful.
UNGA Resolution 79/62, adopted in December 2024, contemplates a two-tier approach: prohibiting some LAWS while regulating others under international law.31
There is global consensus that extant international law applies to LAWS, including the U.N. Charter, IHL, international criminal law, international human rights law, the human rights treaties, the law of state responsibility, international environmental law, international product liability law, and existing treaties on specific types of weapons (e.g., chemical, biological, and nuclear weapons).32 While existing international law provides foundational principles governing LAWS, enforcement gaps underscore the pressing need for a standalone international treaty. Such a treaty could harmonize interpretations, establish clear prohibitions and concrete regulations, and ensure accountability in the use of these technologies.
A. Member States’ Views on a Legally Binding Instrument

The majority of U.N. Member States express support for the creation of a new, legally binding instrument to govern the development and use of LAWS. By contrast, 26.7% either oppose such an instrument or believe it is premature to pursue one. The analysis below highlights key views from the U.N. Secretary-General’s report that illustrate how Member States are framing their positions.
Member States remain divided over the need for a legally binding instrument to regulate AWS. The U.S.33 and Russia reject the need for new legal frameworks, asserting that the existing framework of international law and international humanitarian law is sufficient to cover emerging developments in weapons. Russia emphasizes that “there are currently no convincing grounds for imposing any new limitations or restrictions on lethal autonomous weapons systems, or for updating or adapting international humanitarian law to address such weapons.”34 Similarly, Israel maintains that “existing international law, and in particular, international humanitarian law, fully applies to lethal autonomous weapons systems and… provides a sufficient legal framework for any future use.”35
Several U.N. Member States struck a cautionary note,36 emphasizing the need to deepen shared understanding before pursuing a legally binding instrument. Australia, for example, “advocates for building a shared understanding of how existing IHL applies to LAWS before pursuing any new legal instrument.”37 Similarly, Canada acknowledges the possibility of a treaty but emphasizes that “it is unclear at this stage what gaps in the current international framework a new instrument would seek to fill.”38 Moldova echoes this gradualist approach, stating that while it agrees IHL applies to LAWS, “in the light of existing international instruments,” there is a continuing obligation to ensure human responsibility without necessarily requiring a new treaty at this point.39
Other states, such as Norway, support an incremental approach. Norway’s submission emphasizes that while it supports the development of prohibitions and regulations, “for a prohibition to be meaningful, it must take a binding form. In this sense, Norway supports a legally binding instrument to prohibit certain autonomous weapons systems.”40 This stance reflects a middle-ground view that existing IHL needs clarification through formalized legal commitments.
A growing number of states—particularly from the Global South—are calling for robust treaty-based regulations. Sri Lanka “strongly supports and advocates the negotiation of a legally binding instrument on autonomous weapons systems,” echoing a call voiced by a “growing majority of States” in the Group of Governmental Experts.41 Sierra Leone similarly “supports the Secretary-General’s call for urgent negotiation of a legally binding instrument to regulate autonomous weapons systems in line with international laws, including humanitarian and human rights laws.”42 Mexico is equally unequivocal in its position that it “considers it necessary to adopt a legally binding instrument that establishes prohibitions and regulations on autonomous weapons systems,”43 while Malawi calls for “the start of negotiations on a legally binding instrument.”44 These positions reflect a normative consensus among many Member States that new, specific, and enforceable legal rules are key to addressing the challenges posed by LAWS.
B. Diverging Approaches: Member States’ Views on New Regulations, the Two-Tier Approach, and Prohibition

The two-tier approach to AWS operates on multiple levels. Fundamentally, it calls for prohibiting certain autonomous weapons systems while regulating all others. Some Member and Observer States have taken the stance of prohibiting certain types and uses of AWS on the one hand and, on the other, placing limitations and conditions on the development and use of all other AWS.
France’s position is that AWS operating without human control and a responsible chain of command (“fully autonomous weapons systems”) must be prohibited, while such control can be ensured within “partially autonomous weapons systems.”45
Some U.N. Member States, like the U.S., believe that existing IHL already provides the applicable framework of prohibitions and restrictions on the use of LAWS and AWS in armed conflict. However, many states believe that IHL can provide a framework to build on in order to better address the evolving landscape surrounding LAWS.
A significant percentage of Member States—40 out of 45 (89%)—affirm the need for new legal regulations or measures to address the challenges posed by autonomous weapons systems. Only 3 states (7%) reject this need outright, while 2 make no mention of it. Support for new prohibitions is also substantial, with 11 states (24%) calling for categorical bans and another 29 states (64%) endorsing a tiered approach that combines prohibitions with regulatory safeguards. Just 3 states (7%) oppose any new prohibitions, and 2 (4%) remain silent on the issue.
These figures reflect growing convergence around the inadequacy of existing legal frameworks and the necessity of proactive governance. The discussion below analyzes key state views, focusing on how Member States distinguish between the types of measures and prohibitions they support and the potential legal contours of such rules.
The U.S. submits that its “approach to LAWS starts with the recognition that existing IHL already provides the applicable framework of prohibitions and restrictions on the use of autonomous weapon systems in armed conflict.”46 The U.S. supports the two-tier approach, which reflects a distinction in IHL between weapons that are prohibited by their very nature and weapons whose use is regulated but not categorically prohibited in all circumstances.47 The U.S. believes that IHL does not prohibit the use of autonomy in weapon systems or the use of a weapon that can select and engage a target.48
India is of the view that the laws of armed conflict must be respected at all times and maintains that emerging technologies, including LAWS, could enhance compliance with IHL rather than departing from it.49 India advocates for a nuanced approach that avoids stigmatization of technology while ensuring adequate control and accountability in its application.50 The country highlights the importance of transparency, confidence-building measures, and the voluntary exchange of best practices among states to mitigate risks associated with LAWS.51
While some states assert the sufficiency of existing legal frameworks, others argue that the unique risks posed by AWS require new prohibitions and legal measures. Israel rejects the development of new norms based on concepts such as foreseeability, control, or responsibility, cautioning that “treating them as rules of international humanitarian law, or even framing prohibitions while using them, would be problematic on many legal and practical levels.”52 Israel further asserts that “existing international law, and in particular, international humanitarian law, fully applies to LAWS and, in its view, provides a sufficient legal framework for any future use.”53 Greece, while endorsing the two-tier approach, does not explicitly support prohibitions, instead underscoring the need for compliance with existing IHL and acknowledging concerns about military AI.54
By contrast, a growing number of states call for the adoption of new international legal obligations. Pakistan states unequivocally: “New international legal obligations are needed to address the significant risks in a comprehensive and integrated manner.”55 Palestine echoes this view, asserting that “prohibitions are required on the development and use of autonomous weapons systems” that target humans directly or cannot be operated with meaningful human control,56 and emphasizing that “this combination of prohibitions and regulations should be in the form of an international legally binding instrument.”57 Palestine takes a strict approach, emphatically asserting that until a legally binding instrument is created, “a moratorium must be imposed on the development of automated weapons systems.”58
South Korea supports a tiered approach, calling for the prohibition of LAWS “that by their nature are incapable of being used in accordance with international humanitarian law,” while leaving room for regulation of other systems through ongoing discussion.59 Similarly, the Netherlands urges the adoption of system-specific regulatory measures, noting that “different types of measures should be adopted” to ensure legal compliance depending on a system’s operational environment and end user.60 France proposed that “the [Group of Governmental Experts] should seek consensus on a two-tier[ed] approach, based on the recognition that lethal autonomous weapons systems that cannot comply with IHL are de facto prohibited and should not be developed or used, and that further work is needed to operationalize this commitment at national level.”61 Canada, likewise, supports the two-tiered approach, but maintains that “weapons systems based on emerging technologies in the area of LAWS that cannot be use[d] in compliance with IHL [as it currently stands] are prohibited.”62 Like India, Canada stresses that “to be compliant with IHL, emerging technologies . . . must maintain an appropriate level of human involvement,” a yet-undefined standard related to control and accountability over the use of autonomous weapons.63 New Zealand aligns with this position, supporting strengthened weapons reviews “supplemented with specific rules and limits.”64 The U.K., while asserting it does not possess fully autonomous systems, also declares that “[n]o State should develop or deploy such systems.”65 Kiribati adopts a strict prohibitionist stance, stating that it is unlawful to “develop, produce . . . or . . . use any autonomous weapon system” that does not permit human users to predict and understand how the system will function and to limit its effects.66
C. Challenges to International Humanitarian Law in AI-Enabled Warfare
This section points to areas of tension between specific IHL principles and the use of LAWS, which complicate determining agency for IHL violations in AI-enabled warfare. It will highlight how opaque AI training processes complicate determinations as to whether a State has taken due precautions to minimize harm, especially to civilians.
Under IHL, states bear the primary responsibility for ensuring that AWS are used lawfully. The rise of AI-enabled warfare presents significant challenges to the core principles of international humanitarian law, specifically the legal rules governing hostilities: (1) distinction—differentiating between military and civilian targets, as well as between combatants and non-combatants;67 (2) proportionality—ensuring that incidental civilian harm is not excessive in relation to the anticipated military advantage;68 and (3) precautions in attack—suspending or canceling attacks if it becomes apparent that a target is not a legitimate military objective or if the attack would violate proportionality rules.69 These obligations rest on human combatants, who remain accountable for compliance.70 Legal responsibility cannot effectively be transferred to machines, software, or weapon systems, since an autonomous entity that “cannot experience physical or psychological pain . . . cannot be punished like a human being.”71
Distinction, the requirement to differentiate between combatants and civilians, is particularly strained by AI’s role in warfare. While AI’s advanced data processing has the potential to enhance targeting accuracy, biases in training data can lead to misidentifications. This risk is especially concerning with autonomous weapon systems, where errors in classification could result in unlawful attacks on civilians or civilian infrastructure.72
Proportionality assessments also grow more complex with AI-driven systems. AI can analyze vast amounts of data to weigh military advantage against potential civilian harm, yet the “black box”73 nature of many AI models makes these calculations difficult to interpret or validate.74 This lack of transparency complicates decision-making and accountability, particularly for autonomous weapons, which must make proportionality judgments in real time without human intervention.
Precaution, another fundamental principle, is also affected. AI-powered decision support systems can assist in risk assessment and target verification, but they introduce the risk of “automation bias,” where human operators place undue trust in AI recommendations without critical evaluation. With autonomous weapons, the challenge is even greater, as ensuring all feasible precautions are taken becomes increasingly difficult in the absence of direct human oversight.
Finally, military necessity, which permits only the use of force required to achieve a legitimate military objective, must be carefully balanced against these other principles.75 While AI can enhance operational efficiency and decision-making, its deployment must remain within the bounds of lawful and ethical warfare. The challenge lies in ensuring that AI-driven technologies do not erode the safeguards established by international humanitarian law.
Several states explicitly emphasized that fully autonomous weapons pose significant risks to the foundational principles of IHL. Cuba and Egypt underscored that such systems threaten compliance with the cardinal rules of distinction, proportionality, necessity, and humanity, and called for legal and ethical concerns to be addressed to ensure conformity with international law.76 Finland similarly stated that lethal autonomous weapons systems that “cannot comply with the rules of international humanitarian law and its fundamental principles of proportionality, distinction and precaution” are prohibited under existing law and should not be developed or deployed.77 Mexico warned that systems incapable of meeting the principles of distinction, proportionality, and precautions in attack, and that lack sufficient predictability or explainability, should be prohibited.78 These views reinforce concerns that the deployment of AWS may erode key legal safeguards, and reflect growing international consensus on the need to clarify how IHL applies to emerging military technologies.
D. The Rule of Distinction & the Protection of Civilians
The following section examines the principle of distinction between combatants and civilians and AI’s potential for misidentification due to biased data or labeling. It also raises the question of how an autonomous machine might determine proportionality in the use of force—an inherently subjective determination—without a full human understanding of the context of a complex conflict situation.79 Through the rules of distinction and proportionality, this segment will illustrate how IHL standards are designed around human judgment; as such, they expect “common sense and good faith” of their subjects.80 The misalignment between machine autonomy and human agency means that miscalculations by AI-driven LAWS that lead to unlawful attacks on civilians raise a real problem of attributing criminal liability for those AI-driven actions.81
The principle of distinction is a fundamental rule of international humanitarian law, requiring parties in an armed conflict to differentiate between civilians and combatants and to target only military objectives.82 The clearest violation of this principle is the deliberate attack on civilians. Under the inalienable and non-derogable principles of IHL, civilians are entitled to general protection from the dangers of military operations and cannot be targeted unless they directly participate in hostilities.
However, the distinction between combatants and civilians is not always clear-cut, which can lead to tragic ambiguities.83 A civilian may be considered a combatant if they engage in acts on a spontaneous, sporadic, or unorganized basis that meet the following criteria: (1) the act is likely to negatively impact military operations or cause death, injury, or destruction to protected persons or objects; (2) there is a direct causal link between the act and the resulting harm; and (3) the act is specifically intended to cause the required level of harm.84
Proportionality is a fundamental principle of IHL, requiring that excessive harm to civilians be avoided.85 Additional Protocol I to the 1949 Geneva Conventions explicitly prohibits attacks that are expected to cause incidental civilian loss of life, injury, or damage to civilian objects if such harm would be excessive in relation to the anticipated concrete and direct military advantage.86 In a ruling by the International Criminal Tribunal for the Former Yugoslavia (ICTY), the assessment of whether harm is “excessive” was based on whether a “reasonably well-informed person, in the circumstances of the … perpetrator, making use of the information … available … could have anticipated excessive civilian casualties.”87 However, this standard is highly subjective, raising concerns about its practical application.
For example, in the well-known ICTY case Prosecutor v. Prlić et al., the tribunal found that the destruction of the Old Bridge of Mostar harmed Muslim civilians by cutting off access to food and medical supplies.88 Jadranko Prlić and other members of the Croatian Defense Council (HVO) demolished the historic bridge, which had stood since the 16th century. The defendants argued that their objective was to hinder the Army of the Republic of Bosnia and Herzegovina from delivering supplies to its troops. However, the ICTY ruled that destroying historic property for military advantage violates international law if the anticipated harm to civilians is disproportionate to the military gain.89 The Trial Chamber further determined that the attack’s impact on the city of Mostar’s Muslim population outweighed any concrete military advantage, particularly given that the destruction was partially intended to weaken Muslim morale.90
Applying a similar proportionality analysis, the ICTY reached a different conclusion in its assessment of the North Atlantic Treaty Organization’s (NATO) bombing campaign against Yugoslavia. This campaign involved thousands of airstrikes targeting industrial sites, government ministries, media centers, and oil refineries. Various groups questioned whether NATO’s bombing methods adhered to the principles of distinction and proportionality, especially given the civilian casualties and claims that the force used exceeded what was necessary to neutralize these facilities.91 Nonetheless, applying the “reasonable military commander” standard, the ICTY report concluded that NATO met its legal obligations by using precision-guided munitions and modern aircraft technologies.92
These two cases, both arising from the conflicts in the former Yugoslavia, illustrate the inherent subjectivity in applying the rule of proportionality. The assessment often depends on unpredictable and unquantifiable variables present in the midst of armed conflict.
It remains uncertain whether LAWS can fully execute missions while accounting for the various considerations that human decision-makers traditionally weigh. IHL relies on qualitative human judgment, which is guided by “common sense and good faith” in military command decisions.93 This raises concerns about whether LAWS can effectively conduct distinction, proportionality, and military necessity assessments, given the numerous contextual factors present on the battlefield.94
Even if LAWS can be programmed to evaluate and balance the potential military advantages against the harm inflicted on civilians,95 the challenge persists due to the inconsistent application of the “reasonable military commander” standard in proportionality cases.96 Furthermore, without a uniform guideline among states on how proportionality should be assessed, the deployment of LAWS may, at times, lead to violations of international humanitarian law under the principle of proportionality.
E. The Principles of Humanity—The Martens Clause
Many States parties reference the Martens Clause and cite its “principles of humanity” and “dictates of public conscience” as justification for creating a binding instrument to address the accountability gap in AI-driven warfare.97
The “principles of humanity and the dictates of public conscience” are mentioned notably in article 1(2) of Additional Protocol I and in the preamble of Additional Protocol II to the Geneva Conventions, referred to as the Martens Clause.98 The Martens Clause links ethical considerations with IHL.99 It states that in situations not explicitly covered by treaties, civilians and combatants remain protected by customary IHL, the principles of humanity, and the dictates of public conscience. This principle is particularly relevant to AWS, as there is not currently a binding global treaty tailored to regulate the use of AWS in conflict.100 In this regulatory gap, the Martens Clause provides a legal and ethical bedrock, affirming that the deployment of AWS cannot erode core IHL tenets that protect civilians in armed conflict.101
Several states have underscored the relevance of the Martens Clause to the regulation of AWS. Austria emphasized its function as a legal mechanism responsive to evolving societal concerns, stating that the clause “recognizes that the law can develop in relation to societal concerns and the dictates of public conscience and is, thus, of particular relevance to the AWS issue.”102 Ireland similarly acknowledged that ethical frameworks governing AWS must center on “the principles of humanity and dictates of public conscience” when assessing the acceptability of such technologies.103 A joint submission from the Ibero-American Member and Observer States advocated for “new prohibitions and regulations guided by international law . . . and grounded in the principles of humanity and the dictates of public conscience,” situating the Martens Clause within a broader rights-based framework for regulating AWS.104 By contrast, the Russian Federation departed from the use of these principles as determinative, asserting that “[t]he principles of humanity, the dictates of the public conscience and the human rights component cannot be used as the absolute and sole sufficient condition for imposing limitations and restrictions on certain types of weapons.”105 These divergent positions reflect deeper tensions regarding the normative weight of ethics and customary law in regulating AWS where specific treaty law remains absent.
F. Member States’ Views on Compliance & Legal Review
This section examines State party positions on various mechanisms to facilitate accountability and transparency to ensure IHL compliance and review of AWS to minimize the risk of AI “mistakes” and determine human control.
Several states articulated a granular vision for compliance with IHL and legal review of AWS. France proposed a comprehensive set of safeguards including legal reviews, risk assessments, and a chain of human command and responsibility.106 Italy suggested that compliance can be assessed and reinforced by limiting targeting parameters, duration, and operational scope.107 Finland echoed these recommendations, calling for legal review mechanisms to be paired with requirements for transparency, foreseeability, and human accountability.108 Moldova emphasized the need for both international and national mechanisms to enforce regulatory and prohibitive measures through “monitoring, control and legal accountability.”109 Austria went further, recommending a multilayered international framework with regular reviews to ensure that legal and ethical standards—particularly meaningful human control—are upheld in all stages of AWS design and deployment.110
In addition to IHL obligations, New Zealand raised concerns regarding AWS compliance with international human rights law and international criminal law, noting the risks of arbitrary targeting and algorithmic bias, which could give rise to violations of both IHL and human rights norms.111 The Philippines similarly urged that all military uses of AI respect the principle of human dignity and warned against the depersonalization of human life.112
In contrast, the Russian Federation rejected the need for any binding or universal legal review mechanism specific to AWS. It argued that such processes would be futile and unnecessary, opposing proposals for a dedicated review regime.113 This position stands in stark contrast to the prevailing view that robust legal oversight is indispensable to ensure the lawful and ethical use of AWS.
G. Challenges to AI-Driven LAWS: Human Control & Legal Accountability
Under IHL, AWS raise concerns because, in the acts of selecting and attacking targets, human decision-making is effectively replaced by computer-controlled processes, thereby ceding life-and-death decisions in armed conflict to machines. International criminal tribunals, such as the International Criminal Court, are generally limited to exercising jurisdiction over “natural persons” pursuant to the governing statute.114 The disconnect between machine autonomy and human legal responsibility creates a significant accountability gap: if AWS are deployed without meaningful human control and violate the rules of IHL, existing legal frameworks offer no clear avenue for attributing criminal liability.115 In its submission, the U.K. highlighted this regulatory gap, stating:
The legal frameworks providing for the responsibility of States under international humanitarian law, and of individuals under international and domestic criminal law, do not allow for accountability for the effects of military action to be transferred to a machine. States are responsible for the commission of internationally wrongful acts, including in the indiscriminate or otherwise unlawful use of weapons systems. International humanitarian law relies on the precept of command accountability, which places humans at the centre of decisions over use of force. The use of autonomy in weapons does not, and cannot, negate the human’s role as the accountable actor as a matter of law.116
The ability of LAWS to comply with IHL will largely depend on further advancements in recognition technology and the sophistication of onboard decision-making software. A critical question arises: should militaries deploy LAWS knowing that these systems may fail to accurately distinguish between civilians and combatants within their autonomous operations? At a minimum, weapon systems must adhere to IHL requirements, with additional safeguards potentially needed to align with ethical principles.

H. Member States’ Views on Human Control
This final section on AWS under IHL will describe the divide between countries on whether maintaining “meaningful human control” over LAWS is required for accountability and to protect human rights and dignity.117 In a joint submission, the Ibero-American Member and Observer States emphasized that “[i]t is paramount to maintain meaningful human control to prevent further dehumanization of war and ensure individual accountability, the responsibility of the State and of non-State armed groups, and the human rights of victims.”118 Argentina similarly stated:
With regard to regulation, the general principle should be to maintain meaningful human control over the critical functions of autonomous weapons systems. In addition, it is important that there be sufficient knowledge and information to understand lethal autonomous weapons systems, that the functioning of such systems be assessed and that the development of algorithmic biases be avoided.119
Guatemala warned that delegating lethal decision-making to machines without human oversight “would make it impossible to assign responsibility . . .” rendering such systems incompatible with the right to life and IHL.120 Fiji echoed this ethical concern, stating that “allowing machines to take human life dehumanizes individuals, reducing them to data points processed by sensors and algorithms.”121 China advocated broadly for “ensuring that artificial intelligence is always under human control.”122
Several states also emphasized that human responsibility must remain central in the deployment of these systems. Canada called for a consensus “on what ‘human involvement’ would be required in order for weapons systems to be compliant with [IHL],” reaffirming that “humans, not machines, are responsible for the use of force.”123 Egypt argued that “autonomous weapons systems must remain under meaningful human control and supervision to ensure human responsibility and accountability from the perspective of international law.”124
However, some technologically advanced states reject the concept of human control as a regulatory necessity for LAWS. For example, the U.S. rejected the terminology of “meaningful human control,” explaining that a focus on “control” risks obscuring “the genuine challenges in this area.”125 Israel similarly stated that “human control is not an end in and of itself,” but may serve as a relevant factor in implementing IHL obligations.126 South Korea noted that “a degree of human involvement is not necessarily a requirement” for IHL compliance, suggesting instead that adherence to the principles of “distinction, proportionality and precautions in attack” should guide legal assessments.127 Australia, too, explicitly opposed the creation of a new international law requiring a universal standard of human control. It emphasized that while human control or involvement is one potential means of ensuring IHL compliance, “there is no express IHL requirement for a weapon to be subject to ‘human control’” and the appropriate level of involvement “will vary depending on the context of the use of a LAWS.”128
III. Bias in AI Technologies and Its Consequences
As argued at the beginning of this Article, AI “mistakes” are more often the result of predictable systemic biases than of isolated incidents. By tracing the sources of data inequality and algorithmic bias, this final Part explores how systemic biases in AI technologies may contribute to “mistakes” and complicate the accountability challenge. Part III underlines the critical importance of understanding and exposing these biases.
Algorithmic bias can manifest in three main ways: (1) bias in data—occurring during data collection and training; (2) bias in design and development—embedded in the structure and objectives of AI models; and (3) bias in use—arising from how AI systems are deployed and interpreted in real-world scenarios.129 AI algorithms, particularly those used in machine learning, are susceptible to replicating, amplifying, or introducing biases—often unconsciously encoded by programmers or stemming from incomplete or skewed datasets.130 A substantial body of research has documented gender and racial biases in AI. Two primary mechanisms have been identified. First, models trained on historical data tend to reproduce and reinforce existing societal inequalities. For instance, a 2016 study found that an AI system used in the criminal justice sector disproportionately misclassified recidivism risks based on race and gender.131 Existing datasets and algorithms disproportionately represent white males, leading to significant recognition errors for women of color in AI-driven image and voice recognition systems.132 Studies have also demonstrated that AI-based translation systems default to masculine pronouns and that women are less likely to be shown advertisements for high-paying job opportunities.133
This means bias can permeate the entire lifecycle of AI models, from data collection and training to evaluation, deployment, and eventual archiving or disposal. Addressing these risks requires careful oversight and policy intervention to ensure AI technologies—whether in civilian or military contexts—operate fairly and effectively.
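To illustrate the first of these mechanisms, bias in data, the following toy sketch uses entirely hypothetical groups, labels, and rates to show how a model fitted to skewed historical records simply reproduces that skew in its outputs, even when nothing about the underlying individuals differs.

```python
# Illustrative sketch only: "bias in data" on a toy dataset.
# Groups, labels, and rates are hypothetical and chosen for demonstration.
import random
random.seed(0)

def make_training_data(n=1000):
    """Historical records in which Group B was flagged 'high risk' far more
    often than Group A for reasons unrelated to actual behavior."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        flagged = random.random() < (0.1 if group == "A" else 0.4)  # skewed labels
        data.append((group, flagged))
    return data

def train_rate_model(data):
    """A deliberately simple 'model': predict risk using each group's historical flag rate."""
    rates = {}
    for group in ("A", "B"):
        rows = [flagged for g, flagged in data if g == group]
        rates[group] = sum(rows) / len(rows)
    return rates

model = train_rate_model(make_training_data())
# The learned "risk scores" reproduce the disparity embedded in the training data.
print(model)  # roughly {'A': 0.10, 'B': 0.40}
```

The same dynamic, scaled up to classification systems trained on unrepresentative or historically skewed data, is what makes patterned AI “mistakes” predictable rather than random.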
A. Member States’ Views on Bias in LAWS

Among the seven dimensions examined in this study, bias was the least frequently addressed, with only 33% of states (15 out of 45) explicitly referencing it in their submissions. Those that did engage with the issue highlighted a range of concerns, including the potential for algorithmic systems to reproduce or exacerbate racial, gender, socio-economic, and other structural inequalities in the use of force.
States such as Australia,134 Austria,135 Germany,136 and Ireland137 emphasized that machine learning systems, when deployed in armed conflict, may misidentify individuals or amplify existing social biases, leading to disproportionate harm to marginalized groups. Others, like Canada,138 Luxembourg,139 and New Zealand,140 drew attention to how bias embedded in training data or design assumptions could result in violations of international humanitarian or human rights law. Although a minority, the submissions that addressed bias recognized it as a critical ethical and legal risk—one that challenges the assumption that AI systems are inherently neutral and highlights the need for diverse and inclusive regulatory frameworks moving forward. The subsections that follow present specific state views on distinct categories of bias—ranging from algorithmic and automation bias to data, racial, and gender bias—to assess how governments understand and articulate this underdeveloped area of concern.
1. Understanding bias in AI-driven LAWS
Bias in AI is not a rare or isolated phenomenon. A review of 133 AI systems deployed across various sectors between 1988 and 2021 revealed that 44.2% exhibited gender bias, while 25.7% displayed both gender and racial biases. The implications of AI bias in military applications are particularly concerning. A UNIDIR report highlighted that the criteria used by autonomous weapons to distinguish between combatants and non-combatants may incorporate factors such as gender, age, race, and ability.141
Bias in autonomous weapon systems and other military AI applications carries severe legal and ethical implications. Flawed algorithms could wrongly classify individuals based on age, gender, or skin tone, leading to incorrect identification of civilians as combatants.142 UNIDIR’s 2021 report outlines a range of harms resulting from such misrecognition, from wrongful targeting to violations of international law.143
Beyond ethical concerns, bias also affects system functionality and predictability. Many AI models lack transparency, making it unclear which data features influenced a particular decision. This absence of explainability undermines accountability, as no justification can be given for why a specific action was taken.144
Additionally, military AI training data is often more restricted in scope than civilian datasets. The data used may represent only specific conflicts or operations, limiting the model’s generalizability. As a result, both the quantity and quality of data available for military AI training may be insufficient, further exacerbating bias and reducing the reliability of these systems.145
Ireland’s submission underscores these concerns, warning that “[t]he consequences of bias in machine learning are amplified in a military context . . . [w]omen of colour may be misrecognized at a higher rate, leaving them exposed to differential risks, or that an autonomous system may miscategorise civilian men as combatants, due to their traditional roles in warfare.”146 Ireland’s submission to the Secretary-General called for LAWS to comply with IHL and was notable for its emphasis on gender considerations, including intersectional gender considerations.147
Canada raises a similar alarm that fully autonomous weapons systems may not be consistent with principles related to the Women, Peace, and Security (WPS) Agenda, and weighs in on “the issue of collateral harm to women and children in conflict zones, and the risk that autonomous weapons systems could exacerbate existing power imbalances and biases.”148 Several States have referenced LAWS’ cognitive limitations (lack of human judgment) and epistemological limitations (making judgments based upon biased, incomplete, or inappropriate data). Canada’s position clearly mentions the need for LAWS to be consistent with the WPS Agenda while acknowledging the geopolitical power imbalance: “A primary concern for Canada remains the potential for the inclusion of unintended or intended biases in the development and programming of autonomous functions in a weapons system. We are concerned that fully autonomous weapons systems may not be consistent with the principles related to the Women, Peace and Security agenda.”149 Canada also referenced its dialogue with Indigenous and civil society stakeholders on its Feminist Foreign Policy, who raised a number of concerns related to LAWS, including the issue of collateral harm to women and children in conflict zones and the way that autonomous weapons systems could “exacerbate existing power imbalances and biases.”150 Canada has reiterated that one of its “primary [concerns]…remains the potential for the inclusion of unintended or intended biases in the development and programming of autonomous functions in [weapons].”151
Fiji emphasized the correlation between algorithmic bias and historically marginalized communities in the context of LAWS:
Algorithmic bias in autonomous weapons systems is a major concern, especially for historically marginalized populations. These systems could perpetuate racial, gender and other biases, leading to disproportionate harm to some groups. The reliance on data from sensors to apply force can embed systemic prejudices into the decision-making processes of autonomous weapons. Evidence from civilian applications of artificial intelligence, such as policing and criminal sentencing, shows that marginalized populations are disproportionately affected by algorithmic bias.152
B. Asymmetry in AI Development: The Digital Divide & the Algorithmic Divide
This section frames the “algorithmic divide” as the new digital divide in global technological development.
Gender and social inequalities are embedded within data-collection processes. According to the International Telecommunication Union, one-third of the world’s population has never used the internet, with women and girls disproportionately affected.153 This disparity raises concerns about the representativeness of AI training data.
Global inequalities influence who benefits from and who is harmed by military AI applications. Seemingly neutral data-collection practices—such as publicly uploaded digital photographs—are already integrated into mass surveillance programs and could be exploited for autonomous weapons targeting.154
In the age of AI, a new and widening chasm—the algorithmic divide—threatens global equity in AI deployment. Analogous to the earlier digital divide, the algorithmic divide refers to disparities in access to algorithm-enhanced technologies. Individuals and communities lacking access to AI systems are excluded from the economic, educational, and social opportunities these technologies afford.
This divide is particularly acute in the Global South, where algorithmic infrastructure, digital literacy, and computational resources are limited. The UNGA Resolution on AI further recognized the “varying levels”155 of technological development between and within countries.
At the global level, data analyses that omit data from the Global South, especially developing nations, will become increasingly problematic. Big-data analytics, machine learning, and artificial intelligence are often deployed to address global problems while excluding information from the world’s most vulnerable populations. Any algorithmically generated analyses will therefore likely be of limited use in addressing these problems.
As noted above, Fiji’s submission expresses strong concern about the impact of algorithmic bias in autonomous weapons systems, particularly for historically marginalized populations.156 It warns that these systems “could perpetuate racial, gender and other biases, leading to disproportionate harm to some groups,” especially when force decisions are based on sensor data that might embed systemic prejudices.157
Other states echoed such concerns. For example, Germany noted the need for “specific measures to mitigate unintended biases;” for example, “algorithms featuring artificial intelligence . . . based on ethical norms in order to avoid reinforcing and exacerbating existing structures of inequality.”158 Luxembourg similarly observed that “the underrepresentation of historically marginalized communities . . . in the fields of science, technology, engineering and mathematics could create significant biases in artificial intelligence systems,” and called for a “gender-sensitive and intersectional approach” in the governance of LAWS.159 New Zealand expressed concern that “biases in data sets that underpin the algorithms used in selecting targets and/or decisions to use force could lead to violations of international human rights law.”160 A stakeholder submission by the Women’s International League for Peace and Freedom warned that countries in the Global South may become “the battlegrounds for the testing and deployment of these weapons,” with power imbalances allowing “the rich countries [to use] these weapons against the poor.”161
India takes a different view. India’s submission frames emerging technologies, including LAWS, as tools with “transformational effects on reducing poverty and improving the lives of all people,” especially in developing countries.162 It cautions against the “stigmatization of such technologies,” advocating instead for a balanced approach that upholds IHL.163
C. Bias as a Socio-Technical Challenge
The following section looks at how bias in AI systems can be mitigated at various stages of AI development. These phases include, but are not limited to, where algorithmic bias is introduced, such as in data collection, design, and deployment.
Research on algorithmic bias, particularly gender bias, can be broadly categorized into studies focused on either the perpetuation or mitigation of bias. A prominent example of research documenting bias perpetuation is the Gender Shades project by Joy Buolamwini and Timnit Gebru.164
Addressing algorithmic bias, therefore, entails confronting and altering the biases embedded in work cultures, particularly in professions essential to AI development—namely, STEM fields. Currently, these professions are dominated by a relatively homogenous demographic that is not representative of broader society: tech firms employ few women, minorities, or individuals over the age of 40.165 This reality necessitates a fundamental cultural shift within professions such as engineering and computer science, where technical expertise has historically been associated—often implicitly—with masculinity and particular ethnic backgrounds.
The rationale driving the integration of AI into weapon systems and broader military operations often rests on the assumption that these technologies can render warfare more rational and predictable.166 This perspective overlooks the extent to which AI technologies—particularly those used in military applications—both influence and are influenced by human decision-making. The problem of algorithmic bias underscores the need to perceive AI technologies not as separate from human judgment, but as deeply entangled with it throughout the entire AI lifecycle.167
Australia’s submission highlights the risk that LAWS employing AI may “engage issues of unintended bias concerning race, gender or other characteristics,” emphasizing that the effects of emerging technologies are shaped by social and demographic context.168 Acknowledging the potential for discriminatory outcomes, Australia further stressed that international discussions on the governance of LAWS should be “diverse and inclusive,” with meaningful participation from women and gender-diverse individuals to address the gender digital divide and technology-facilitated harms.169
Australia’s submission speaks to how historical context and structural inequality shape imbalances in data representation—where certain demographic groups are either overrepresented or underrepresented in training datasets. Both pathways result in systemic discrimination, as algorithms trained on biased or incomplete data tend to replicate and institutionalize societal inequities.
Algorithmic bias cannot be divorced from its historical antecedents. In societies with entrenched systems of racial and gender exclusion—such as the United States—technological systems have frequently inherited and perpetuated discriminatory practices. These practices—such as lending170 and employment discrimination,171 healthcare disparities,172 and predictive policing—lead to disproportionate arrests173 and harsher treatment of persons automatically designated as ‘high-risk’ for violence or recidivism.174 For example, AI systems trained on past employment175 or law enforcement176 data may inadvertently reinforce historical inequalities, penalizing individuals belonging to historically marginalized groups, regardless of personal history.
This legacy of exclusion is especially evident in algorithmic decision-making across various sectors. Whether through the encoding of biased assumptions by developers or the utilization of historically skewed datasets, AI technologies risk becoming instruments of systemic discrimination. Consequently, marginalized populations—especially women and racial minorities—may find themselves disadvantaged by technologies purported to be neutral.177
D. Algorithmic Bias and Its Impact on Military Applications
The following sub-section provides a structured overview of how algorithmic bias becomes embedded in AI systems during each stage of development, namely: (1) biased training data in AI models; (2) flawed datasets, biased processing, and prejudice in AI system design and development; and (3) bias in deployment of AI technology, including “automation bias,” or human over-reliance on automated decision-making (increasing the risk of misclassification and unlawful targeting in conflict situations). This sub-section concludes with UN Member and Observer State proposals for measures to mitigate the harm of “mistakes” in the deployment of AI-based LAWS.
1. Bias in training data: machine learning & deep learning
The following segment details how the quality and representativeness of AI outcomes depend on their training data and on design choices made by human architects. It describes how biased data can reproduce discriminatory patterns at scale in machine learning (ML), and how even highly accurate deep learning (DL) systems can entrench and replicate existing biases. It also highlights how efforts to combat bias by validating or interpreting AI outcomes are frustrated by the “black box”178 nature of many algorithms.179
ML, broadly defined, is an AI methodology that allows systems to improve performance based on data-driven learning. ML models adapt over time, recalibrating their outputs in response to new data. These systems are commonly employed in pattern recognition tasks such as image classification, natural language processing, and predictive analytics.
Despite their adaptability, ML models remain vulnerable to the quality and representativeness of their training data. When trained on flawed or biased data, these systems can reproduce discriminatory patterns at scale. Thus, the claim that AI systems are inherently impartial is fundamentally flawed. In reality, ML outcomes are inextricably tied to the data and design choices made by their human architects.
The opacity of many ML models poses serious challenges for their use in military engagements. Black-box algorithms, which cannot be easily interpreted by users or regulators, erode trust and hinder corrective intervention. Initiatives such as the US Department of Defense’s Explainable AI (XAI) program seek to develop transparent AI models capable of articulating their decision-making processes,180 but it is unclear what the future will hold for the US and other states involved in LAWS. Decisions about targeting, engagement, and the use of force cannot afford to be distorted by flawed data or algorithmic bias. Today, deep learning systems have achieved human-level performance in a range of complex tasks such as diagnosing skin cancer and translating natural language. Yet the internal decision-making processes of these systems are often opaque, even to their creators. Crucially, DL systems are capable of identifying intricate patterns in data and applying these patterns to new contexts. While this allows AI to perform with remarkable speed and accuracy, it also increases the risk that the system will internalize and replicate existing societal inequities. As such, “[j]ust about any problem that requires ‘thought’…to solve[,]”181 as some AI experts suggest, can be addressed by DL—but only if the underlying assumptions, data, and objectives are ethically and legally sound.
ML models are inherently shaped by the data they are trained on, which represents only a limited snapshot of the real world.182 This data often contains both direct biases—such as stereotypical language and imagery—and indirect biases, reflected in the frequency of occurrence. For instance, an image dataset may feature more male than female physicists, leading to skewed outputs. Bias arises when certain types of data are overrepresented or missing, often due to the way data was collected and sampled.183
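To make this mechanism concrete, the following minimal sketch (in Python, using simulated data and a hypothetical two-group population rather than any real dataset or weapons system) illustrates how a model trained on data dominated by one group can perform markedly worse for an underrepresented group whose underlying pattern differs:

```python
# Minimal sketch with simulated data (no real dataset): a classifier trained on
# data dominated by one group performs worse on an under-represented group whose
# underlying pattern differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    # One informative feature; the "true" decision threshold differs by group.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Group A (threshold 0.0) outnumbers Group B (threshold 1.0) nine to one in training.
X_a, y_a = make_group(9000, 0.0)
X_b, y_b = make_group(1000, 1.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluated on fresh, equally sized samples, accuracy diverges between the groups.
for name, threshold in [("majority group", 0.0), ("minority group", 1.0)]:
    X_test, y_test = make_group(5000, threshold)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name}: accuracy = {accuracy:.2f}")
```

In this toy setting the model effectively learns the majority group’s decision boundary, so members of the minority group are systematically misclassified, mirroring at small scale the misrecognition risks described in the submissions above.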
In this regard, the European Union emphasized that “gender equality and the empowerment of women is an important horizontal priority” and that it is “important to take into account a gender perspective when discussing the issue of lethal autonomous weapons systems, given the nexus between gender equality and emerging technologies.”184 This perspective may assist in the recognition of bias in datasets in a way that may allow for necessary adjustments.
2. Bias in datasets, design, & development
Algorithmic bias extends beyond just the training data. The popular phrase “garbage in, garbage out” highlights how biased inputs inevitably produce biased outputs. Any biases embedded—whether explicitly or implicitly—in the training data will persist throughout the model’s lifecycle.185 Since a supervised AI model learns only from its training data, any imbalance or misrepresentation in that data will have lasting effects on its performance and decision-making.186
Algorithms are not immune to the biases of their creators. Algorithmic bias refers to systematic and repeatable errors in data processing that result in unfair or discriminatory outcomes, particularly against individuals based on race, gender, or other protected characteristics. These biases may arise from two principal sources: (1) the inherent subjectivity and prejudices of programmers; and (2) skewed or incomplete datasets that either overrepresent or underrepresent certain populations.
One of the most widely recognized challenges in AI research is the presence of preexisting bias inherently embedded in datasets.187 Such biases are particularly evident in sampling practices that result in documented disparities, such as higher misidentification rates for darker-skinned individuals compared to lighter-skinned individuals in facial recognition technologies.188
Evidence from various sectors—housing, finance, healthcare, education, and employment—demonstrates how AI systems can unintentionally but systematically produce discriminatory outcomes. AI applications in mortgage lending, credit scoring, college admissions, and job recruitment, among others, have revealed that algorithmic tools can result in disparate impacts on marginalized communities. These patterns are not incidental but structurally embedded within the data that trains these systems. If AI is taught to associate certain demographic characteristics with negative outcomes—through explicit instruction or learned patterns—it will replicate and reinforce those biases. In fact, ML can adapt its performance by analyzing data patterns through complex algorithms and statistical models, which can manifest new forms of digitized bias.
These biases manifest in what is often called the “black box” nature of AI—a reference to how the internal processes of algorithmic decision-making are frequently opaque, making it difficult to discern how particular inputs yield specific outputs.189 Inputs are shaped by human programmers whose decisions reflect conscious and unconscious biases. Consequently, when these biased inputs are fed into ML systems, they can result in prejudiced outcomes. Compounding this is the phenomenon where AI systems “learn” to discriminate by identifying and replicating patterns found in historical data—patterns that often reflect longstanding societal inequities.
A particularly troubling aspect of algorithmic bias is its subtlety and statistical complexity. AI systems may process thousands of variables simultaneously, assigning weights to different inputs. For instance, although an algorithm may be constructed to exclude race as a factor, reliance on geographic location190 or linguistic191 markers can have equivalent discriminatory effects. Given the complexity of algorithmic interactions, this form of proxy discrimination is especially difficult to detect and remedy.
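The following illustrative sketch (simulated data only; the “neighborhood” feature is a hypothetical stand-in for a geographic marker) shows how withholding a protected attribute from a model does not prevent discrimination when a correlated proxy remains among the inputs:

```python
# Illustrative sketch with simulated data: the protected attribute is withheld from
# the model, but a correlated proxy ("neighborhood") carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, size=n)                    # protected attribute (never an input)
neighborhood = group + rng.normal(scale=0.3, size=n)  # hypothetical proxy correlated with group
income = rng.normal(size=n)                           # legitimately relevant feature

# Historical outcomes that were themselves biased against group == 1.
approved = (income - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train WITHOUT the protected attribute.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, approved)

# Predicted approval rates still diverge by group, via the proxy.
predicted = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {predicted[group == g].mean():.2f}")
```

Although the protected attribute is never supplied to the model, the weight it learns for the proxy reproduces the disparity embedded in the historical outcomes, which is precisely why proxy discrimination is difficult to detect by inspecting a model’s inputs alone.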
The lack of “explainability”—or “interpretability”—of how a model arrives at its conclusions poses a significant challenge to oversight and accountability. This opacity makes it difficult for affected individuals to contest decisions or for regulators to enforce anti-discrimination norms. A salient example is a study by researchers at the University of California, Berkeley, which found that African American and Hispanic borrowers were routinely charged higher mortgage rates than similarly situated White or Asian applicants192—a disparity rooted in “algorithmic bias” rather than overt discrimination or “human bias.”193
Bias can also be introduced during the data pre-processing and attribute selection stages of ML. In AI development, deciding which variables to include (such as education level, years of experience, or demographic characteristics) can significantly affect outcomes. While these variables may enhance predictive accuracy, they may also encode discriminatory assumptions if not carefully vetted. Thus, the design phase of AI systems requires rigorous scrutiny to mitigate the risk of embedding systemic bias into automated decision-making.194
Bias is not only present in data but can also be amplified throughout the AI development process. Training AI models involves human decision-making at various stages, including data annotation, feature selection, model evaluation, and post-processing.195 These steps are inherently influenced by the perspectives and unconscious biases of engineers, programmers, and task workers. Additionally, some biases may emerge from “black box” processes, where the inner workings of AI algorithms remain opaque. As a result, AI technologies may reflect both the biases in their training data and the biases of their developers.196
While not specifically tied to bias, Singapore considered this possibility, submitting the view: “[W]here artificial intelligence is applied in critical functions in such systems, we must recognize the risks of unintended outcomes. If artificial intelligence behaves in an unanticipated manner in such systems, the resulting effects can be very serious, such as unintended escalation, friendly fire, or unlawful harm to civilians.”197
3. Bias in AI deployment
AI systems do not remain static after development; their biases can evolve and be reinforced through widespread use. This happens in two key ways: 1) the continued use of biased AI systems amplifies existing biases; 2) human users may act on AI-generated outputs, creating new data that reinforces prior biases. This can lead to negative feedback loops, where biased decisions inform future AI outputs, perpetuating and justifying discriminatory practices.198
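A stylized simulation can illustrate the feedback dynamic (the numbers are hypothetical and the scenario is a simplified resource-allocation loop, not a model of any fielded system):

```python
# Stylized feedback-loop simulation (hypothetical numbers): two areas with identical
# true incident rates, but the system inherits a skewed prior and only observes
# where it already directs resources.
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([0.10, 0.10])   # identical underlying rates in both areas
belief = np.array([0.60, 0.40])      # inherited, biased allocation of attention

for step in range(10):
    patrols = (belief * 100).astype(int)          # resources follow the current belief
    observed = rng.binomial(patrols, true_rate)   # incidents are seen only where patrols go
    share = observed / max(observed.sum(), 1)
    belief = 0.8 * belief + 0.2 * share           # new "data" updates the belief
    belief = belief / belief.sum()

print("final allocation:", np.round(belief, 2), "despite identical true rates")
```

Because the system observes outcomes only where it already directs attention, each round of new data appears to confirm the inherited skew, and the allocation never corrects itself even though the underlying rates are identical.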
At this stage, human-AI interaction introduces another concern: automation bias. Studies have shown that people tend to over-rely on automated systems, deferring to AI-generated decisions without critical evaluation.199 In military contexts, this could lead to excessive trust in AI-based targeting or threat identification, increasing the risk of misclassification and unlawful targeting.200
Many member states have noted the importance of implementing risk mitigation and confidence-building measures at the development and deployment phase to address the risks of bias. Ireland’s submission offers a particularly detailed articulation of how bias emerges and compounds in ML systems, warning that such systems can “repeat[], amplif[y], or contribut[e] to unjust biases.”201 Citing empirical research, Ireland notes that existing datasets and algorithms often skew toward white males, making women of color “significantly less likely to be intelligible to machine learning programs trained to recognize images and voices.”202 It emphasizes that the consequences of such bias are “amplified in a military context”203 and therefore proposes a suite of mitigation measures, including comprehensive testing and reviews, transparent documentation of datasets, benchmarking against demographic indicators, and targeted training for system operators.204
Austria similarly highlights the need to “prevent[] algorithmic bias [and] prevent[] automation bias,” alongside ensuring the integrity and quality of data and adequately training personnel.205 Sweden adds that the growing overlap between civilian and military applications of AI heightens these concerns, noting that technological progress is often commercially driven and that both civilian and military systems pose shared challenges in maintaining meaningful human control.206 A stakeholder submission from the Geneva Centre for Security Policy underscores the urgency of addressing automation bias and control in battlefield settings, warning that ongoing conflicts—such as those in Ukraine and Gaza—are accelerating both the interest in and deployment of autonomous systems, even as critical legal and ethical concerns over bias remain unresolved.207
The state submissions request that any future normative framework on autonomous weapons incorporate specific obligations and commitments to address bias in AI systems. These may include: (a) comprehensive testing and validation to detect and rectify potential biases before deployment; (b) rigorous documentation of training datasets to enhance transparency, ensure traceability, and clarify dataset origins, motivations, and intended applications; (c) algorithmic evaluation benchmarks that assess system performance across gender, age, and racial dimensions, as well as in varied operational contexts beyond the training dataset; and (d) specialized training and awareness programs for personnel involved in AI testing and deployment, ensuring informed and ethical use of autonomous weapon systems. By integrating these principles into the regulatory framework of the Convention on Certain Conventional Weapons, the international community can take meaningful steps toward reducing the risks of bias in autonomous weapons and ensuring compliance with fundamental human rights and IHL.
The integration of these systems into military decision-making structures raises pressing concerns regarding algorithmic bias, defined as “the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation.”208
The presence and implications of algorithmic bias in military AI have been increasingly acknowledged in international fora. The 2023 Responsible AI in the Military Domain (REAIM) Call to Action underscored “potential biases in data” as a critical consideration for military personnel.209 While this recognition marks a step forward, the phenomenon of bias extends beyond data selection and permeates the entire AI lifecycle, influencing both technical and socio-political dimensions of decision-making.
Bias in AI systems is not a neutral or inevitable byproduct of computation but a socio-cultural phenomenon that disproportionately impacts marginalized groups and disrupts equitable decision-making processes.
E. The Group of Governmental Experts on LAWS & the Lifecycle of Bias in AI Decision Support Systems
This penultimate section will describe the dangers of biases baked into military AI-based Decision Support Systems (AI DSS)—meant to identify and assess potential threats—and the subsequent formation of the Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems—meant to guide good governance in the field of AI-driven warfare. Established under the Convention on Certain Conventional Weapons (CCW), the Group is tasked with examining the challenges and implications of LAWS and exploring possible recommendations. This segment argues that bias permeates the entire AI lifecycle, from pre-development to post-use. It differentiates between three types of biases that manifest in military applications of AI technology: (1) pre-existing bias (from historical inequalities); (2) emergent bias (from AI-environment interaction); and (3) technical bias (from algorithmic design).
In military AI DSS, bias in datasets is especially problematic in the construction of so-called “kill lists,” where AI-assisted intelligence gathering and target identification are based on preexisting societal biases.210 These biases may be embedded in data labels that assign terrorist or threat classifications to individuals based on racial, ethnic, or religious identifiers.211 For instance, an AI DSS designed to support counterterrorism efforts could incorporate the prejudiced assumption that devout Muslim individuals are inherently “extreme.”212 Given the historical entanglement of counterterrorism frameworks with racial and ethnic profiling, such biases risk perpetuating systemic discrimination under the guise of algorithmic objectivity.
The inherent purpose of military AI DSS—threat identification and assessment—necessitates a critical evaluation of the cultural, religious, and ethnic biases that influence both the system’s decision-making and the perspectives of its developers. The U.S. military’s Project Maven, for example, was originally conceived to enhance image analysis for counterterrorism operations, particularly within the Defeat-ISIS campaign.213 Given this objective, its design and training processes were inevitably shaped by assumptions regarding specific demographic groups. The effectiveness and fairness of such systems in target identification remain contested, underscoring the necessity of interrogating the sources of bias that inform AI DSS design—especially when the identification of human targets is at stake.
In 2024, States parties convened the GGE on emerging technologies in the area of LAWS.214 This session marked the GGE’s most in-depth discussion of bias to date, under the agenda item of “risk mitigation and confidence building.” A key focus was a working paper on bias submitted by Canada, Costa Rica, Germany, Ireland, Mexico, and Panama.215
Algorithmic bias216 has been widely explored in academic research and policy discussions on the social implications of AI. International legal instruments explicitly prohibit discriminatory practices that resemble the characteristics of algorithmic bias. Rule 88 of customary international humanitarian law217 and Article 85(4)(c) of Additional Protocol I218 prohibit adverse distinctions in military practice based on “race, colour, sex, language, religion or belief, political or other opinion, national or social origin, wealth, birth, or other status.” The mechanisms through which algorithmic bias manifests in AI DSS align with such prohibited distinctions because the pre-existing biases embedded in data, models, and decision-making structures often reinforce systemic inequities.219
Bias in AI DSS is particularly insidious because it is not necessarily intentional, explicit, or even consciously programmed.220 AI systems are inherently embedded within broader societal structures, processing data that reflects and perpetuates existing hierarchies. Consequently, systemic biases emerge even in the absence of overtly discriminatory intent. The normative implications of this reality necessitate a reevaluation of how AI DSS are designed, deployed, and regulated, particularly in military contexts where decisions have life-or-death consequences.
To fully comprehend the scope of algorithmic bias, it is imperative to analyze its occurrence across the entire lifecycle of an AI DSS, from pre-development to post-use review. Bias is not a monolithic phenomenon but rather a multidimensional issue that manifests in various forms. In analyzing bias within AI DSS, it is useful to distinguish between three types of bias: (1) pre-existing bias, rooted in societal structures and historical inequalities and embedded in the datasets and assumptions that inform AI DSS development;221 (2) emergent bias, arising from the interaction between AI systems and their operational environment, which reflects how AI adapts and responds to real-world conditions and often reinforces discriminatory patterns; and (3) technical bias, a product of algorithmic design and machine learning methodologies, stemming from model architecture, data processing techniques, and feature selection, which might inadvertently prioritize certain attributes over others.222
Given the increasing use of AI DSS in image recognition applications within military operations, these biases are particularly salient. Image recognition models trained on racially imbalanced datasets, for instance, have been shown to exhibit higher error rates when identifying individuals from underrepresented demographic groups.223 In a military context, such biases could lead to misidentification, wrongful targeting, and violations of IHL principles.224
Addressing algorithmic bias in AI DSS necessitates a holistic, interdisciplinary approach that integrates insights from legal, technological, and socio-cultural perspectives. The governance of AI in military contexts must move beyond merely acknowledging the existence of bias to actively implementing safeguards that mitigate its impact. The integration of AI into warfare demands a critical reassessment of existing legal frameworks and may require new regulations. Several key issues must be addressed: Accountability, Human Oversight, Preserving Human Judgment, Human-Centered AI Design, and Bias Mitigation Strategies.
Accountability: Determining responsibility for AI-driven actions, particularly those of autonomous weapons systems, is complex.225 Traditional notions of command responsibility may need to be redefined to account for the semi-autonomous or fully autonomous nature of these systems.226 When it comes to transparency and interpretability, many AI models function as “black boxes,” making it difficult to understand or justify their decisions.227 This lack of transparency poses challenges for military accountability,228 particularly in autonomous weapons systems where post-action review of targeting decisions may be impractical.
Human Oversight: Meaningful human control over AI-driven military systems is essential to ensure ethical and lawful decision-making.229 Operators should not merely rubber-stamp AI-generated recommendations, and in the case of autonomous weapons, maintaining meaningful human oversight is particularly urgent.230
Preserving Human Judgment: Experts stress the importance of retaining human judgment in decisions involving the use of force.231 Autonomous weapons systems, designed to operate with minimal human intervention, pose significant challenges to this principle, raising concerns about humanitarian risks, ethical considerations, and compliance with international humanitarian law.
Human-Centered AI Design:232 AI should be developed to support, rather than replace, human decision-makers.233 In the case of autonomous weapons, this could mean designing systems with adjustable levels of autonomy based on operational contexts.234
Bias Mitigation Strategies: Developers must prioritize methods to identify and reduce bias throughout the AI system lifecycle, from initial data curation to post-deployment evaluation.235 A particularly well-documented form of bias at this stage is automation bias, which describes the tendency of human users to place excessive trust in AI-generated outputs, often without critically evaluating their validity.236 The risk is particularly pronounced in military contexts, where time-sensitive decision-making and high operational stress may further encourage deference to AI outputs.
The five foundational characteristics of a functioning algorithm—finiteness, definiteness, input, output, and effectiveness—are essential to computational logic. However, when data collection or instruction design is compromised by bias, the algorithm’s effectiveness is likewise impaired. If a dataset fails to adequately represent marginalized groups, the resulting system may systematically disadvantage those groups in practice.
Historically, such biases reflect and reproduce entrenched patterns of exclusion. In the employment sector, for example, hiring algorithms have been found to disadvantage women and people of color by relying on historical hiring data that reflect prior discriminatory practices. In the criminal justice system, risk assessment tools have been shown to assign higher recidivism scores to Black defendants, contributing to longer sentences and perpetuating racial disparities. In each case, algorithmic systems amplify rather than correct social inequities.
The employment sector illustrates the operationalization of algorithmic bias with stark clarity. In the realm of job advertising, algorithmic platforms often utilize ML models to optimize advertisement distribution. However, these systems may prioritize efficiency over equity, reinforcing occupational stereotypes based on historical hiring trends. One illustrative case study involves Facebook’s job advertisement algorithms, which were found to skew advertisement delivery by gender: lumber-related jobs disproportionately reached male users, while cashier roles were predominantly shown to women, despite identical targeting criteria. This divergence reflects historical occupational segregation, suggesting that the algorithm replicated rather than reformed existing inequalities.
Similar issues arise in algorithmic screening tools used in recruitment. In 2014, Amazon’s ML tool for resume assessment was found to penalize resumes containing the term “women’s,” due to the male-dominated dataset it was trained on. This case illustrates how ML systems, when trained on historically biased data, can systematically exclude qualified candidates from underrepresented groups. Without corrective measures, such tools risk institutionalizing past discriminatory practices under the guise of computational objectivity.
Fundamentally, it is often not the algorithm itself, but the data that introduces bias. Datasets used in training AI models may reflect existing prejudices. These problems, as represented before, generally fall into two categories: (1) data that reproduces existing human biases, and (2) data that is incomplete or unrepresentative. The former was exemplified in Amazon’s 2014 experiment with AI-driven recruitment tools. The system, trained on historical resumes submitted to the company, quickly began to favor male candidates, reflecting the historical dominance of men in the tech industry. Resumes that mentioned terms such as “women’s chess club captain” were systematically downgraded. This case illustrates how biased training data can distort outcomes, even when no overtly discriminatory variables are included.
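A toy reconstruction of this dynamic (with invented resumes and labels, not Amazon’s actual data or model) shows how a classifier trained on historically biased hiring decisions learns to penalize a gendered token:

```python
# Toy reconstruction (invented resumes and labels, not Amazon's data or model):
# a classifier trained on historically biased hiring decisions learns a negative
# weight for a gendered token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer python chess club captain",
    "software engineer python women's chess club captain",
    "data scientist java robotics team lead",
    "data scientist java women's robotics team lead",
] * 50
hired = [1, 0, 1, 0] * 50   # equally qualified, but resumes mentioning "women's" were rejected

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("learned weight for token 'women':", round(weights["women"], 2))
```

The negative weight attached to the token is learned entirely from the biased labels; no rule in the code ever instructs the model to consider gender.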
Moreover, AI systems are not only susceptible to bias at the training and design stages; they can also learn bias through user interaction. A notable case is Microsoft’s 2016 chatbot “Tay,” which was designed to learn conversational patterns from Twitter users. Within hours of its launch, Tay began generating misogynistic and racist content—reflecting the language of users it interacted with.237 This episode underscores the extent to which AI systems can mirror the prejudices of their environments. When such systems are deployed in high-stakes domains such as warfare and policing, the damage can be irreparable.
F. “Mistakes” in War
In their article “Mistakes” in War, Yale Law School’s Oona Hathaway and co-author Azmat Khan argued that those responsible for “mistakes” in war are often not held accountable.238 They argued that “law holds not just individuals, but also States, responsible for preventing certain ‘mistakes.’”239 The authors examined the U.S. military’s own assessments of civilian casualties, focusing on the United States both because of its global military operations and because of its influence on global practices.240 The article discusses how “mistakes” in the U.S. counterterrorism campaign have been far more common than generally acknowledged. Moreover, it argues that “the repeated errors that these reports reveal—not just one mistake, but an unmistakable pattern of mistakes—are the predictable result of a system that, during the period examined, did too little to learn from its mistakes.”241
The following section focuses on the pervasive pattern of war “mistakes.” AI systems depend on pattern-based content: they are designed to predict the next sequence based on observable patterns, with the goal of generating plausible, not necessarily verified, content.
The pattern of “mistakes” discussed by Hathaway and Khan can apply to the pattern of biases in AI-driven LAWS. AI mistakes are often patterned and predictable, reflecting flaws in training data, algorithms, or the AI’s fundamental design. Analyzing these patterns can reveal deeper issues. As discussed earlier, AI models are not trained solely on accurate data; even accurate training data can produce new, potentially inaccurate content by combining patterns in unexpected ways, with lethal consequences in war.
Hathaway and Khan argue that “each State, moreover, must make legal advisers available to advise military commanders on the application of IHL. There is, too, an obligation to investigate allegations of violations of IHL and to prosecute grave breaches . . . But what is missing from these rules as traditionally understood is any obligation on States to actively seek to learn from their [‘mistakes’].”242
This argument has profound implications for AI-driven LAWS, as it calls for identifying and correcting pattern-dependent “mistakes” in training data. The question, then, is as follows: can IHL address “mistakes” in relation to AI-driven LAWS, and can negotiations resulting in a binding legal instrument fill this void?
IV. Conclusion: The “Clock is Ticking”
In March 2019, UN Secretary-General António Guterres highlighted the urgency of the GGE’s mandate, urging members to overcome divisions: “It is your task now to narrow these differences and find the most effective way forward […] this will require compromise, creativity and political will. The world is watching, the clock is ticking[.]”243 Despite this exhortation, meaningful international regulation of LAWS remains elusive. Guterres has further warned that the pace of technological advancement in AI, upon which LAWS are premised, is proceeding at “warp speed,” outpacing regulatory deliberations.244 As the UN Secretary-General calls for Member States to negotiate a new, legally-binding instrument setting clear restrictions on LAWS (and to conclude such negotiations by 2026), he has warned that AI-driven LAWS can create a situation that he has often called “morally repugnant.”245
Technological advancements have ushered law to a pivotal inflection point in its relationship with warfare. Machines are increasingly poised to make autonomous decisions with lethal consequences in present and foreseeable conflicts, raising profound legal, ethical, and humanitarian concerns. As AI is increasingly integrated into decision-making systems, including those used in military operations, concern over bias can no longer be set aside. Contrary to the assumption that removing human decision-makers and relying on algorithmic systems diminishes bias,246 empirical evidence suggests that such reliance may magnify and amplify existing forms of discrimination.247
Excessive reliance on AI outputs can diminish human operators’ critical thinking, fostering psychological detachment from the consequences of violence. This detachment may, in turn, lower the threshold for using force, further obscuring accountability.
Moreover, AI systems currently lack genuine emotional intelligence. While advancements in affective computing—the study and development of systems capable of recognizing and simulating human emotions—suggest the possibility of machine-emulated empathy, the ethical implications of simulated emotions are far from resolved.248 Researchers note that systems can be trained to recognize and express emotional cues without genuinely experiencing them.249 This decoupling of perception from feeling raises profound concerns about authenticity and ethical legitimacy.
Creating machines capable of moral reasoning is further complicated because human moral judgment is not fully understood. Ethical decision-making involves cognitive and emotional processes, many of which operate subconsciously.250 Emotions are crucial in helping humans acquire and interpret morally relevant information. The absence of such mechanisms in AI systems challenges the prospect of building machines that can make ethically sound decisions in complex, real-world contexts.
The proliferation of AI across societal domains necessitates sustained critical engagement with its ethical, epistemological, and political dimensions. Moreover, addressing algorithmic bias and the algorithmic divide requires interdisciplinary collaboration, regulatory innovation, and a commitment to inclusivity in technological design. Mirroring the well-documented “digital divide”—which has historically delineated disparities in access to the internet, digital technologies, and information resources—an emergent and increasingly pronounced “algorithmic divide” across the Global North and South now poses a critical threat to equitable access to the political, social, economic, cultural, educational, and professional opportunities facilitated by machine learning and artificial intelligence. Because of this divide, the asymmetrical nature of AI development worldwide means that certain communities will be unable to harness the potential of AI.
Beyond the theatre of war, AI has demonstrated transformative potential in the Global South to reduce the possibility of conflict—especially in disaster relief, where AI and drone technologies have been deployed to aid post-crisis reconstruction.251 For example, following the 2015 earthquake in Nepal, predictive models helped coordinate aid distribution and assess infrastructure damage.252 In Rwanda, Zipline uses AI-guided drones to deliver essential medical supplies to remote clinics.253
AI applications have similarly improved agricultural development and food security in India, China, Sub-Saharan Africa, the U.S., and Europe. Predictive algorithms assist farmers in determining optimal planting and harvesting schedules based on local climate and soil conditions.254 This technology enhances productivity and market access, potentially lifting rural populations out of food insecurity.
Ultimately, the challenge is not simply to build more ethical AI systems for use in conflict, but to ensure that these systems are used equitably to prevent conflict. In doing so, we must remain vigilant against the reification of human bias in algorithmic form and strive to harness AI’s potential for the collective good.
Autonomous weapons have been characterized as the “third revolution in warfare,” following the advent of gunpowder and nuclear arms.255 Since 2017, the UN has convened a Group of Governmental Experts to examine LAWS’ technical, legal, and ethical implications.256 Established under the framework of the CCW, the GGE on LAWS is composed of representatives from diverse states with divergent national interests. Despite repeated sessions, the GGE has failed to achieve consensus on fundamental issues, including whether new international legal instruments are necessary to regulate LAWS or whether non-binding political measures and voluntary guidelines would suffice.
A key obstacle to regulation lies in the definitional and taxonomical ambiguities surrounding LAWS.257 Without a shared understanding of the core technological processes at issue, formulating a coherent regulatory framework is unattainable.
Definitions of autonomy in weapons systems typically fall into two broad categories. The first category frames autonomy in relation to the role of human operators.258 For example, the U.S. defines an “autonomous weapon system” as a weapon system that, once activated, can select and engage targets without further human intervention.259 This definition also encompasses systems subject to human supervision which allow operators to override the weapon, yet can select and engage targets independently once activated.
Despite the varying degrees of autonomy, no weapon system is entirely divorced from human involvement. Human agency is embedded in their design, manufacture, and deployment. AWS are commonly categorized into three types based on the degree of human control: (1) human-in-the-loop, (2) human-on-the-loop; and (3) human-out-of-the-loop systems.260 Human-in-the-loop systems, such as armed drones, require explicit human authorization to select and engage targets. Human-on-the-loop systems allow autonomous target selection and engagement under human supervision, permitting intervention if necessary. By contrast, human-out-of-the-loop systems select and engage targets without human input, raising the most profound legal and ethical concerns.
The second category of ‘autonomy’ definitions in AWS places emphasis on the nature of the tasks performed autonomously—and the attendant legal consequences.261 The International Committee of the Red Cross (ICRC), for instance, defines an “autonomous weapon system” as one possessing autonomy in its “critical functions,” namely the capacity to select (search for, detect, identify, and track) and attack (intercept, neutralize, damage, or destroy) targets without human intervention.262 For the ICRC, these critical functions are central to targeting decision-making and, therefore, to compliance with IHL.
Without consensus on such definitions, establishing an effective regulatory regime for LAWS remains improbable.263 Even if definitional clarity were achieved, states would still need to negotiate substantive provisions for regulation. Short of a complete prohibition, potential regulatory measures could address several issues. At the most basic level, an international instrument might affirm that LAWS are subject to existing IHL principles and require states to conduct legal reviews of such weapons before deployment. Regulations might also mandate “meaningful human control”264 over the use of force, stipulate human-override capabilities, and set standards for LAWS’s sensory and analytical capacities to ensure compliance with IHL principles, particularly the principle of distinction between civilians and combatants.
Furthermore, any binding instrument must clarify what information military commanders must possess before authorizing LAWS and delineate accountability frameworks for unlawful acts committed through their deployment. This includes defining states’ responsibilities and liabilities for the unlawful use of force involving autonomous weapons. To secure the broadest consensus, such provisions would likely be grounded in existing IHL, adapted to address the unique challenges posed by LAWS, and framed with sufficient specificity to guide state practice in this emerging domain.
Applying IHL principles to LAWS must be guided by the imperative to minimize errors in armed conflict. Hugo Grotius observed in his seminal treatise On the Law of War and Peace (1625), that one must strive to prevent the death of innocent persons, wherever possible, even by accident.265 Central to this obligation is the principle of distinction, which requires that hostilities be directed exclusively against combatants and legitimate military objectives and never against civilians or civilian property. Accordingly, states are prohibited from employing weapons incapable of distinguishing between civilian and military targets. In the case of autonomous weapons, this principle necessitates the ability to differentiate between combatants and non-combatants and distinguish legitimate targets from environmental “clutter.”266
Existing AWS, such as the Harpy and High-Speed Anti-Radiation Missile (HARM), as well as the Counter-Rocket, Artillery, and Mortar (C-RAM) system and the Long-Range Anti-Ship Missile (LRASM),267 are designed to function in relatively discrete environments with narrowly defined missions.268 In such cases, the commander or operator authorizes deployment, while the system, through varying degrees of autonomy, selects the precise target from a pre-defined category, such as enemy radar, indirect fire munitions, or warships. Because these systems are programmed with limited target sets, they operate with a relatively low risk of violating the principle of distinction.
Nonetheless, current technological capabilities remain insufficient for autonomous systems to reliably distinguish between civilian objects and those same objects when repurposed for military use. Where lawful and unlawful targets are co-located, systems unable to discriminate effectively would risk indiscriminate attacks, constituting violations of IHL.269 This raises a further challenge: even where an autonomous system can differentiate between a legitimate military objective and civilian objects, the execution of the strike may involve anticipated collateral damage. In such circumstances, the principle of proportionality becomes applicable.270
The proportionality principle, codified in Article 51(5)(b) of Additional Protocol I, prohibits “[attacks] which may be expected to cause incidental loss of civilian life, injury to civilians, [or] damage to civilian objects . . . , or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.”271 This principle requires weighing the expected incidental harm against the anticipated military advantage.272 While this determination is inherently subjective and context-dependent—posing difficulties even for seasoned commanders—it is far more complex for autonomous systems, particularly in densely populated or urban battlefields where assessing proportionality demands nuanced judgment.273
Beyond distinction and proportionality, IHL also imposes the principle of humanity, or the prohibition against unnecessary suffering. Article 35(2) of Additional Protocol I explicitly prohibits the use of weapons “of a nature to cause superfluous injury or unnecessary suffering,”274 a principle reaffirmed in Article 23(e)275 of the Annex to the Hague Convention IV. Thus, autonomous systems that inflict gratuitous harm or suffering fall afoul of this fundamental constraint of IHL.276
V. Annex 1: Coding Matrix

VI. Annex 2: State Party Positions (Coding Matrix Answer Only)
| state(s) | human control | need for new legally binding instrument | need legal regulations/measures | new prohibitions | mentions ethics | forum | mentions bias |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Andorra, Argentina, Bolivia (Plurinational State of), Brazil, Colombia, Costa Rica, Cuba, Chile, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Portugal, Spain, Uruguay and Venezuela (Bolivarian Republic of) [Ibero-American Member and Observer States] (p. 20-21) | yes | yes | yes | yes | yes | not mentioned | no |
| Argentina (p. 21-23) | yes | no | no | no | yes | CCW and outside | yes |
| Australia (p. 23) full submission: https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-Australia-EN.pdf | no | no (not yet) | yes | yes (tiered) | yes | CCW ONLY | yes |
| Austria (p. 24-28) | yes | yes | yes | yes | yes throughout | CCW and outside | yes |
| Bulgaria (p. 29-31) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Canada (p. 31-34) | yes | no (not yet) | yes | yes (tiered) | yes | CCW and outside | yes |
| Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Kazakhstan, Nigeria, Panama, Peru, Philippines, Sierra Leone, State of Palestine (pp. 34–36) | yes | yes | yes | yes | yes | CCW | no |
| China (pp. 36–37) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| Costa Rica (pp. 37–40) | yes | yes | yes | yes | yes | CCW and outside | yes |
| Cuba (41–42) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Egypt (42–44) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Fiji (44–45) | yes | yes | yes | yes | yes | CCW and outside | yes |
| Finland (46–47) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| France (47–49) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Germany (49–52) | yes | yes | yes | yes (tiered) | yes | CCW | yes |
| Greece (52–53) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Guatemala (53) | yes | not mentioned | not mentioned | not mentioned | not mentioned | not mentioned | no |
| Honduras (54–55) | yes | not mentioned | yes | not mentioned | yes | not mentioned | no |
| India (55–57) | yes | no (not yet) | yes | yes (tiered) | yes | CCW ONLY | no |
| Ireland (57–61) | yes | yes | yes | yes (tiered) | yes | CCW and outside | yes |
| Israel (p. 61-63) | no | no | no | no | not mentioned | CCW ONLY | no |
| Italy (63–65) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| Japan (65–67) | yes | no | yes | yes (tiered) | yes | CCW | no |
| Kiribati (68–70) | yes | yes | not mentioned | yes | yes | CCW and outside | no |
| Luxembourg (70–73) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | yes |
| Malawi (73–74) | yes | yes | yes | yes | yes | CCW and outside | yes |
| Mexico (74–76) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| Netherlands (77–79) | yes | yes | yes | yes (tiered) | yes | CCW and outside | no |
| New Zealand (79–82) | yes | yes | yes | yes (tiered) | yes | CCW and outside | yes |
| Norway (82–84) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Pakistan (84–87) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| Philippines (87–90) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | yes |
| Republic of Korea (90–91) | no | not mentioned but appears to be no (not yet) | yes | yes (tiered) | not mentioned | CCW | no |
| Republic of Moldova (91–93) | yes | no (not yet) | yes | yes | not mentioned | CCW and outside | no |
| Russian Federation (94–97) | yes | no | no | no | not mentioned | CCW ONLY | no |
| Serbia (97–98) | yes | yes | yes | yes | yes | CCW ONLY | yes |
| Sierra Leone (98–101) | yes | yes | yes | yes (tiered) | yes | not mentioned | no |
| Singapore (101–102) | not mentioned | no | yes | yes (tiered) | yes | CCW and outside | yes |
| Sri Lanka (103–106) | yes | yes | yes | yes (tiered) | yes | CCW | no |
| Sweden (pp. 106–108) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| Switzerland (pp. 109–110) | yes | yes | yes | yes (tiered) | yes | CCW ONLY | no |
| United Kingdom (110–113) | yes | no | yes | yes | yes | CCW | no |
| United States (113–115) | no | no | yes | yes (tiered) | yes | CCW and outside | no |
| Palestine (116–118) | yes | yes | yes | yes | yes | not mentioned | no |
| European Union (118–119) | yes | not mentioned | yes | yes (tiered) | yes | CCW and outside | yes |
- 4In particular, the development of robotic weapons has already played a significant role in asymmetrical conflicts, with systems like the Predator drone used extensively in counterterrorism operations across Afghanistan and Pakistan. See John O. McGinnis, Accelerating AI, 104 Nw. U. L. Rev. 1253, 1266 (2010).
- 5See id. at 1254 (arguing that “the acceleration of technology creates the need for quicker government reaction to the potentially huge effects of disruptive innovations”).
- 6See id. at 1266 (noting that autonomous systems “may be able to discriminate better than other kinds of weapons,” potentially raising the standard of distinction and reducing civilian harm in armed conflict).
- 7See id. at 1254 (arguing that fears surrounding military AI are often overstated and that autonomous systems may actually mitigate human error and malevolence in combat).
- 8Bryan Hance, Is There a Duty to Use Lethal Autonomous Weapons? How AI Will Change Warfare and the International Order, 13 Penn. St. J.L. & Int'l Aff. 139 (2025).
- 9U.N. Secretary-General, Report of the Secretary-General on Lethal Autonomous Weapons Systems, U.N. Doc. A/79/88 ¶ 4 (May 17, 2024) [hereinafter U.N. Secretary-General LAWS Report] (observing that while AI may enhance well-being and development, it also poses challenges to international peace and security, including concerns about the human role in the use of force).
- 10Id. at 5–19.
- 11See infra Annex 2: State Party Positions.
- 12See infra Annex 1: Coding Matrix.
- 13See Alan L. Schuller, At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law, 8 Harv. Nat’l Sec. J. 379, 392–97 (2017) (discussing “the critical issue bearing on IHL compliance from a technological perspective . . . whether the AWS has been granted some combination of capabilities that functionally delegates the decision to kill from human to machine”).
- 14See Isaiah Klassen, Risking a New Blitzkrieg: Banning Artificial Intelligence in National Security, 35 Regent U. L. Rev. 573 (2023) (highlighting one counter-argument against blanket prohibition, asserting that enforcement, not new legislation, is the core problem—citing WWII arms-control failures; assuming the proliferation of LAWS is inevitable, it raises the concern that preemptively banning LAWS repeats past mistakes by “disarming” law-abiding states while still arming malicious actors).
- 15The Martens Clause, a fundamental principle of international humanitarian law, provides a legal and moral safety net in situations not explicitly addressed by international treaties. It is reiterated in the 1899 Hague Convention (II), the 1907 Hague Conventions, and the 1977 Additional Protocols to the Geneva Conventions, and it asserts that all human beings and their human rights are protected at all times under international law “derived from established custom, from the principles of humanity and from the dictates of public conscience” (Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), June 8, 1977). There is scholarly debate on whether the Clause merely reinforces traditional custom or whether the “principles of humanity” and “dictates of public conscience” are distinct sources of law. The Martens Clause is most commonly invoked to protect combatants and civilians during war, and it is frequently used to assess new and emerging means and methods of warfare, such as nuclear weapons, cyber warfare, and LAWS.
- 16Tetyana (Tanya) Krupiy, Regulating a Game Changer: Using a Distributed Approach to Develop an Accountability Framework for Lethal Autonomous Weapon Systems, 50 Geo. J. Int'l L. 45, 50, 68, 95, 99 (2018).
- 17Id. at 50, 56, 86–109.
- 18Id. at 50, 54, 59, 63, 68, 97, 104, 112.
- 19Id. at 57, 83, 85, 89–95.
- 20U.N. Human Rights Comm., General Comment No. 31, The Nature of the General Legal Obligation Imposed on States Parties to the Covenant, ¶ 15, U.N. Doc. CCPR/C/21/Rev.1/Add.13 (Mar. 29, 2004).
- 21Asif Ali & Subramanian Ramamurthy, Humanity at the Crossroads: Human Rights Challenges in the Age of Lethal Autonomous Weapon Systems, 53 Int’l J. Legal Info. 2, 10 (2025).
- 22See id.
- 23See id.
- 24In the Case Concerning Application of the Convention on the Prevention and Punishment of the Crime of Genocide, the ICJ found that the legal obligation enshrined in Article 1 of the 1948 Genocide Convention is one of due diligence, and the Court’s considerations can assist in interpreting such obligations and Common Article 1. First, the Court clarified that the term “undertake,” present in many international conventions, is not “merely hortatory or purposive” but rather a “formal promise, to bind or engage oneself, to give a pledge or promise, to agree, to accept an obligation.” Thus, the obligations to “respect” and to “ensure respect” cannot be considered empty norms, since States are under an obligation to “employ all means reasonably available to them” to ensure that their organs and individuals abide by it. See Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosn. & Herz. v. Serb. & Montenegro), Judgment, 2007 I.C.J. 43 (Feb. 26).
- 25Ali & Ramamurthy, supra note 21, at 2.
- 26U.N. Secretary-General LAWS Report, supra note 9, at 5.
- 27Id. at 147.
- 28Id. at 86.
- 29Id. at 45.
- 30Int’l Comm. of the Red Cross, Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon Systems, Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Apr. 11, 2016, ¶ 5.
- 31G.A. Res. 79/62, U.N. Doc. A/RES/79/62 (Dec. 2, 2024).
- 32Benjamin Perrin, Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty, 29 Am. Soc’y Int’l L. Insights 1, 3 (2025).
- 33See U.N. Secretary-General LAWS Report, supra note 9, ¶ 114 (citing that “[t]he United States’ approach to [LAWS] starts with the recognition that existing [IHL] already provides the applicable framework of prohibitions and restrictions on the use of AWS in armed conflict”).
- 34Id. at 95.
- 35Id. at 62.
- 36Categorized as “no (not yet)” in the coding matrix; see infra Annex 1.
- 37Australia’s Submission to the United Nations Secretary-General’s Report on Lethal Autonomous Weapons Systems, U.N. Doc. ODA-2024-00019/LAWS, ¶ 13 (May 2024).
- 38U.N. Secretary-General LAWS Report, supra note 9, ¶ 32.
- 39Id. at 93.
- 40Id. at 82.
- 41Id. at 105–06.
- 42Id. at 99.
- 43U.N. Secretary-General LAWS Report, supra note 9, at 76.
- 44Id. at 73.
- 45Id. at 47–48.
- 46Id. at 114.
- 47Id. at 29. Bulgaria provides a clear elaboration of the tiered approach in its submission, defining it as the “two-tier approach,” which calls for a distinction between (a) autonomous weapons systems operating completely outside human control and a responsible chain of command; and (b) autonomous weapons systems featuring autonomous functions, requiring regulations to ensure compliance with international law and, more specifically, international humanitarian law.
- 48U.N. Secretary-General LAWS Report, supra note 9, at 114.
- 49Id. at 55.
- 50Id. at 55–56.
- 51Id. at 57.
- 52U.N. Secretary-General LAWS Report, supra note 9, at 62.
- 53Id.
- 54Id. at 53.
- 55Id. at 86.
- 56Id. at 117.
- 57U.N. Secretary-General LAWS Report, supra note 9, at 118.
- 58Id.
- 59Id. at 91.
- 60Id. at 78.
- 61Working paper submitted by Finland, France, Germany, the Netherlands, Norway, Spain, and Sweden to the 2022 Chair of the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), at 1 (July 13, 2022), https://perma.cc/PPJ9-PU6A (emphasis omitted).
- 62Can., Canada’s views on Lethal autonomous weapons systems as it relates to UN Resolution 78/241 “Lethal autonomous weapons systems”, U.N. Doc. 78-241-Canada-EN, at 2 (2024), https://perma.cc/5VY4-NJ2V (emphasis added).
- 63Id. (emphasis added).
- 64U.N. Secretary-General LAWS Report, supra note 9, at 81.
- 65Id. at 111.
- 66Id. at 69.
- 67Ali & Ramamurthy, supra note 21, at 6.
- 68Id. at 4.
- 69Perrin, supra note 32, at 3.
- 70Ali & Ramamurthy, supra note 21, at 4.
- 71Id. at 11.
- 72See Peter C. Combe II, Autonomous Doctrine: Operationalizing the Law of Armed Conflict in the Employment of Lethal Autonomous Weapons Systems, 51 St. Mary’s L.J. 35, 53–66 (2020) ( “For machines with high degrees of autonomy, current capabilities are likely insufficient to simply ‘turn them loose’ on the battlefield” in a way that will meet IHL standards, because LAWS’ physical, sensory, and “cognitive” capabilities are not usually programmed to de-escalate or use non-lethal force to neutralize threats and do not “predictably distinguish between civilians and combatants.”).
- 73See Erik Lin-Greenberg, Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making, 3 Tex. Nat’l Sec. Rev. 56, 69 (2020) (emphasis added) (“AI generally operates in a ‘black box.’ The neural networks that underpin many cutting-edge AI systems are opaque and offer little insight into how they arrive at their conclusions. These networks rely on deep learning, a process that passes information from large data sets through a hierarchy of digital nodes that analyze data inputs and make predictions using mathematical rules. As data flows through the neural network, the [neural] net makes internal adjustments to refine the quality of outputs. Researchers are often unable to explain how neural nets make these internal adjustments. Because of this lack of ‘explainability,’ users of AI systems may have difficulty understanding failures and correcting errors.”).
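To make the “black box” described in the preceding note concrete, the following minimal sketch (a toy model written purely for illustration, using NumPy, and not drawn from any system or study cited in this Article) trains a small neural network whose learned parameters are fully inspectable as numbers yet offer no human-readable account of how inputs are mapped to outputs:

```python
# Minimal sketch: a toy two-layer neural network trained on XOR by plain
# gradient descent. Its learned weights are fully visible as numbers, yet
# they do not "explain" the mapping from inputs to outputs in any
# human-readable way.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer parameters

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    p = sigmoid(h @ W2 + b2)                    # predictions
    d_out = (p - y) * p * (1 - p)               # output-layer error signal
    d_hid = (d_out @ W2.T) * h * (1 - h)        # back-propagated hidden error
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid); b1 -= 0.5 * d_hid.sum(axis=0)

print("predictions:", p.round(2).ravel())       # approach [0, 1, 1, 0]
print("hidden-layer weights:\n", W1.round(2))   # numbers, not reasons
```

Even at this trivial scale, the “reasons” for a prediction exist only as matrices of fitted coefficients; the opacity the cited authors describe is the same property at vastly greater scale.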
- 74Afonso Seixas-Nunes, Autonomous Weapons Systems and the Procedural Accountability Gap, 46 Brook. J. Int’l L. 421, 448 (2021).
- 75Joseph Blocher & Darrell A.H. Miller, Lethality, Public Carry, and Adequate Alternatives, 53 Harv. J. on Legis. 279, 291 (2016) (applying “Adequate Alternatives Analysis” to permissions to use lethal weapons to assess whether non-lethal alternatives are adequate and whether lethal weapons are necessary in a given context, since “[s]trictly speaking, to say that something is necessary is to say that there is no alternative for it”).
- 76U.N. Secretary-General LAWS Report, supra note 9, at 41–42.
- 77Id. at 46.
- 78Id. at 75.
- 79Tetyana (Tanya) Krupiy, Of Souls, Spirits and Ghosts: Transposing the Application of the Rules of Targeting to Lethal Autonomous Robots, 16 Melb. J. Int’l L. 145, 164–85 (2015).
- 80Id. at 175.
- 81Ali & Ramamurthy, supra note 21, at 10–11.
- 82Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), art. 48, June 8, 1977, 1125 U.N.T.S. 3.
- 83Ali & Ramamurthy, supra note 21, at 4.
- 84Krupiy, supra note 79, at 164.
- 851 Customary International Humanitarian Law 46 (Jean-Marie Henckaerts & Louise Doswald-Beck eds., Cambridge Univ. Press 2005).
- 86Protocol I, supra note 82, art. 51(5)(b).
- 87Prosecutor v. Galić, Case No. IT-98-29-A, Judgment, ¶ 58 (Int’l Crim. Trib. for the Former Yugoslavia Nov. 30, 2006).
- 88Prosecutor v. Prlić, Case No. IT-04-74-T, Judgment, ¶¶ 345–72, 1583 (Int’l Crim. Trib. for the Former Yugoslavia May 29, 2013).
- 89Id.
- 90Id.
- 91See, e.g., Human Rights Watch, Civilian Deaths in the NATO Air Campaign, 12 Hum. Rts. Watch Rep. no. 1(D) at 7–8 (Feb. 2000) [https://perma.cc/J86C-TEWH] (noting “there is almost a complete lack of any public accountability by any of the national NATO members for missions undertaken in the NATO alliance's name” and concluding “that civilian deaths in Operation Allied Force were not necessarily related to the pace or intensity of the war, but occurred as a result of decisions regarding target and weapons selection, or were caused by technical malfunction or human error”).
- 92The report considered the allegation that, by flying bombing aircraft at an altitude that would protect NATO pilots from enemy air defenses, NATO air commanders were unable to reasonably distinguish their targets. The Report concluded that “NATO air commanders have a duty to take practicable measures to distinguish military objectives from civilians or civilian objectives. The 15,000 feet minimum altitude adopted for part of the campaign may have meant the target could not be verified with the naked eye. However, it appears that with the use of modern technology, the obligation to distinguish was effectively carried out in the vast majority of cases during the bombing campaign.” Final Report to the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of Yugoslavia, 39 I.L.M. 1257, ¶¶ 50, 56 (June 13, 2000).
- 93Int’l Comm. of the Red Cross (ICRC), Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949, ch. 2.4, at 683–84 (Yves Sandoz et al. eds., 1987).
- 94The ICRC has voiced concern over the lawfulness of AWS, noting that the lack of human control poses fundamental issues with IHL’s requirement that weapon users in armed conflict be able to anticipate, control, and limit the effects of their weapons. See Australia’s Submission to the United Nations Secretary-General’s Report on Lethal Autonomous Weapons Systems, supra note 37, ¶ 21(c).
- 95Charles P. Trumbull IV, Autonomous Weapons: How Existing Law Can Regulate Future Weapons, 34 Emory Int’l L. Rev. 533, 545–48 (2020).
- 96See Group of Governmental Experts on Emerging Techs. in the Area of Lethal Autonomous Weapons Sys., Report of the 2023 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, U.N. Doc. CCW/GGE.1/2023/2, ¶ 21(c) (May 24, 2023) (concluding “[c]ontrol with regard to weapon systems based on emerging technologies in the area of LAWS is needed to uphold compliance with international law, in particular IHL, including the principles and requirements of distinction, proportionality and precautions in attack”). The report also concluded that “[i]n accordance with States’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, determination must be made whether its employment would, in some or all circumstances, be prohibited by international law.” Id. ¶ 23.
- 97Protocol I, supra note 82, art. 1(2); Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), June 8, 1977, 1125 U.N.T.S. 609.
- 98Id.
- 99Neil Davison, A Legal Perspective: Autonomous Weapon Systems Under International Humanitarian Law, in UNODA Occasional Papers No. 30, November 2017: Perspectives on Lethal Autonomous Weapons Systems 5, 8 (2018).
- 100Hance, supra note 8, at 157.
- 101Perrin, supra note 32, at 4.
- 102U.N. Secretary-General LAWS Report, supra note 9, at 28.
- 103Id. at 59.
- 104Id. at 20.
- 105Id. at 95.
- 106Id. at 49.
- 107U.N. Secretary-General LAWS Report, supra note 9, at 64.
- 108Id. at 47.
- 109Id. at 93.
- 110Id. at 25.
- 111Id. at 81.
- 112Id. at 89.
- 113Id. at 95.
- 114Rome Statute of the International Criminal Court, art. 25(1), July 17, 1998, 2187 U.N.T.S. 90.
- 115See Krupiy, supra note 16, at 70, 86, 111 (proposing that, in response to current inadequate legal frameworks, LAWS should be re-conceptualized as “relational entities” in a distributed “matrix” of accountability—not independent moral agents, but technological artifacts whose actions are deeply embedded in, and shaped by, relationships with various human stakeholders; and proposing a new legal test under which accountability should attach to any individual who exercises meaningful authority over an AWS if (1) their exercise of authority was significant to development, policy, deployment, or operation; (2) they acted with intent, recklessness, or even (in some contexts) gross negligence or negligence; and (3) their actions or omissions foreseeably contributed to a war crime).
- 116U.N. Secretary-General LAWS Report, supra note 9, at 112.
- 117Ali & Ramamurthy, supra note 21, at 11.
- 118U.N. Secretary-General LAWS Report, supra note 9, at 20.
- 119Id. at 22.
- 120Id. at 53.
- 121Id. at 45.
- 122Id. at 36.
- 123U.N. Secretary-General LAWS Report, supra note 9, at 114.
- 124Id. at 43.
- 125Id. at 115.
- 126Id. at 62–63.
- 127Id. at 91.
- 128Australia’s Submission to the United Nations Secretary-General’s Report on Lethal Autonomous Weapons Systems, supra note 37, ¶ 26.
- 129Ingvild Bode, Falling Under the Radar: The Problem of Algorithmic Bias and Military Applications of AI, ICRC Hum. L. & Pol’y Blog (Mar. 14, 2024), https://perma.cc/7DD3-BJ8G.
- 130See, e.g., Katherine Chandler, Does Military AI Have Gender? Understanding Bias and Promoting Ethical Approaches in Military Applications of AI, United Nations Inst. for Disarmament Research (UNIDIR) 11 (2021), https://perma.cc/GJX6-C9KP (“Artificial intelligence encodes the patterns found in the data it is trained on. A machine learning model trained on Google News articles, for example, exhibited disturbing patterns of female/male gender stereotypes, reproducing historical bias.”).
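The mechanism Chandler describes, a model encoding whatever patterns its training data contain, can be sketched with a deliberately skewed synthetic example (hypothetical data and feature names, written with NumPy and scikit-learn; no real dataset or deployed system is implied):

```python
# Minimal sketch (synthetic data, hypothetical feature names): a model trained
# on historically skewed hiring decisions reproduces the skew, even though the
# "merit" feature is distributed identically across the two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
merit = rng.normal(size=n)                  # identical distribution in both groups

# Historical labels: past decision-makers favored group A at equal merit.
hired = (merit + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.8

model = LogisticRegression().fit(np.column_stack([merit, group]), hired)

# Probe the learned pattern: identical merit (0), different group.
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1].round(2))   # group A scores markedly higher
```

Because the skew sits in the historical labels rather than in any explicit rule, the fitted model reproduces it for candidates who are otherwise identical.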
- 131Julia Angwin et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 23, 2016), https://perma.cc/2DU9-SPJG.
- 132See, e.g., Chandler, supra note 130, at 4, 10.
- 133See Ritika Sagar, Google Translate Has Gender Bias. And It Needs Fixing, Analytics India Mag. (June 30, 2021), https://perma.cc/YS3R-3AB2; Aleksandra Sorokovikova et al., Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models, arXiv:2506.10491v2 [cs.CL] (Sep. 1, 2025).
- 134U.N. Secretary-General LAWS Report, supra note 9, at 23.
- 135Id. at 26.
- 136Id. at 51.
- 137Id. at 60.
- 138Id. at 34.
- 139Id. at 72.
- 140Id. at 81.
- 141Id.
- 142Alexander Blanchard & Laura Bruun, Bias in Military AI, Stockholm Int’l Peace Research Inst. (SIPRI), SIPRI Background Paper No. 7 (2024).
- 143Chandler, supra note 130, at 12.
- 144See Sinead O’Connor & Helen Liu, Gender Bias Perpetuation and Mitigation in AI Technologies: Challenges and Opportunities, 39 AI & Soc. 2045, 2045–57 (2024) (discussing how opaque AI systems can reproduce both the biases embedded in their training data and those introduced by developers).
- 145See, e.g., Int’l Comm. of the Red Cross, ICRC Submission on Autonomous Weapon Systems to the United Nations Secretary-General Re: ODA-2024-00019/LAWS, at 5 (Mar. 19, 2024), perma.cc/6RBU-2LBF (submitting concerns that the “black box” nature of military AWS holds the risk that force could be applied in “circumstances and in a manner unforeseen to the human user,” making the weapon indiscriminate).
- 146U.N. Secretary-General LAWS Report, supra note 9, at 60.
- 147Ireland, National Submission Pursuant to G.A. Res. 78/241, U.N. Secretary-General’s Report on Lethal Autonomous Weapons Systems, at 4–5 (May 24, 2024), https://perma.cc/C8DC-ZRYJ.
- 148Can., Reply received pursuant to General Assembly Resolution 78/241, U.N. Doc. 78-241-Canada-EN, at 3 (2024), https://perma.cc/BQ5F-CYLD.
- 149Id.
- 150Id.
- 151U.N. Secretary-General LAWS Report, supra note 9, at 33.
- 152U.N. Secretary-General LAWS Report, supra note 9, at 44.
- 153Population of global offline continues steady decline to 2.6 billion people in 2023, ITU (Sep. 12, 2023), perma.cc/2Z35-ZUBV.
- 154Tshilidzi Marwala, Militarization of AI Has Severe Implications for Global Security and Warfare, United Nations Univ. (July 23, 2023), perma.cc/CWA9-BE4S (highlighting that “striking a balance between the need for data to train AI systems and the need to safeguard privacy remains a formidable challenge. Data bias is a further crucial concern. If the data used to train AI systems is prejudiced, the AI systems can become biased and produce unfair or discriminatory results”).
- 155G.A. Res. A/78/L.49 (Mar. 11, 2024).
- 156U.N. Secretary-General LAWS Report, supra note 9, at 44.
- 157Id.
- 158Id. at 51.
- 159Id. at 72.
- 160Id. at 81.
- 161Id. at 175.
- 162Id. at 55.
- 163Id.
- 164Their study examined three facial recognition systems and found that all demonstrated significantly higher accuracy in recognizing male faces compared to female faces, with overall superior performance in identifying individuals with lighter skin tones. The most flawed system failed to recognize the faces of dark-skinned women in one-third of cases. Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 Proc. Mach. Learning Rsch. (PLMR) 1, 9 (2018).
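The disaggregated evaluation underlying such findings can be illustrated with a short sketch; the subgroup shares and error rates below are synthetic choices made for illustration, not the Gender Shades figures, although the last rate loosely mirrors the reported one-in-three failure rate:

```python
# Minimal sketch (synthetic numbers, not the Gender Shades data): breaking a
# single "overall accuracy" figure down by subgroup can reveal failure rates
# that the aggregate number conceals.
import numpy as np

rng = np.random.default_rng(2)
subgroup = rng.choice(["lighter_male", "lighter_female",
                       "darker_male", "darker_female"],
                      size=5000, p=[0.4, 0.3, 0.2, 0.1])
# Hypothetical per-subgroup error rates; the last mirrors a ~1-in-3 failure rate.
error_rate = {"lighter_male": 0.01, "lighter_female": 0.07,
              "darker_male": 0.12, "darker_female": 0.34}
correct = np.array([rng.random() > error_rate[g] for g in subgroup])

print(f"overall accuracy: {correct.mean():.2%}")
for g in error_rate:
    mask = subgroup == g
    print(f"{g:>15}: {correct[mask].mean():.2%} accuracy over {mask.sum()} faces")
```

A single aggregate accuracy figure, dominated by the largest and best-served subgroup, can look reassuring while concealing a subgroup for which the system fails routinely.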
- 165See generally Women in Tech Stats 2025: Uncovering Trends and Unseen Data, WomenTech Network, perma.cc/2F54-LKH4 (last visited July 2025).
- 166See, e.g., McGinnis, supra note 4, at 1255 (“More powerful machine intelligence would help analyze the data about all aspects of the world—data that is growing at an exponential rate. It may also help make connections between policies and consequences that would otherwise go overlooked by humans, acting as a fire alarm against dangers from new technologies whose chain of effects may be hard to assess even if the changes are quite imminent in historical terms. Such analysis is not only useful to avoiding disaster but also to taking advantage of the cornucopia of benefits from accelerating technology. Better analysis of future consequences may help the government craft the best policy toward nurturing such beneficent technologies, including providing appropriate prizes and support for their development. Perhaps more importantly, better analysis about the effects of technological advances will tamp down on the fears often sparked by technological change. The better our analysis of the future consequences of current technology, the less likely it is that such fears will smother beneficial innovations before they can deliver Promethean progress.”).
- 167See Major Annemarie Vazquez, Laws and Lawyers: Lethal Autonomous Weapons Bring LOAC Issues to the Design Table, and Judge Advocates Need to Be There, 228 Mil. L. Rev. 89, 101 (2020) (enumerating the “multiple human touch points occurring across the design timeline that offer critical opportunities for human involvement and understanding”).
- 168Australia’s Submission to the United Nations Secretary-General’s Report on Lethal Autonomous Weapons Systems, supra note 37, ¶ 43, perma.cc/555V-9CGV.
- 169Id. ¶ 45.
- 170See Korin Munsterman, When Algorithms Judge Your Credit: Understanding AI Bias in Lending Decisions, Accessible L. Rev. (Spring 2025), perma.cc/58R2-VMJ7 (describing how AI-powered lending algorithms can perpetuate discrimination—for example, the Apple Card controversy where women were assigned lower credit limits than men; algorithms flagging minority applicants as high risk; and Wells Fargo’s mortgage algorithm giving Black and Latino applicants higher risk scores than similarly qualified white applicants).
- 171See Kadin Mesriani, AI & HR: Algorithmic Discrimination in the Workplace, Cornell J.L. & Pub. Pol’y: The Issue Spotter (Nov. 21, 2024), perma.cc/Y5BE-ZRFV (detailing cases of AI-driven hiring bias, such as HireVue’s video assessments discriminating against candidates based on facial expressions and Workday’s algorithms excluding applicants by race, age, or disability).
- 172See James Zou & Londa Schiebinger, Ensuring that Biomedical AI Benefits Diverse Populations, 67 eBioMedicine 103358 (2021) (highlighting biases in biomedical AI, such as under-identifying Black patients’ needs, inaccurate pulse oximeter readings for darker skin, and lack of diverse skin images in AI datasets). See also Ted A. James, Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care, Harv. Med. Sch. Pro. Corp. Continuing Educ. (Sep. 24, 2024), perma.cc/HC2E-YPKM (describing how AI models in U.S. health systems prioritized healthier white patients over sicker Black patients by learning from biased cost data, thus entrenching disparities in care).
- 173See Eylül Bozkır et al., Predictive Policing or Predictive Prejudice? A Study of the Legal, Historical and Ethical Implications of AI in Policing, OxJournal (Aug. 18, 2025), perma.cc/E29C-AMAB (highlighting predictive policing’s replication of racial bias through data reflecting past discrimination—e.g., Chicago “heat list” targeting young Black men without criminal records).
- 174See Melissa Hamilton, Justice Served? Discrimination in Algorithmic Risk Assessment, Rsch. Outreach (Sep. 19, 2019), perma.cc/6J6L-FX43 (illustrating how risk assessment algorithms perpetuate racial and gender bias by recycling historical arrest and conviction data); one notable case involved the COMPAS Risk Assessment System, an algorithm used in the U.S. criminal justice system to predict recidivism. Studies found that the algorithm disproportionately flagged Black defendants as high-risk, reflecting broader systemic biases in the criminal justice system.
- 175See Jeffrey Dastin, Insight: Amazon scraps secret AI recruiting tool that showed bias against women, Reuters (Oct. 11, 2018), perma.cc/ZN4W-BZE9. Amplifying gender bias by training on male-dominated historical data, Amazon’s AI recruiting tool illustrates how feedback loops in algorithms can perpetuate discrimination.
- 176See Artificial Intelligence in Predictive Policing Issue Brief, NAACP, perma.cc/R22Z-BTDQ (highlighting risks of racial bias stemming from reliance on historically discriminatory crime data in AI predictive policing and calling for oversight and regulation).
- 177See Jonathan Shaw, Confronting Pitfalls of Machine Learning, Artificial Intelligence, Harv. Mag. (Dec. 6, 2018), https://perma.cc/P57R-78PK (arguing that data-driven bias rooted in social inequality demands algorithmic fairness solutions that push beyond computer science to interdisciplinary, systems-wide interventions). See also Eleanor Drage & Kerry Mackereth, Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”, 35 Phil. & Tech. 89 (2022) (discussing claims of algorithmic neutrality in recruitment AI—algorithms described as “blind to age, gender, or skin color”—and arguing that such claims misunderstand race and gender as isolatable rather than systemic, overlooking how algorithms reinforce broader systems of inequality by inadvertently entrenching rather than eradicating bias).
- 178See Lin-Greenberg, supra note 73 (defining “black box”).
- 179Seixas-Nunes, supra note 74, at 421.
- 180See Vazquez, supra note 167, at 100 (defining “explainable AI”—one of the U.S. Department of Defense’s five AI Ethics Principles—as “traceable and knowable,” but arguing that “its early-stage tools are not yet suited for LAWs”).
- 181Bernard Marr, What Is Deep Learning AI? A Simple Guide with 8 Practical Examples, Forbes (Oct. 1, 2018, 12:16 AM), https://perma.cc/X6AH-3NSY (updated Dec. 10, 2021).
- 182Chandler, supra note 130.
- 183AI in Weapons Systems Committee, Proceed with Caution: Artificial Intelligence in Weapon Systems, HL Paper 16 (Dec. 1, 2023).
- 184U.N. Secretary-General LAWS Report, supra note 9, at 119.
- 185Id.
- 186See Bode, supra note 129.
- 187See U.N. Office of Counter-Terrorism & U.N. Interregional Crime and Justice Research Institute, Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes, at 21, U.N. Doc. S/2716 (2021), perma.cc/FYZ4-YVPT (“The violation of rights can result from an unjustifiable or disproportionate use of AI, or it can be unintentional, for instance, through the use of unconsciously biased data to train machine learning algorithms, resulting in unfair decisions discriminating against individuals, groups or communities on prohibited grounds.”).
- 188See Blanchard & Bruun, supra note 142, at 8.
- 189Seixas-Nunes, supra note 74, at 435.
- 190See Vinhcent Le, Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination, The Greenlining Institute, at 24 (Feb. 18, 2021) (citing, for example, how a pricing algorithm used by The Princeton Review, an educational company, charged higher prices to customers in ZIP codes with large Asian populations, resulting in Asian-Americans being nearly twice as likely to receive high online SAT tutoring quotes—even in low-income neighborhoods). See also Emma Stone, AI in Healthcare: Counteracting Algorithmic Bias, B.U. Deerfield: J. CAS Writing Program (Apr. 14, 2024), perma.cc/6LWH-MGNN (showing bias when spending or zip code was used as a proxy for race—at-risk Black patients’ needs were missed due to historic access barriers).
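The proxy problem these sources describe, in which an algorithm predicts spending although the quantity that matters is need, can be sketched as follows (synthetic data and hypothetical variable names, using NumPy and scikit-learn; this illustrates the general mechanism rather than reconstructing any cited system):

```python
# Minimal sketch (synthetic data): when historical spending is used as a proxy
# label for health need, a group whose access to care has been restricted is
# systematically under-flagged, even at identical underlying need.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 20_000
group_b = rng.integers(0, 2, n)                    # 1 = historically underserved group
need = rng.gamma(shape=2.0, scale=1.0, size=n)     # true health need, same in both groups
access = np.where(group_b == 1, 0.6, 1.0)          # access barriers suppress care received
prior_spending = need * access + rng.normal(scale=0.2, size=n)
spending = need * access + rng.normal(scale=0.2, size=n)   # the proxy label

# The model never sees the group label; it predicts spending from prior spending.
X = prior_spending.reshape(-1, 1)
score = LinearRegression().fit(X, spending).predict(X)

flagged = score > np.quantile(score, 0.8)          # top 20% referred to extra care
high_need = need > np.quantile(need, 0.8)
for g, name in ((0, "group A"), (1, "group B")):
    m = (group_b == g) & high_need
    print(f"{name}: {flagged[m].mean():.0%} of equally high-need patients flagged")
```

The group label never enters the model; the disparity arises entirely because the proxy label, and the history it reflects, understates need for the underserved group.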
- 191See Mario DeSean Booker & FaLessia Camille Booker, The Name Game: Algorithmic Gatekeeping and the Systematic Exclusion of Ethnic Names in Digital Hiring, 9 Int’l J. Rsch. & Innovation Soc. Sci. 2210 (2025) (showing algorithms penalize ethnic-sounding names, e.g., “Adewale Adeyemi” gets fewer callbacks than “Andrew Anderson”; downgrading African American Vernacular English and accents).
- 192Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era, 143 J. Fin. Econ. 30 (2022).
- 193Public Affairs, Mortgage algorithms perpetuate racial bias in lending, study finds, Berkeley News (Nov. 13, 2018), perma.cc/EC4L-3UKK.
- 194Seixas-Nunes, supra note 74, at 441.
- 195See Blanchard & Bruun, supra note 142, at 8.
- 196O’Connor & Liu, supra note 144, at 2045–57.
- 197U.N. Secretary-General LAWS Report, supra note 9, at 102.
- 198Bode, supra note 129.
- 199Blanchard & Bruun, supra note 142.
- 200Jack M. Beard, Autonomous Weapons and Human Responsibilities, 45 Geo. J. Int’l L. 617, 669–70 (2014).
- 201U.N. Secretary-General LAWS Report, supra note 9 at 60.
- 202Id.
- 203Id.
- 204Id. at 61.
- 205Id. at 26.
- 206Id. at 108.
- 207Id. at 142.
- 208Trishan Panch et al., Artificial Intelligence and Algorithmic Bias: Implications for Health Systems, 9(2) J. Glob. Health 1 (2019).
- 209Anna Nadibaidze, ‘Responsible AI’ in the Military Domain: Implications for Regulation, Opinio Juris (Mar. 23, 2023), perma.cc/V84K-WEXV.
- 210Ingvild Bode & Ishmael Bhila, The Problem of Algorithmic Bias in AI-based Military Decision Support Systems, ICRC Humanitarian L. & Pol’y (2024).
- 211See Blanchard & Bruun, supra note 142, at 8 (warning that “machine learning systems could infer threats based on racial and gender stereotypes. This exacerbates the risks already associated with militaries conducting operations in locations where they have a poor understanding of sociocultural context and traditions”).
- 212Bode & Bhila, supra note 210.
- 213Id.
- 214U.N. Off. for Disarmament Affs., Convention on Certain Conventional Weapons—Group of Governmental Experts on Lethal Autonomous Weapons Systems (Mar. 4–8, 2024), https://perma.cc/B6XE-DR4B.
- 215Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Addressing Bias in Autonomous Weapons, Submitted by Austria, Belgium, Canada, Costa Rica, Germany, Ireland, Luxembourg, Mexico, Panama, and Uruguay, CCW/GGE.1/2024/WP.5 (Mar. 8, 2024).
- 216See Panch et al., supra note 208, and accompanying discussion and definition of “algorithmic bias.”
- 217See CIHL Rule 88 (Non-Discrimination); see also ICRC, Customary International Humanitarian Law, Vol. I: Rules, Rule 88 (2005), https://perma.cc/LP9R-AW64.
- 218Protocol I, supra note 82, art. 85(4)(c).
- 219Erica H. Ma, Autonomous Weapons Systems Under International Law, 95 N.Y.U. L. Rev. 1435, 1468–69 (2020).
- 220Austin Tarullo, Shock & Awe: Lethal Autonomous Weapons Systems and the Erosion of Military Accountability, 2024 B.C. Intell. Prop. & Tech. F. 11 (2024).
- 221Ali & Ramamurthy, supra note 21, at 2, 5.
- 222See Bode & Bhila, supra note 210.
- 223See Blanchard & Bruun, supra note 142, at 8.
- 224Id.
- 225Perrin, supra note 32, at 1, 5.
- 226See Gary E. Marchant et al., International Governance of Autonomous Military Robots, 12 Colum. Sci. & Tech. L. Rev. 272, 281 (2011) (discussing Australian philosopher Robert Sparrow’s philosophical examination of the ongoing debate on the ethics of lethal autonomous robots, including the complexities of “assigning ethical and legal responsibility to someone, or something, if an autonomous robot commits a war crime,” and exploring the respective difficulties of attributing culpability to “‘the programmer,’ ‘the commanding officer,’ and ‘the machine’ itself”).
- 227Seixas-Nunes, supra note 74, at 448.
- 228John Cherry & Durward Johnson, Maintaining Command and Control (C2) of Lethal Autonomous Weapon Systems: Legal and Policy Considerations, 27 Sw. J. Int’l L. 1, 5 (2020).
- 229See Blanchard & Bruun, supra note 142, at 8.
- 230Thompson Chengeta, Defining the Emerging Notion of “Meaningful Human Control” in Weapon Systems, 49 N.Y.U. J. Int’l L. & Pol. 833, 862–63 (2017).
- 231See Blanchard & Bruun, supra note 142, at 8.
- 232See Bryant Walker Smith, Controlling Humans and Machines, 30 Temple Int'l & Comp. L.J. 167, 176 (2016) (arguing that focusing on the concept of “meaningful human control” can minimize automated system complexity, since “human and machine both empower and limit the other,” and that “[t]he fundamental functional question is whether such a system . . . committed to international humanitarian law—can remain robust when human or machine elements fail”).
- 233Caitlin Mitchell, When Laws Govern LAWS: A Review of the 2018 Discussions of the Group of Governmental Experts on the Implementation and Regulation of Lethal Autonomous Weapons Systems, 36 Santa Clara High Tech. L.J. 407, 421 (2020).
- 234See Blanchard & Bruun, supra note 142, at 8.
- 235Id.
- 236Seixas-Nunes, supra note 74, at 440.
- 237Tay: Microsoft Issues Apology over Racist Chatbot Fiasco, BBC News (Mar. 24, 2016), https://perma.cc/2A4N-ZCTP.
- 238Oona A. Hathaway & Azmat Khan, “Mistakes” in War, 173 U. Pa. L. Rev. 1, 7 (2024).
- 239Id. at 2.
- 240Id.
- 241Id.
- 242Id. at 84.
- 243U.N. Secretary-General, Secretary-General’s Message to the Meeting of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (Mar. 25, 2019), perma.cc/8SW2-L8R7 [hereinafter U.N. Secretary-General GGE LAWS Statement].
- 244‘Warp Speed’ Technology Must be ‘Force for Good’ UN Chief Tells Web Leaders, U.N. News (Nov. 5, 2018), perma.cc/8SHT-4SQT.
- 245Press Release, U.N. Secretary-General, Lethal Autonomous Weapon System “Politically Unacceptable, Morally Repugnant and Should Be Banned,” Secretary-General Says during Informal Consultations on Issue, U.N. Press Release SG/SM/22643 (May 12, 2025), perma.cc/54SD-PEQ7.
- 246See Gary Brown, Out of the Loop, 30 Temple Int’l & Comp. L. J. 43, 46–47 (2017) (highlighting the weaknesses of human decision-making in combat situations, like exhaustion, or how anger, fear, anxiety, and other negative emotions can distort morality—leading to lopsided mercy—or simply self-preservation, which leads people to cover up misconduct or jeopardize others’ safety for their own).
- 247See Edmund L. Andrews, How Flawed Data Aggravates Inequality in Credit, Stan. Univ. Hum. Centered A.I., (Aug. 6, 2021), perma.cc/9RWC-KP53 (documenting how AI credit-scoring models exhibit 5-10% lower accuracy for minority and low-income borrowers due to flawed underlying data rather than algorithmic bias, thereby perpetuating discriminatory lending practices).
- 248Combe, supra note 72, at 42–43.
- 249See Zaira Romeo & Alberto Testolin, Artificial Intelligence Can Emulate Human Normative Judgments on Emotional Visual Scenes, 12 Royal Soc’y Open Sci. 1, 6 (2025) (finding that AI can closely emulate average human emotional ratings for images via statistical learning, though it does so without genuine affective experience).
- 250See Paul R. Williams & Ryan Jane Westlake, A Taste of Armageddon: Legal Considerations for Lethal Autonomous Weapons Systems, 57 Case W. Rsrv. J. Int’l L. 187, 197–99 (2025) (recounting a U.S. soldier’s account of his troop seeing a five- or six-year-old Afghan girl use a radio shortly before they were attacked by Taliban forces; “under the laws of armed conflict, the girl would qualify as a combatant . . . however, [the U.S. veteran] reflects that killing the girl was not an option that the soldiers even considered—even though such action would be permissible under law, empathy and moral decency would convince many people not to take it,” pointing to some consistency in human judgment and raising questions about the impact of “the absence of moral reasoning” on AI-driven warfare).
- 251See Ryan Lanclos, Drones for Humanitarian Aid—An Innovative Approach in the Global South, Forbes (Mar. 6, 2025), perma.cc/4NGU-D2EP (describing use of drones and GIS mapping to localize humanitarian aid in the Global South). See also Devanand Ramiah, Five Ways Artificial Intelligence Can Help Crisis Response, World Econ. F. (Dec. 6, 2024), perma.cc/5WDF-MBTR (describing how the U.N. Development Programme leverages AI for crisis impact assessment, staff deployment, training, digital assistants, and early warning systems).
- 252Chris Baraniuk, Earthquake Artificial Intelligence Knows Where Damage Is Worst, NewScientist (Sep. 30, 2015), perma.cc/LSD9-FCA6.
- 253Timothy Amukele, Using Drones to Deliver Blood Products in Rwanda, 10 Lancet Glob. Health 463, 463–64 (2022).
- 254See, e.g., Mansi Nautiyal et al., Revolutionizing Agriculture: A Comprehensive Review on Artificial Intelligence Applications in Enhancing Properties of Agricultural Produce, 29 Food Chemistry X 1 (2025).
- 255Autonomous Weapons: An Open Letter from AI & Robotics Researchers, Future Life Inst. (July 28, 2015), https://perma.cc/77DG-X2KR.
- 256See Mitchell, supra note 233, at 419–22 (outlining the first set of issues discussed by the GGE at its initial 2018 session, including the spectrum of autonomy in the characterization of LAWS and the human element in the development and deployment of LAWS).
- 257See Chris Jenks, False Rubicons, Moral Panic, & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons, 44 Pepp. L. Rev. 1, 13–19, 59–69 (2016) (describing how the debate on how to define “autonomy” and “full autonomy” is essentially meaningless in policy and technical discourse because of how human and machine abilities are integrated, and advocating instead for a shift towards narrowly focused, practical moratoria on specific LAWS technologies, which could more effectively move conversations and policy forward).
- 258Shane R. Reeves et al., Challenges in Regulating Lethal Autonomous Weapons Under International Law, 27 Sw. J. Int’l L. 101, 105 (2020).
- 259See, e.g., Office of the Under Secretary of Defense for Policy, DOD Directive 3000.09, “Autonomy in Weapon Systems” (Jan. 25, 2023) (outlining the U.S. military policy for autonomous weapons), https://perma.cc/BVM8-KXK4.
- 260Reeves et al., supra note 258, at 105.
- 261Id. at 106.
- 262Autonomous Weapon Systems: Is It Morally Acceptable for a Machine to Make Life and Death Decisions?, ICRC (Apr. 13, 2015) (statement of the ICRC to the UN GGE on Lethal Autonomous Weapons Systems), https://perma.cc/QX5L-MU3C.
- 263Reeves et al., supra note 258, at 107.
- 264See Chengeta, supra note 230, at 888–89 (proposing a multi-pronged definition for “meaningful human control”).
- 265Hugo Grotius, On the Law of War and Peace, Book III, Chapters XI–XII (A.C. Campbell trans., 1814) (first published 1625).
- 266Zachary Kallenborn, Swords and Shields: Autonomy, AI, and the Offense-Defense Balance, Geo. J. Int’l Affs. (Nov. 22, 2021).
- 267Williams & Westlake, supra note 250, at 191.
- 268Trumbull, supra note 95, at 576.
- 269Id.
- 270See Beard, supra note 200, at 664 (discussing the need for human judgment and how it differs from concepts of morality when discussing the capabilities of existing and future autonomous systems in making “[t]he inherently subjective and complex decisions required for applying the proportionality principle so as to avoid ‘excessive’ civilian casualties”).
- 271Protocol I, supra note 82, art. 51(5)(b).
- 272See Beard, supra note 200, at 665 (describing the proportionality principle as encompassing “balancing tests”).
- 273Krupiy, supra note 79, at Section VI.
- 274See Protocol I, supra note 82, art. 35(2).
- 275Hague Convention (IV) Respecting the Laws and Customs of War on Land art. 23(e), Oct. 18, 1907, 36 Stat. 2277.
- 276Michael A. Newton, Back to the Future: Reflections on the Advent of Autonomous Weapons Systems, 47 Case W. Rsrv. J. Int’l L. 5, 17–19 (2015).