This Article discusses the development of the modern legal consequences of surrender under the law of armed conflict and explores how technologically enabled surrender is being used in Ukraine. It concludes with an analysis of the impact of these technologies on the surrender process and presents an adaptive interpretation of existing norms, organized around three overarching themes.
Law of War and Armed Conflict
In paragraph 2 of resolution 78/241 on lethal autonomous weapons systems, the U.N. General Assembly requested the Secretary-General to seek the views of Member States and Observer States on lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on ways to address the multifaceted challenges and concerns raised by LAWS, including their humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of humans in the use of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, reflecting the full range of views received and including an annex containing those submissions for further consideration by Member States.
In implementation of this directive, on February 1, 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to paragraph 2 of resolution 78/241 and inviting their formal input. Corresponding communications, notes verbales and letters, were also sent to the entities identified in paragraph 3 of the resolution, requesting their contributions. For the first time, this Article analyzes the positions on LAWS that States submitted to the Secretary-General in 2024 pursuant to that request, which called for views, inter alia, “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force.” The Article focuses on Member States’ positions on human-centric approaches to LAWS and on compliance with international humanitarian law. It argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of algorithmic bias. The last several years of data analysis have shown that data bias and algorithmic bias can produce unintended consequences that risk unlawful discrimination. From housing and finance to mortgage lending, creditworthiness, college admissions, and job recruitment, the use of artificial intelligence (AI) has produced unintended consequences that fall most heavily on women and minorities. Relying on potentially biased inputs, the “black box” of a machine can magnify those biases in its outputs or decisions, and machine learning can even enable algorithms to learn to discriminate.
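To make that mechanism concrete, the following minimal sketch offers a purely hypothetical illustration, not a depiction of any weapons, lending, or admissions system discussed in this Article. A classifier is trained on biased historical labels with the protected attribute deliberately withheld, yet it reproduces the disparity through a correlated proxy feature. All data and variable names in the sketch are invented.

```python
# Hypothetical sketch: a "group-blind" classifier that still learns to
# discriminate, because a proxy feature correlates with group membership
# and the historical labels it trains on were themselves biased.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # invented protected attribute (0/1)
proxy = group + rng.normal(0.0, 0.5, n)        # feature correlated with group
merit = rng.normal(0.0, 1.0, n)                # legitimate predictor

# Biased historical labels: identical merit, but group 1 was approved less often.
logits = 1.5 * merit - 1.2 * group
labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Train logistic regression on (merit, proxy) only; the group variable is omitted.
X = np.column_stack([np.ones(n), merit, proxy])
w = np.zeros(3)
for _ in range(2000):                          # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - labels)) / n

# Despite "blindness" to group, predicted approval rates diverge via the proxy.
pred = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

In the sketch, the model never sees the group variable, yet the weight it learns on the proxy recreates the historical disparity in its outputs. This is the kind of patterned, data-driven error, invisible inside the “black box,” that the accountability analysis below addresses.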
AI mistakes are often patterned, reflecting regularities in training data, in algorithms, or in the AI’s fundamental design. The Article asks whether Yale Law School Professor Oona Hathaway’s recent arguments on individual and state responsibility for patterned “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do, and will continue to, make life-and-death decisions without human input. Who, then, will be responsible for the “mistakes” in war?
Although much has been written about algorithmic bias, an “algorithmic divide” can also create an AI-driven weapons asymmetry between nation-states, depending on which states have access to AI. In the final analysis, the Article argues that the transformative potential of AI must be harnessed not in conflict but in conflict resolution.
Big data—extremely large quantities of information and the analytics used to process it—is now crucial to the way militaries operate on the battlefield. Data is used to run weapons systems, analyze intelligence, procure and deploy personnel, evaluate battlefield conditions, detain prisoners, and more. And not only is data increasingly being used on the battlefield, but operations targeting adversaries’ data—to acquire it, delete and destroy it, or distort or poison it—are becoming increasingly important as well. Beyond the battlefield, big data lies at the epicenter of adversarial activities below the armed conflict threshold. Because data is the fuel of artificial intelligence (AI), it is generating an AI arms race among the United States, China, Russia, and other states, incentivizing large-scale cyber operations related to data. And big data is increasingly central to humanitarian operations on, and adjacent to, the battlefield, for example, to monitor humanitarian crises, facilitate early warning systems, and deliver aid, as well as to investigate and prosecute atrocities.
All of these uses of data in military operations raise challenging interpretive questions under key bodies of international law: international humanitarian law (IHL), the jus ad bellum, and international human rights law (IHRL). But they also challenge us to consider anew several long-standing critiques of legalism in the international sphere: the efficacy critique (are these laws effective at all in constraining state and non-state actors?), the legitimation critique (do the laws of war sanitize, and thereby legitimate, acts of aggression?), and the adaptability critique (is law simply too slow to adapt to rapid technological or societal change?).
This Article uses the rise of big data on the battlefield, first, to respond to these critiques and defend the importance of legalism in addressing armed conflict, and, second, to consider the multiple interpretive challenges and gaps in the law created by the new techno-social reality of big data on the battlefield. As in other instances of disruptive technological and societal change, the laws of armed conflict must be both justified anew and then adjusted, whether through textual gap-filling, interpretive translation, policymaking, or the construction of new legal paradigms.
In a potential future peer or near-peer conflict, the technological capabilities that are both taken for granted and a source of military superiority will be an immediate, high-value target. The adversary will seek to shut down global navigation and positioning systems, satellite imaging, precision guidance, instantaneous communication, and much more. Turning off the technology, or fighting “in the dark,” presents complex operational and tactical challenges of navigation, logistics, communication, command and control, coordination, and targeting, to name just a few. Executing military operations in such a technology-deprived environment, however, also requires applying and implementing the law of armed conflict (LOAC) in the dark, which introduces a parallel set of challenges and concerns.
This Essay explores the challenges for the law when all the technological capabilities that are deeply incorporated into our daily lives and our military operations are not available in armed conflict, because the capabilities have been turned off, jammed, spoofed, or taken down. The law of armed conflict, in contrast, will not be turned off: LOAC applies regardless of capabilities, the type of conflict, or any other characteristic of a particular conflict. A first challenge lies in applying LOAC in such situations, including training for the wars the military will need to fight, new questions of interoperability with partners and allies, and a more careful understanding of the relationship between law and policy in the implementation of military operations. Second, applying LOAC “in the dark” risks placing significant pressure on the law as our understandings of, and discourse about, key principles are put to new tests. Consider proportionality and precautions, for example: current implementation of both core targeting principles relies heavily on technological capabilities that may be degraded or rendered unavailable. Yet the absence of those capabilities does not diminish or alter these core legal obligations, highlighting the need to analyze and reaffirm the meaning and application of these fundamental rules. Other pillars of LOAC that will face significant pressure include the role of reasonableness, doubt, and certainty in decision-making and the relationship between capabilities and obligations.
China has upset the security balance in East Asia through the development of a long-range strike complex composed of anti-ship ballistic missiles, drones and cruise missiles, and hypersonic missiles that put U.S. naval fleets at risk. Beijing’s innovative approach to sea control through the projection of power from land-based fires highlights three important differences between the law applicable to naval warfare and the law of armed conflict (LOAC) as implemented on land.1 These legal distinctions are subtle, but they shape the concrete choices available to naval commanders and could determine the outcome of a war at sea.
First, the standard for what constitutes a military objective in naval warfare is broader than in land warfare. For example, enemy war-sustaining industries and commercial shipping may be captured or even destroyed in conflict at sea, whereas private property on land is generally protected.
Second, in war on land, commanders must take all feasible precautions in attack, considering alternative methods or means to reduce injury to civilians or damage to civilian objects, a high standard. During armed conflict at sea, only reasonable precautions are required. This lower bar makes sense because civilians and civilian objects are less likely to be caught up in a naval war. The practical result is that war at sea demands fewer precautions.
Third, attacks against military objectives under the law of armed conflict require a proportionality analysis, which operates differently at sea than on land. Those who plan, approve, or execute an attack are subject to the rule of proportionality, which prohibits attacks in which the expected collateral damage is excessive relative to the anticipated military advantage. Since naval warfare is fought from platforms, such as warships, submarines, and military aircraft, the proportionality analysis includes only civilians and civilian objects near the targeted platform, not those on board it.
These legal nuances would govern any naval conflict between China and the United States and could quickly intensify and widen the conflict.
1. This Article uses the terms “law of naval warfare” and international law applicable to “war at sea” or “conflict at sea” interchangeably. James Kraska et al., The Newport Manual on the Law of Naval Warfare, 101 Int’l L. Stud. i, xiii (2023).
The temporal boundaries of the international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and offensive cyber capabilities, only underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding policy rationales, legal interpretations, and value judgments into their software and user interfaces. Once deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law.
This Article highlights glaring inadequacies in how the U.N. Charter, international humanitarian law, and international criminal law currently regulate peacetime military preparations, particularly those involving disruptive technologies. The Article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.
A number of emerging technologies increasingly prevalent on contemporary battlefields—notably unmanned autonomous systems (UAS) and various military applications of artificial intelligence (AI)—are working a sea change in the way that wars are fought. These technological developments also carry major implications for the investigation and prosecution of serious crimes committed in armed conflict, including for an under-examined yet potentially valuable form of evidence: information and material collected or obtained by military forces themselves.
Such “battlefield evidence” poses various legal and practical challenges. Yet it can play an important role in justice and accountability processes, where it helps overcome a longstanding obstacle: law enforcement actors’ inability to access conflict-torn crime scenes. Indeed, military-collected information and material has been critical to prosecutions of international crimes and terrorism offenses in recent years.
The present Article briefly surveys the historical record of battlefield evidence’s use. It demonstrates that previous technological advances—including in remote sensing, communications interception, biometrics, and digital data storage and analysis—not only enlarged and diversified the broader pool of military data but also had similar downstream effects on the (far) smaller subset of information shared and used for law enforcement purposes.
The Article then examines how current evolutions in the means and methods of warfare affect the utility of this increasingly prominent evidentiary tool. Ultimately, it argues that the technical features of UAS and military AI create significant, if qualified, opportunities for the collection and exploitation of battlefield evidence. At the same time, these technologies and their broader effects on the conduct of warfare risk inhibiting the sharing of such information and complicating its courtroom use.