This Article discusses the development of the modern legal consequences of surrender under the law of armed conflict and explores how technologically enabled surrender is being used in Ukraine. It concludes with an analysis of the impact of these technologies on the surrender process and presents an adaptive interpretation of existing norms, organized around three overarching themes.
Public International Law
On February 20, 2024, the European Court of Human Rights ruled in Mohamed Wa Baile’s favor on his claim that Switzerland violated his rights under Article 14 of the European Convention on Human Rights, read in conjunction with Article 8. Mr. Wa Baile alleged that Swiss law enforcement violated his rights when they subjected him to an identity check without offering an objective reason for doing so. The Court’s finding of a substantive violation of Article 14 with respect to racial profiling was unprecedented. The Court relied on factors such as the Administrative Court’s ruling that law enforcement had no objective reason for requesting identification from Mr. Wa Baile, statistics showing the extent of racial profiling practices in Switzerland, and Switzerland’s failure to implement adequate measures to remedy the issue. As in Switzerland, statistics show that racial profiling is prevalent in France, and France has likewise hesitated to enact laws and implement measures to prevent it. This pattern is at issue in a pending case before the European Court of Human Rights: Seydi and Others v. France. This Comment argues that the theory behind Switzerland’s liability in Wa Baile v. Switzerland gives rise to allegations of Article 14 violations in France, where law enforcement exhibits a pattern of conducting identity checks absent reasonable suspicion or objective justification.
Big data—extremely large quantities of information and the analytics used to process it—is now crucial to the way militaries operate on the battlefield. Data is used to run weapons systems, analyze intelligence, procure and deploy personnel, evaluate battlefield conditions, detain prisoners, and more. And not only is data increasingly being used on the battlefield, but operations targeting adversaries’ data—to acquire it, delete and destroy it, or distort or poison it—are becoming increasingly important as well. Beyond the battlefield, big data lies at the epicenter of adversarial activities below the armed conflict threshold. Because data is the fuel of artificial intelligence (AI), it is generating an AI arms race among the U.S., China, Russia, and other states, incentivizing large-scale cyber operations related to data. And big data is increasingly central to humanitarian operations on, and adjacent to, the battlefield, for example to monitor humanitarian crises, facilitate early warning systems, and deliver aid, as well as to investigate and prosecute atrocities.
All of these uses of data in military operations raise challenging interpretive questions under key bodies of international law: international humanitarian law (IHL), the jus ad bellum, and international human rights law (IHRL). But they also challenge us to consider anew several long-standing critiques of legalism in the international sphere: the efficacy critique (are these laws effective at all in constraining state and non-state actors?), the legitimation critique (do laws of war actually sanitize, and thereby legitimate, acts of aggression?), and the adaptability critique (is law simply too slow to adapt to rapid technological and societal change?).
This Article uses the rise of big data on the battlefield first to respond to these critiques and defend the importance of legalism when addressing armed conflict, and second to consider the multiple interpretive challenges and gaps in the law that are created by the new techno-social reality of big data on the battlefield. As in other instances of disruptive technological and societal change, the laws of armed conflict must be both justified anew and then adjusted, whether through textual gap-filling, interpretive translation, policymaking, or the construction of new legal paradigms.
This Article explores the rise of a new model of global governance: the “click-and-commit world order,” characterized by digitally mediated pledging platforms through which a wide array of actors—states, corporations, cities, NGOs, and individuals—publicly commit to addressing global problems through non-binding promises. In contrast to traditional treaty-making, these pledging platforms offer a decentralized, voluntary framework for international cooperation that relies on public declarations rather than negotiated obligations.
Within the U.N. system, this mode of governance developed within the United Nations Global Compact and the Paris Climate Agreement, where bottom-up pledges were institutionalized within formal and informal international structures. The internet now amplifies and democratizes this model, enabling coordination and norm diffusion without requiring state action or legal enforcement. Examples such as the Net Zero Space Initiative and a range of climate-related platforms illustrate how the pledging order bypasses formal treaty regimes in favor of reputational incentives, public transparency, and symbolic participation.
The Article evaluates the values, risks, and institutional dynamics of this emergent order, including its emphasis on pluralism, voluntarism, and functional over status-based participation. Ultimately, the pledging order reflects a shift from constitutional, rule-restraining global law toward a voluntarist, productivity-oriented attempt to address 21st-century transnational challenges—particularly where formal multilateralism has stalled.
The modern system of international criminal justice, which was born out of World War II and built in its current form during the early 1990s, is both revolutionary and a relic. The ideals, innovation, and vision that created the international legal order were groundbreaking at the time, but the system has failed to evolve at a pace that ensures its relevance and efficacy. The challenges we face today are drastically different from those of the period in which the framework was conceived, the institutions were formed, and the laws were drafted. While these changes have been incremental over several decades, technological advances have led to fundamental transformations in how individuals communicate, how societies interact, and how states engage with each other and their constituents. The law, by contrast, has been slower to evolve, owing in large part to the dearth of enforcement mechanisms. One can point to an abundance of academic literature and soft law instruments that provide scholarly guidance on the interpretation of international law applicable to new and emerging technologies. However, this debate is siloed from the practical realities of international law, in which very few court cases have tested how international law applies to these technologies in practice. This Article assesses the effectiveness of the current system of international criminal justice in the face of emerging threats, examining whether and how existing international law applies and identifying where it falls short.
In a potential future peer or near-peer conflict, the technological capabilities that are both taken for granted and a source of military superiority will be an immediate and high-value target. Global navigation and positioning systems, satellite imaging, precision guidance, instantaneous communication, and much more: the adversary will seek to shut down these capabilities. Operating without the technology, or fighting “in the dark,” presents complex operational and tactical challenges of navigation, logistics, communication, command and control, coordination, and targeting, to name just a few. However, executing military operations in such a technology-deprived environment also requires applying and implementing the law of armed conflict (LOAC) in the dark, which introduces a parallel set of challenges and concerns.
This Essay explores the challenges for the law when all the technological capabilities that are deeply incorporated into our daily lives and our military operations are unavailable in armed conflict—because the capabilities have been turned off, jammed, spoofed, or taken down. The law of armed conflict, in contrast, will not be turned off. LOAC applies regardless of capability, type of conflict, or any other circumstances of a particular conflict. A first challenge lies in the application of LOAC in such situations, including training for the wars the military will need to fight, new questions of interoperability with partners and allies, and a more careful understanding of the relationship between law and policy in the implementation of military operations. Second, the application of LOAC “in the dark” risks placing significant pressure on the law as our understandings of and discourse about key principles are put to new tests. Consider proportionality and precautions, for example: current implementation of both of these core targeting principles is replete with reliance on technological capabilities that may be degraded or rendered unavailable. And yet the absence of those capabilities does not diminish or alter these core legal obligations, highlighting the need to analyze and reaffirm the meaning and application of these fundamental rules. Other pillars of LOAC that will face significant pressure include the role of reasonableness, doubt, and certainty in decision-making, and the relationship between capabilities and obligations.
The emergence of user-generated evidence has revolutionized how atrocities and human rights violations are documented globally. Since 2011, when Syrian human rights defenders began documenting atrocities on their smartphones, a professional field has emerged around the collection, authentication, and preservation of digital evidence. However, this professionalization has created unintended consequences, as expertise and verification power have shifted away from frontline communities to Global North institutions. This Article examines this tension through two case studies: the Rohingya Genocide Archive and Nigeria’s #EndSARS movement. These examples demonstrate both the power of locally informed evidence collection and the challenges that arise when verification skills remain concentrated among elite institutions. As the rise of synthetic media through generative artificial intelligence poses new threats to the practice of fortifying the truth through digital evidence, we urge collaborative work to ensure that frontline communities are empowered with locally relevant skills and tools to protect their rights.
China has upset the security balance in East Asia through the development of a long-range strike complex composed of anti-ship ballistic missiles, drones and cruise missiles, and hypersonic missiles that put U.S. naval fleets at risk. Beijing’s innovative approach to sea control through the projection of power from land-based fires highlights three important differences between the law applicable to naval warfare and the law of armed conflict (LOAC) as it is implemented on land.1 These legal distinctions are subtle in law, but they shape concrete choices available to naval commanders and could determine the outcome of war at sea.
First, the standard for what constitutes a military objective in naval warfare is broader than in land warfare. For example, enemy war-sustaining industries and commercial shipping may be captured or even destroyed in conflict at sea, whereas private property on land is generally protected.
Second, in war on land, commanders must take all feasible precautions in attack to consider alternative methods or means to reduce injury to civilians or civilian objects, a high standard. During armed conflict at sea, by contrast, only reasonable precautions must be taken. This lower bar makes sense because civilians and civilian objects are less likely to be caught up in a naval war. The practical result is that fewer precautions are required in war at sea.
Third, attacks against military objectives in the law of armed conflict require a proportionality analysis, which operates differently at sea than on land. Those who plan, approve, or execute an attack are subject to the rule of proportionality, which prohibits attacks in which the expected collateral damage is excessive relative to the anticipated military advantage to be gained. Since naval warfare is fought from platforms, such as warships, submarines, and military aircraft, the proportionality analysis includes civilians and civilian objects near the platform, but not those on board it.
These legal nuances would govern any naval conflict between China and the United States and could quickly intensify and widen the conflict.
1. This Article uses the terms “law of naval warfare” and international law applicable to “war at sea” or “conflict at sea” interchangeably. James Kraska et al., The Newport Manual on the Law of Naval Warfare, 101 Int’l L. Stud. i, xiii (2023).
The temporal boundaries of the international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between jus ad bellum and jus in bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and offensive cyber capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding policy rationales, legal interpretations, and value judgments into their software and user interfaces. Once the technologies are deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law.
This Article highlights glaring inadequacies in how the U.N. Charter, international humanitarian law, and international criminal law currently regulate peacetime military preparations, particularly those involving disruptive technologies. The Article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for jus ante bellum (just preparation for war) as a new limb of just war theory. By reimagining international law’s temporalities, jus ante bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.
Large Language Models (LLMs) have the potential to transform public international lawyering in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.
This Article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.
Based on these analyses, the Article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain particular conclusions. Further, LLMs hold surprising potential to help create new law by offering inventive proposals for treaty language or negotiating positions.
Most importantly, LLMs hold the potential to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, humans may soon perceive LLM results to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.