Print Archive
The emergence of user-generated evidence has revolutionized how atrocities and human rights violations are documented globally. Since 2011, when Syrian human rights defenders began documenting atrocities on their smartphones, a professional field has emerged around the collection, authentication, and preservation of digital evidence. However, this professionalization has created unintended consequences, as expertise and verification power have shifted away from frontline communities to Global North institutions. This Article examines this tension through two case studies: the Rohingya Genocide Archive and Nigeria's #EndSARS movement. These examples demonstrate both the power of locally informed evidence collection and the challenges that arise when verification skills remain concentrated among elite institutions. As the rise of synthetic media through generative artificial intelligence poses new threats to the practice of fortifying the truth through digital evidence, we urge collaborative work to ensure that frontline communities are empowered with locally relevant skills and tools to protect their rights.
Imaginations of Planet Earth as-a-whole—that is, Earth conceived in planetary terms by wide publics—have been shaped over several decades by the growing capabilities of artificial Earth satellites to image the whole Earth, to specify all locations, and to integrate the Earth’s diverse orbital space with everyday human activities. Different Earth orbits are becoming more densely used, more securitized, more intensely managed from Earth, and more integral to activities on Earth.
This Article focuses on two categories of satellite systems that contribute directly to planetary knowledge, Global Navigation Satellite Systems (GNSS) and Earth Observation Satellite Systems (EOSS). GNSS and EOSS have earlier military and intelligence origins, but were readily associated with 1990s-type “globalization”—the encouragement of trade and communication, and the monitoring and discouragement of illicit activities and flows. More recently both have also been integral to a process of “planetization”—the construction and wide diffusion of understandings of Earth in planetary terms, as a shared and contingent habitat with many dependencies. This Article traces the policies and conditions under which data from these satellite systems has become (for the time being) open and widely available to general publics, and the basis for “planetary” infrastructural development and dependence.
We argue that the major GNSS have all become “infrastructural”: broadcasting signals, free of charge, that enable timing, positioning, and navigation via receivers and downstream products for billions of users, as well as a fast-increasing range of important environmental uses. EOSS supply images and other data which flow into scientific models of Earth systems and many business and governmental use cases—with or without charge or restriction, depending on the provider and on government controls. EOSS have become, or are becoming, infrastructural for many forms of planetary knowledge. However, the provision of comprehensive, free-to-all, and highly reliable GNSS and EOSS data and services is not legally embedded or guaranteed, and it is far from assured. Both are “dual use” and vulnerable to kinetic or cyber disruption in conflict. GNSS are government-provided but readily spoofed or jammed, and governments are seeking to develop more resilient alternatives. EOSS are often privately owned or government-controlled, and the data or downstream products are increasingly liable to private enclosure or to government restriction on release. Questions about their assured availability and extension swirl together with renewed nationalism, military prioritization, and contestations of “planetary” politico-legal thinking and its imaginaries. It is now necessary to “think infrastructurally” about legal, policy, and economic means to ensure the reliable and universal availability, sustenance, and supplementation of these important foundations of planetary knowledge.
This article discusses a new guide that has been developed to support the responsible use of digital open-source information to investigate systematic and conflict-related sexual violence (SCRSV). Drafted by the Institute for International Criminal Investigations and the Human Rights Center at UC Berkeley School of Law, the just-published pilot version of the Open-Source Practitioner’s Guide to the Murad Code aims to minimize the risks and maximize the potential for digital investigations into SCRSV. Part I of this article opens with a brief history of accountability for SCRSV, touching on the need to strengthen SCRSV investigations and providing a brief introduction to existing ethical guidelines. That is followed by a short history of digital open source investigations. Part II brings those histories together, touching on the various roles that digital investigations are beginning to play in the investigation and prosecution of SCRSV, acknowledging challenges to integrating digital methods into investigations, offering suggestions for resolving those challenges, and summarizing the guide’s relevant content. Part III looks to the future, exploring the potential for both tech-assisted and machine-led processes to strengthen the investigation and prosecution of SCRSV. The article concludes with some thoughts on how emerging digital technologies, and especially machine learning-based research methods, may prove useful to future accountability for SCRSV crimes.
China has upset the security balance in East Asia through the development of a long-range strike complex composed of anti-ship ballistic missiles, drones and cruise missiles, and hypersonic missiles that put U.S. naval fleets at risk. Beijing’s innovative approach to sea control through the projection of power from land-based fires highlights three important differences between the law applicable to naval warfare and the law of armed conflict (LOAC) as it is implemented on land.1 These legal distinctions are subtle in law, but they shape concrete choices available to naval commanders and could determine the outcome of war at sea.
First, the standard for what constitutes a military objective in naval warfare is broader than in land warfare. For example, enemy war-sustaining industries and commercial shipping may be captured or even destroyed in conflict at sea, whereas private property on land is generally protected.
Second, in war on land, commanders must take all feasible precautions in attack to consider alternative methods or means to reduce injury to civilians or civilian objects, a high standard. During armed conflict at sea, only reasonable precautions must be taken. This lower bar makes sense because civilians and civilian objects are less likely to be caught up in a naval war. The practical result is that war at sea demands fewer precautions.
Third, attacks against military objectives in the law of armed conflict require a proportionality analysis, which operates differently at sea than on land. Those who plan, approve, or execute an attack are subject to the rule of proportionality, which prohibits attacks in which the expected collateral damage is excessive relative to the anticipated military advantage to be gained. Since naval warfare is fought from platforms, such as warships, submarines, and military aircraft, the proportionality analysis includes only civilians or civilian objects near the platform but does not include civilians or civilian objects on board.
These legal nuances would govern any naval conflict between China and the United States and could quickly intensify and widen the conflict.
1. This Article uses the terms “law of naval warfare” and international law applicable to “war at sea” or “conflict at sea” interchangeably. James Kraska et al., The Newport Manual on the Law of Naval Warfare, 101 Int’l L. Stud. i, xiii (2023).
The temporal boundaries of the international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and cyber offensive capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding into their software and user interfaces policy rationales, legal interpretations, and value judgments. Once deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law.
This Article highlights glaring inadequacies in how the U.N. Charter, international humanitarian law, and international criminal law currently regulate peacetime military preparations, particularly those involving disruptive technologies. The Article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.
For better or worse, technology at heart is—except to the extent that artificial intelligence fundamentally becomes involved—not so much a creator as a facilitator and enhancer of human acts, actions, and activities, allowing them to become more effective, less costly, or sometimes merely feasible. Perhaps nowhere is that more pertinent than when it comes to human activities in outer space, which are still overwhelmingly conducted remotely and hence crucially dependent on technology. Given that “the law” has always been geared to address humans and their acts, actions, and activities, this gives rise to a rather special approach to maintaining and further developing a legal regime for outer space. The present Article intends to address and assess some of the most pertinent aspects of the unique body of space law from precisely this perspective, to shed some light on how “the law” would, could, and/or should handle relevant human endeavours in or with regard to outer space, in particular in the context of legal responsibilities and liabilities.
A number of emerging technologies increasingly prevalent on contemporary battlefields—notably unmanned autonomous systems (UAS) and various military applications of artificial intelligence (AI)—are working a sea change in the way that wars are fought. These technological developments also carry major implications for the investigation and prosecution of serious crimes committed in armed conflict, including for an under-examined yet potentially valuable form of evidence: information and material collected or obtained by military forces themselves.
Such “battlefield evidence” poses various legal and practical challenges. Yet it can play an important role in justice and accountability processes, addressing the longstanding obstacle of law enforcement actors’ inability to access conflict-torn crime scenes. Indeed, military-collected information and material has been critical to prosecutions of international crimes and terrorism offenses in recent years.
The present Article briefly surveys the historical record of battlefield evidence’s use. It demonstrates that previous technological advances—including in remote sensing, communications interception, biometrics, and digital data storage and analysis—not only enlarged and diversified the broader pool of military data but also had similar downstream effects on the (far) smaller subset of information shared and used for law enforcement purposes.
The Article then examines how current evolutions in the means and methods of warfare impact the utility of this increasingly prominent evidentiary tool. Ultimately, it is argued that the technical features of UAS and military AI give rise to significant, although qualified, opportunities for collection and exploitation of battlefield evidence. At the same time, these technologies and their broader impacts on the conduct of warfare risk inhibiting the sharing of such information and complicating its courtroom use.
Space exploration promises new opportunities but also new risks. After centuries of national settlements and international conflicts on Earth, and the Cold War era of two great power states racing to the Moon, today we see a rapidly proliferating arena of actors, both governmental and non-governmental, undertaking bold new ventures off-Earth while posing an array of new risks. These multiple activities, actors, and risks raise the prospects of regulatory gaps, costs, conflicts, and complexities that warrant reconsideration and renovation of legacy legal regimes such as the international space law agreements. New approaches are needed, beyond current national and international law, beyond global governance. We suggest that interplanetary risks warrant new institutions for risk regulation at the interplanetary scale. We discuss several examples, recognizing that interplanetary risks may be difficult to foresee. Some interplanetary risks may arise in the future, such as if settlements on other planets entail the need to manage interplanetary relations. Some interplanetary risks are already arising today, such as space debris, space weather, planetary protection against harmful contamination, planetary defense against asteroids, conflict among spacefaring actors, and potentially settling and terraforming other planets (whether to conduct scientific research, exploit space mining, or hedge against risks to life on Earth). These interplanetary risks pose potential tragedies of the commons, tragedies of complexity, and tragedies of the uncommons, in turn challenging regulatory institutions to manage collective action, risk-risk tradeoffs, and extreme catastrophic/existential risks. Optimal interplanetary risk regulation can learn from experience in terrestrial risk regulation, including by designing for adaptive policy learning. Beyond national and international law on Earth, the new space era will need interplanetary risk regulation.
This Essay sketches an informal theory of the impact of technological change on international economics, and hence international relations expressed as international law. The theory points to a policy trilemma, something that I call Cerberus in a perhaps futile attempt at an arresting metaphor. The Essay uses the trilemma to illuminate the general trends in technology policy we see playing out in China, Europe, and the United States. It argues that we have the privilege of witnessing an ongoing natural experiment in optimal technology regulation and legal policy, with no guarantee as to which approach will prevail.
Of course, like all natural experiments, the signal struggles to emerge against a background of geopolitical noise. Events and projects unrelated to policy competition might decide the game, and we might never find out what an optimal strategy may entail. Still, we can’t rule out the chance that we might learn something as the great game plays out.
From the launch of Sputnik I in 1957 to proposals for In-Space Servicing, Assembly and Manufacturing (ISAM) and new lunar activities such as resource utilization, advancing technology has always been a driving factor in the creation of space law. From a legal-historical perspective, the notion of law as creation should be contextualized in a broader legal-philosophical transition that began with the rise of positivism. Article VI of the Outer Space Treaty orbits unsteadily between international obligations and national implementation measures, making States’ understandings of those provisions significant. Our understanding of Article VI turns on perhaps the most creative legal endeavor: interpretation. Bin Cheng established Article VI as a lynchpin between international obligations and national measures by finding in its first sentence an attribution clause extending responsibility to non-governmental activities falling under the jurisdiction of States. Though Cheng’s interpretation has been accepted by scholars, and some domestic rules evidence its adoption by States, the interpretation has been assailed on the basis that Cheng did not follow the strictures of the Vienna Convention on the Law of Treaties (VCLT). Codification, such as the VCLT, is itself an act of creation, which can have unintended consequences. Through the lens of Article VI, this Article explores interpretation as creation. It seeks to demonstrate that antipodal interpretations can be correct, that our determination of which interpretation to follow involves something other than a strict, positivist approach, and that the outcome of this debate may be more significant than perceived as States create a path forward for new space activities.
Large Language Models (LLMs) have the potential to transform public international lawyering in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.
This Article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.
Based on these analyses, the Article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain particular conclusions. Further, LLMs hold surprising potential to help to create new law by offering inventive proposals for treaty language or negotiating positions.
Most importantly, LLMs hold the potential to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, humans may soon perceive LLM results to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.
For a quarter-century, a consensus has prevailed that territorial sovereignty applies online as it does offline. Since practically all the Internet’s infrastructure and its billions of users reside on the territory of states, conventional wisdom holds that sovereignty must extend to cyberspace. Such accounts ignore how people experience cyberspace as a distinctive place, and how current international law lacks safeguards to prevent states from exercising their sovereignty to splinter the Internet into a set of national networks. Territorial sovereignty is also hard to square with pledges by the world’s democracies to keep the Internet free, open, and global; yet it is not the only way that international law knows to define the powers of a state.
Drawing from the law of the sea, this Article argues that we should anchor the nature of state authority in cyberspace in the limited sovereign rights that coastal states possess in the waters off their shores. Unlike the plenary powers that sovereignty vests in states over their entire land territory, a coastal state’s sovereign rights weaken the further one goes out to sea, and they are subject to the rights of other states (and of their nationals) to engage in certain peaceful uses of such waters. By redefining state authority over cyberspace in terms of layers of sovereign rights that are subject to the digital rights of others, states can enact legitimate online regulations within international legal constraints that preserve the Internet’s free, open, and global character.