Criminal Law

Article 26.2
“Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems
Rangita de Silva de Alwis
Distinguished Adjunct Professor of Law and Global Leadership at the University of Pennsylvania Law School and Wharton, and an expert member of the treaty body of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).

She thanks Sienna Colbert, her Research Assistant at Penn Law, for developing a rigorous coding matrix to evaluate each UN Member State’s position on LAWS across different categories. She also thanks Kirra Klein for her research support. The author began this study as a Fellow at Oxford’s Mansfield College in the 2024 Trinity Term and concluded it at the Oxford Internet Institute in the 2025 Trinity Term. The author thanks Baroness Helena Kennedy KC, MIT’s Sanjay Sarma, Penn Law’s Cary Coglianese, and Amal Clooney, founder of the Clooney Foundation and of the Oxford Institute of Technology and Justice, for their inspiration. This article marks the 25th anniversary of the UN Women, Peace and Security Agenda and builds on the author’s work drafting the addendum to the CEDAW Committee’s General Recommendation 30 on the Women, Peace, and Security Agenda.

In paragraph two of its resolution 78/241 on lethal autonomous weapon systems, the U.N. General Assembly requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

To implement this directive, on February 1, 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to paragraph two of resolution 78/241 and inviting their formal input. Corresponding communications (notes verbales and letters) were also sent to the entities identified in paragraph three of the resolution, requesting their contributions on the matter.

This Article analyzes, for the first time, the positions on LAWS that Member States submitted to the Secretary-General in 2024 pursuant to resolution 78/241, which called for the views of Member States and Observer States, inter alia, “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force.” The Article focuses on Member States’ positions on human-centric approaches to LAWS and on compliance with international humanitarian law. It argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of algorithmic bias. The last several years of data analysis have shown that data bias and algorithmic bias can produce unintended consequences that pose the risk of unlawful discrimination. From housing to finance, mortgage lending to creditworthiness, and college applications to job recruitment, the use of artificial intelligence (AI) can produce unintended consequences whose greatest risks fall on women and minorities. Relying on potentially biased inputs, the “black box” of a machine can magnify these biases in its outputs or decisions. Machine learning can even enable algorithms to learn to discriminate.

AI mistakes are often patterned, reflecting flaws in training data, algorithms, or a system’s fundamental design. The Article asks whether Yale Law School professor Oona Hathaway’s recent arguments on individual and state responsibility for patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human decision-making. Who, then, will be responsible for the “mistakes” of war?

Although much has been written about algorithmic bias, far less attention has been paid to the “algorithmic divide,” which can create an AI-driven weapons asymmetry between nation states depending on who has access to AI. In the final analysis, the Article argues that the transformative potential of AI must be harnessed not in conflict but in conflict resolution.

Article 26.1
Revolutions in Justice: Advancing the Rome Statute System to Fight Impunity in Future Wars
Lindsay Freeman
Director of the Technology, Law & Policy program at the Human Rights Center, UC Berkeley School of Law

The author thanks her fellow symposium participants for their valuable insights and feedback, as well as the talented team at the Chicago Journal of International Law, who provided editorial support throughout the drafting process.

The modern system of international criminal justice, born out of World War II and built in its current form during the early 1990s, is both revolutionary and a relic. The ideals, innovation, and vision that created the international legal order were groundbreaking at the time, but the order has failed to evolve at a pace that ensures its relevance and efficacy. The challenges we face today are drastically different from those of the period in which the framework was conceived, the institutions were formed, and the laws were drafted. While legal change has been incremental over several decades, technological advances have fundamentally transformed how individuals communicate, how societies interact, and how states engage with each other and their constituents. The law, in contrast, has been slower to evolve, owing in large part to the dearth of enforcement mechanisms. One can point to an abundance of academic literature and soft law instruments offering scholarly guidance on the interpretation of international law applicable to new and emerging technologies. That debate, however, is siloed from the practical realities of international law, in which very few court cases have tested how international law applies to these technologies in practice. This Article assesses the effectiveness of the current system of international criminal justice in the face of emerging threats, examining whether and how existing international law applies and identifying where it falls short.

Article 26.1
Battlefield Evidence in the Age of Artificial Intelligence-Enabled Warfare
Winthrop Wells
Senior Manager for Programs and Policy Planning, and Programmatic Unit Officer-in-Charge, the International Institute for Justice and the Rule of Law

The author wishes to thank the editors of the Chicago Journal of International Law for the opportunity to contribute to this symposium and for their diligent work. The views expressed are those of the author alone.

A number of emerging technologies increasingly prevalent on contemporary battlefields—notably unmanned autonomous systems (UAS) and various military applications of artificial intelligence (AI)—are working a sea change in the way that wars are fought. These technological developments also carry major implications for the investigation and prosecution of serious crimes committed in armed conflict, including for an under-examined yet potentially valuable form of evidence: information and material collected or obtained by military forces themselves.

Such “battlefield evidence” poses various legal and practical challenges. Yet it can play an important role in justice and accountability processes, where it helps overcome the longstanding obstacle of law enforcement actors’ inability to access conflict-torn crime scenes. Indeed, military-collected information and material have been critical to prosecutions of international crimes and terrorism offenses in recent years.

The present Article briefly surveys the historical record of battlefield evidence’s use. It demonstrates that previous technological advances—including in remote sensing, communications interception, biometrics, and digital data storage and analysis—not only enlarged and diversified the broader pool of military data but also had similar downstream effects on the (far) smaller subset of information shared and used for law enforcement purposes.

The Article then examines how current evolutions in the means and methods of warfare affect the utility of this increasingly prominent evidentiary tool. Ultimately, it argues that the technical features of UAS and military AI give rise to significant, although qualified, opportunities for the collection and exploitation of battlefield evidence. At the same time, these technologies and their broader impacts on the conduct of warfare risk inhibiting the sharing of such information and complicating its courtroom use.

Article 16.2
Hybrid Tribunals and the Composition of the Court: In Search of Sociological Legitimacy
Harry Hobbs
Senior Research Officer, Senate Standing Committee on Economics, Parliament of Australia; Sessional Tutor in Public International Law, Australian National University.

Thanks to Joanna Langille, Ryan Liss, André Nollkaemper, Philip Alston, Alison Cole, William Burke-White, Sarah Lulo, and participants in the Salzburg Cutler Law Fellows Program at the U.S. Institute of Peace, Washington, D.C., February 20–21, 2015. Considerable thanks also go to the staff of the Journal for their helpful comments and editorial assistance.