Humanitarian/Human Rights Law

Print
Article
26.2
“Because We Take Our Values to War”: Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems
Rangita de Silva de Alwis
Distinguished Adjunct Professor of Law and Global Leadership at the University of Pennsylvania Law School and Wharton, and an expert member of the treaty body for the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).

She thanks Sienna Colbert, her Research Assistant at Penn Law, for developing a rigorous coding matrix to evaluate each UN Member State’s position on LAWS across different categories. She also thanks Kirra Klein for her research support. The author began this study as a Fellow at Oxford’s Mansfield College in the 2024 Trinity Term and concluded it at the Oxford Internet Institute in the 2025 Trinity Term. The author thanks Baroness Helena Kennedy KC, MIT’s Sanjay Sarma, Penn Law’s Cary Coglianese, and Amal Clooney, founder of the Clooney Foundation and of the Oxford Institute of Technology and Justice, for their inspiration. This article marks the 25th anniversary of the UN Women, Peace and Security Agenda and builds on the author’s work on drafting the addendum to the CEDAW Committee’s General Recommendation 30 on the Women, Peace, and Security Agenda.

In paragraph two of its resolution 78/241 on lethal autonomous weapon systems, the U.N. General Assembly requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including their humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the use of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on February 1, 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to paragraph two of resolution 78/241 and inviting their formal input. Corresponding communications—notes verbales and letters—were also disseminated to the entities identified in paragraph three of the resolution, requesting their contributions on the matter. For the first time, this Article analyzes the positions of Member States on LAWS submitted to the Secretary-General in 2024, pursuant to U.N. General Assembly Resolution 78/241 calling for the views of Member States and Observer States on lethal autonomous weapons systems, inter alia, “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force.” The Article focuses on Member States’ positions in relation to human-centric approaches to LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should be not only whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of algorithmic bias. The last several years of data analysis have shown that data bias and algorithmic bias can result in unintended consequences that pose the risk of unlawful discrimination. From housing to finance, mortgage lending to creditworthiness, and college applications to job recruitment, the use of artificial intelligence (AI) has produced such unintended consequences, with the greatest risks falling on women and minorities. Relying on potentially biased inputs, the “black box” of a machine can magnify these biases in its outputs or decisions. Furthermore, machine learning can even teach algorithms to discriminate.

AI mistakes are often patterned, reflecting patterns in training data, algorithms, or an AI system’s fundamental design. The Article asks whether Yale Law School professor Oona Hathaway’s recent arguments on individual and state responsibility for patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human involvement. Who, then, will be responsible for the “mistakes” of war?

Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between different nation states depending on who has access to AI. In the final analysis, the Article argues that the transformative potential of AI must be harnessed not in conflict but in conflict resolution.

Online
Comment
CJIL Online 4.2
Identity Checks in France in Violation of Article 14 Post-Wa Baile v. Switzerland
Julia-Jeane Lighten
B.A. 2020, Middlebury College; J.D. Candidate 2026, The University of Chicago Law School

I would like to thank the board and staff of the Chicago Journal of International Law for their guidance, as well as Professor Anjli Parrin for advising me.

On February 20, 2024, the European Court of Human Rights ruled in Mohamed Wa Baile’s favor on his claim that Switzerland violated his rights under Article 14 of the European Convention on Human Rights, in conjunction with Article 8. Mr. Wa Baile alleged that Swiss law enforcement violated his rights when they subjected him to an identity check without offering an objective reason for doing so. The Court’s finding of a substantive violation of Article 14 with respect to racial profiling was unprecedented. The Court relied on factors such as the Administrative Court’s ruling that law enforcement had no objective reason for requesting identification from Mr. Wa Baile, statistics showing the extent of racial profiling practices in Switzerland, and Switzerland’s failure to implement adequate measures to remedy the issue. As in Switzerland, statistics show that racial profiling is prevalent in France, and France has likewise hesitated to enact laws and implement measures to prevent it, as noted in a pending class action lawsuit before the European Court of Human Rights: Seydi and Others v. France. This Comment argues that the theory behind Switzerland’s liability in Wa Baile v. Switzerland gives rise to allegations of Article 14 violations in France, as law enforcement exhibits a pattern of conducting identity checks absent reasonable suspicion and objective justification.

Print
Article
26.1
Digital Evidence: Facilitating What and for Whom?
Rebecca Hamilton
Rebecca Hamilton is a Professor of Law at American University, Washington College of Law.

The authors would like to thank our colleagues at the Counter Evidentiary Network, colleagues at WITNESS, and partner communities whose courage continues to inspire. Our gratitude also to the editors of the Chicago Journal of International Law.

Adebayo Okeowo
Dr. Adebayo Okeowo is a human rights lawyer and currently serves as the Associate Director of Programs at WITNESS.

The emergence of user-generated evidence has revolutionized how atrocities and human rights violations are documented globally. Since 2011, when Syrian human rights defenders began documenting atrocities on their smartphones, a professional field has emerged around the collection, authentication, and preservation of digital evidence. However, this professionalization has created unintended consequences, as expertise and verification power shifted away from frontline communities to Global North institutions. This Article examines this tension through two case studies: the Rohingya Genocide Archive, and Nigeria's #EndSARS movement. These examples demonstrate both the power of locally informed evidence collection and the challenges that arise when verification skills remain concentrated among elite institutions. As the rise of synthetic media through generative artificial intelligence poses new threats to the practice of fortifying the truth through digital evidence, we urge collaborative work to ensure that frontline communities are empowered with locally relevant skills and tools to protect their rights.

Print
Article
26.1
Digital Investigations of Systematic and Conflict-Related Sexual Violence: Practice and Possibilities
Alexa Koenig
Research Professor, University of California Berkeley School of Law; Co-Faculty Director, Human Rights Center, UC Berkeley.

The author thanks Ingrid Elliott, Lindsay Freeman, Anthony Ghaly, Gabriel Oosthuizen, Andrea Richardson, and the team at the Chicago Journal of International Law for their feedback on earlier versions of this article. Any errors are, of course, the author’s own.

This article discusses a new guide that has been developed to support the responsible use of digital open-source information to investigate systematic and conflict-related sexual violence (SCRSV). Drafted by the Institute for International Criminal Investigations and the Human Rights Center at UC Berkeley School of Law, the just-published pilot version of the Open-Source Practitioner’s Guide to the Murad Code aims to minimize the risks and maximize the potential for digital investigations into SCRSV. Part I of this article opens with a brief history of accountability for SCRSV, touching on the need to strengthen SCRSV investigations and providing a brief introduction to existing ethical guidelines. That is followed by a short history of digital open-source investigations. Part II brings those histories together, touching on the various roles that digital investigations are beginning to play in the investigation and prosecution of SCRSV, acknowledging challenges to integrating digital methods into investigations, offering suggestions for resolving those challenges, and summarizing the guide’s relevant content. Part III looks to the future, exploring the potential for both tech-assisted and machine-led processes to strengthen the investigation and prosecution of SCRSV. The article concludes with some thoughts on how emerging digital technologies, and especially machine learning-based research methods, may prove useful to future accountability for SCRSV crimes.

Print
Article
26.1
Technology and the Law of Jus Ante Bellum
Asaf Lubin
Dr. Asaf Lubin is an Associate Professor of Law at Indiana University Maurer School of Law and a Faculty Affiliate of the Hamilton Lugar School of Global and International Studies. He is additionally an Affiliated Fellow at Yale Law School’s Information Society Project, a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University, and a Research Associate at the Hebrew University of Jerusalem Federmann Cyber Security Research Center.

I am grateful to Rebecca Crootof for the in-depth discussions we had at the outset of this project, which were instrumental in refining my thinking on the subject. I am also grateful to the participants of the Saint Louis University Law Journal Symposium titled “Contemporary Challenges in International Humanitarian Law: Is there Hope for the International Order?” for offering excellent feedback on an earlier draft. In particular, I wish to thank Adi Gal, Eric Talbot Jensen, Marco Roscini, Afonso Seixas-Nunes, SJ, and Jennifer Trahan for their valuable insights. I also extend my deep appreciation to the Board of the Chicago Journal of International Law for the opportunity to contribute to this symposium and for their thoughtful feedback and editing. Finally, this symposium has brought together some of the kindest people and sharpest minds currently working at the intersection of international law and technology. It is an incredible privilege to be included among them, and I look forward to engaging with their ideas and contributions in the years to come.

The temporal boundaries of the international rules governing military force are myopic. By focusing only on the initiation and conduct of war, the legal dichotomy between Jus Ad Bellum and Jus In Bello fails to address the critical role of peacetime military preparations in shaping future conflicts. Disruptive military technologies, such as artificial intelligence and cyber offensive capabilities, only further underscore this deficiency. During their pre-war development, these technologies embed countless design choices, hardcoding into their software and user interfaces policy rationales, legal interpretations, and value judgments. Once deployed in battle, these choices have the potential to precondition warfighters and set in motion violations of international humanitarian law.

This Article highlights glaring inadequacies in how the U.N. Charter, international humanitarian law, and international criminal law currently regulate peacetime military preparations, particularly those involving disruptive technologies. The Article juxtaposes these normative gaps with a growing literature in moral philosophy and theology advocating for Jus Ante Bellum (just preparation for war) as a new limb in the Just War Theory model. By reimagining international law’s temporalities, Jus Ante Bellum offers a proactive framework for addressing the risks posed by the development of disruptive military technologies. Without this recalibration, international law will continue to cede regulatory authority to the silent decisions made in the server farms of defense contractors and the fortified war rooms of central command, where algorithms and military strategies converge to dictate the contours of conflict long before it even begins.

Online
Comment
CJIL Online 4.1
Administering an International Climate Migration Lottery
Hana Nasser
B.A., University of Illinois Urbana-Champaign; Ph.D., University of Virginia; J.D. Candidate, The University of Chicago

I would like to thank my comment editors, Amara Shaikh and Tyler Lawson, for their feedback and guidance. Professor Nicole Hallett provided detailed comments on drafts and helped me sharpen the argument. Professor Tom Ginsburg provided valuable feedback on the comment’s proposed design for a climate migration lottery.

Experts predict that millions of people will need to migrate internally and across borders due to global warming. Currently, international legal frameworks do not extend the same legal protections to climate migrants as are afforded refugees and asylum seekers. While international law recognizes the right to asylum based on political persecution, there is no international right to migrate based on climate-based harms that states are legally bound to observe. This Comment proposes a climate migration lottery (CML) that would be administered internationally to address current and future climate-based migration. Under this proposal, receiving states would agree via a treaty to admit their fair share of the total pool of climate migrants selected through the lottery. Migrants from countries with a high susceptibility to having large portions of territory rendered uninhabitable by climate change would be eligible to enter the CML. This Comment argues that a CML can alleviate the strain on regions in developing states that must accommodate internally displaced persons, as well as the burden on countries near low-lying Pacific island states that will experience significant rates of displacement due to sea level rise.

Print
Essay
18.1
Experimentally Testing the Effectiveness of Human Rights Treaties
Adam S. Chilton
Assistant Professor of Law, University of Chicago Law School. Email: adamchilton@uchicago.edu.

This paper was prepared for the “International Law as Behavior” Conference organized by the American Society of International Law and the University of Georgia School of Law. I would like to thank participants in that conference and Katherina Linos for helpful comments. I would also like to thank Vera Shikhelman and Katie Bass for research assistance, and the Baker Scholars Fund at the University of Chicago Law School for financial support.

Print
Article
18.1
The Legalization of Truth in International Fact-Finding
Shiri Krebs
Law and International Security Fellow, Stanford Law School and Stanford Center on International Security and Cooperation (CISAC), Stanford University; Senior Lecturer (Assistant Professor) and Director, Graduate Studies in Law and International Relations, Deakin Law School.

The author wishes to thank Jenny Martinez, Robert MacCoun, Mariano-Florentino Cuellar, Mike Tomz, Lee Ross, Dan Ho, Beth Van-Schaack, Gabriella Blum, Beth Simmons, Paul Sniderman, Allen Winer, Charles Perrow, Karl Eikenberry, Bernadette Meyler, David Sloss, and Alison Renteln for their thoughtful comments, suggestions, and advice. The article benefitted greatly from the thoughtful editing of Katharine Wies and the editorial team of the Chicago Journal of International Law. I am also grateful for the comments I received from the participants of the 2016 Empirical Legal Studies Conference, Duke University; 2016 American Society of International Law (ASIL) Annual Meeting ‘New Voices’ panel, Washington, DC; the 2016 Harvard Experimental Political Science Conference, Harvard University; and the 2014 Northern California International Law Scholars Workshop, U.C. Hastings. This research project was made possible thanks to the generous financial support of the Christiana Shi Stanford Interdisciplinary Award in International Studies (SIGF), the Stanford Laboratory for the Study of American Values (SLAV), the CISAC Zuckerman research grant, and the Freeman Spogli Institute Research Grant.

Write it down. Write it. With ordinary ink
on ordinary paper; they weren’t given food,
they all died of hunger. All. How many?
It’s a large meadow. How much grass per head?