Jurisdiction and Responsibility

Print
Article
26.2
“Because We Take Our Values to War” Analyzing the Views of UN Member States on AI-Driven Lethal Autonomous Weapon Systems
Rangita de Silva de Alwis
Distinguished Adjunct Professor of Law and Global Leadership at the University of Pennsylvania Law School and Wharton and an expert member of the treaty body of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).

She thanks Sienna Colbert, her Research Assistant at Penn Law, for developing a rigorous coding matrix to evaluate each UN Member State’s position on LAWS across different categories. She also thanks Kirra Klein for her research support. The author began this study as a Fellow at Oxford’s Mansfield College in the 2024 Trinity Term and concluded it at the Oxford Internet Institute in the 2025 Trinity Term. The author thanks Baroness Helena Kennedy KC, MIT’s Sanjay Sarma, Penn Law’s Cary Coglianese, and Amal Clooney, founder of the Clooney Foundation and founder of the Oxford Institute of Technology and Justice, for their inspiration. This article marks the 25th anniversary of the UN Women, Peace and Security Agenda and builds on the author’s work on drafting the addendum to the CEDAW Committee’s General Recommendation 30 on the Women, Peace, and Security Agenda.

In paragraph two of its resolution 78/241 on lethal autonomous weapon systems, the U.N. General Assembly requested the Secretary-General to solicit the views of Member States and Observer States regarding lethal autonomous weapons systems (LAWS). Specifically, the request encompassed perspectives on addressing the multifaceted challenges and concerns raised by LAWS, including humanitarian, legal, security, technological, and ethical dimensions, as well as reflections on the role of human agency in the deployment of force. The Secretary-General was further mandated to submit a comprehensive report to the General Assembly at its seventy-ninth session, incorporating the full spectrum of views received and including an annex containing those submissions for further deliberation by Member States.

In implementation of this directive, on February 1, 2024, the Office for Disarmament Affairs issued a note verbale to all Member States and Observer States, drawing attention to paragraph two of resolution 78/241 and inviting their formal input. Corresponding communications—notes verbales and letters—were also disseminated to the entities identified in paragraph three of the resolution, requesting their contributions on the matter. For the first time, this Article analyzes the positions of States on LAWS submitted to the Secretary-General in 2024, pursuant to UN General Assembly Resolution 78/241 calling for the views of Member States and Observer States on lethal autonomous weapons systems, inter alia, “on ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force.” The Article focuses on Member States’ positions in relation to human-centric approaches to LAWS and compliance with international humanitarian law. Moreover, it argues that the standard for autonomous weapons systems’ compliance with the laws of war should not only be whether they follow the international humanitarian law principles of distinction, proportionality, and precaution, but also whether they can be free of algorithmic bias. The last several years of data analysis have shown that data bias and algorithmic bias can result in unintended consequences that pose the risk of unlawful discrimination. From housing to finance, mortgage lending to creditworthiness, and college applications to job recruitment, the use of artificial intelligence (AI) can result in unintended consequences that pose the greatest risk to women and minorities. While relying on potentially biased inputs, the “black box” of a machine can magnify these biases in its outputs or decisions. Furthermore, machine learning can even enable algorithms to learn to discriminate.

AI mistakes are often patterned, reflecting patterns in training data, algorithms, or the AI’s fundamental design. The Article asks whether Yale Law School professor Oona Hathaway’s recent arguments on individual and state responsibility for the patterns of “mistakes” in war may also apply to the pattern of biases in AI-driven LAWS. In current and future conflicts, machines do and will continue to make life-and-death decisions without human decision-making. Who, then, will be responsible for the “mistakes” in war?

Although much has been written about algorithmic bias, an “algorithmic divide” can create an AI-driven weapons asymmetry between different nation states depending on who has access to AI. In the final analysis, the Article argues that the transformative potential of AI must be harnessed not in conflict but in conflict resolution.

Print
Article
26.2
The Territorial Independence of Intellectual Property Rights
Aaron X. Fellmeth
Dennis S. Karjala Professor of Law, Science & Technology, Sandra Day O’Connor College of Law, Arizona State University.

The author owes a debt of gratitude to the careful and diligent research of Tommaso Mossio, and to Prof. Margaret Chon for her helpful suggestions. The author also thanks the volume 26 staff members of the Chicago Journal of International Law for their capable editing work.

The purpose of this article is to reassert the primacy of each state’s territorial jurisdiction as a fundamental basis for resolving international IP disputes. It identifies the principle that I have elsewhere termed the “territorial independence of IP laws” as especially relevant to the problems of parallel imports and cross-border IP infringement, and it explains how the proper application of the territorial independence principle resolves IP disputes in a manner that avoids running afoul of international law, maintains the integrity of basic U.S. principles of statutory construction, and remains consistent with the various federal statutes protecting IP rights. The territorial independence principle arises from the basic doctrine of international law that states have primary prescriptive jurisdiction with regard to their own territories, and this has important implications for how IP laws should be interpreted in multinational IP disputes.

Print
Article
26.1
Technology and the Unique Challenges of Applying Law to the Realm of Outer Space and Space Activities
F.G. von der Dunk
Dr. Frans G. von der Dunk is the Harvey & Susan Perlman Alumni/Othmer Professor of Space Law at the University of Nebraska-Lincoln College of Law’s unique Program in Space, Cyber and National Security Law, as well as the Director of Black Holes B.V., a leading space law and policy consultancy based in The Netherlands.

For better or worse, technology at heart is—except to the extent that artificial intelligence fundamentally becomes involved—not so much a creator as a facilitator and enhancer of human acts, actions, and activities, allowing them to become more effective, less costly, or sometimes even merely feasible. Perhaps nowhere is that more pertinent than when it comes to human activities in outer space, which are still overwhelmingly conducted remotely and hence crucially dependent on technology. Given that “the law” has always been geared to address humans and their acts, actions, and activities, this gives rise to a rather special approach to maintaining and further developing a legal regime for outer space. The present Article intends to address and assess some of the most pertinent aspects of the unique body of space law from precisely this perspective, to shed some light on how “the law” would, could, and/or should handle relevant human endeavours in or with regard to outer space, in particular in the context of legal responsibilities and liabilities.

Print
Article
26.1
Interplanetary Risk Regulation
Jonathan B. Wiener
William R. Perkins Professor of Law, and Professor of Environmental Policy and Public Policy, Duke University; Co-Director, Duke Center on Risk; University Fellow, Resources for the Future (RFF)

For helpful comments on prior drafts, the authors thank Larry Helfer, Erika Nesvold, Arden Rowell, and Katrina Wyman; for helpful discussions, the authors thank Dan Bodansky, Dagomar Degroot, Tyler Felgenhauer, David Fidler, Alissa Haddaji, Benedict Kingsbury, Bhavya Lal, Irmgard Marboe, Betsy Pugel, Margaret Race, Surabhi Ranganathan, Martin Rees, John Rummel, Dan Scolnic, Jessica Snyder, Phil Stern, Yirong Sun, Frans von der Dunk, Giovanni Zanalda, and participants at the Chicago Journal of International Law Symposium on “Technological Innovation in Global Governance” (January 2025); the conference on “Space Law and Earth Justice” at NYU Law School (March 2025); the Duke Space Symposium (April 2025); and the annual conference of the Society for Environmental Law and Economics (SELE) held at the School of Transnational Governance of EUI in Florence (May 2025). The views expressed in this article represent the personal views of the authors only.

Charles (Chase) Hamilton
Associate, Akin Gump Strauss Hauer & Feld LLP; Graduate Fellow of the Duke Center on Risk

Space exploration promises new opportunities but also new risks. After centuries of national settlements and international conflicts on Earth, and the Cold War era of two great power states racing to the Moon, today we see a rapidly proliferating arena of actors, both governmental and non-governmental, undertaking bold new ventures off-Earth while posing an array of new risks. These multiple activities, actors, and risks raise the prospects of regulatory gaps, costs, conflicts, and complexities that warrant reconsideration and renovation of legacy legal regimes such as the international space law agreements. New approaches are needed, beyond current national and international law, beyond global governance. We suggest that interplanetary risks warrant new institutions for risk regulation at the interplanetary scale. We discuss several examples, recognizing that interplanetary risks may be difficult to foresee. Some interplanetary risks may arise in the future, such as if settlements on other planets entail the need to manage interplanetary relations. Some interplanetary risks are already arising today, such as space debris, space weather, planetary protection against harmful contamination, planetary defense against asteroids, conflict among spacefaring actors, and potentially settling and terraforming other planets (whether to conduct scientific research, exploit space mining, or hedge against risks to life on Earth). These interplanetary risks pose potential tragedies of the commons, tragedies of complexity, and tragedies of the uncommons, in turn challenging regulatory institutions to manage collective action, risk-risk tradeoffs, and extreme catastrophic/existential risks. Optimal interplanetary risk regulation can learn from experience in terrestrial risk regulation, including by designing for adaptive policy learning. Beyond national and international law on Earth, the new space era will need interplanetary risk regulation.

Print
Comment
17.1
The Law of Unintended Consequences: The 2015 E.U. Insolvency Regulation and Employee Claims in Cross-Border Insolvencies
Joshua W. Eastby
J.D. Candidate, 2017

The author would like to thank the CJIL board and staff for their helpful edits and suggestions, Professor Douglas Baird for his enthusiastic feedback, Meredith Quarello for her patience and support, and Andrew Mackie-Mason for his feedback and comments.