Sources and Creation of Law: Treaty, Custom, Jus Cogens, General Principles

Print Article | 26.1
Large Language Models and International Law
Ashley Deeks
Vice Dean and Class of 1948 Professor of Scholarly Research in Law, University of Virginia Law School; Senior Fellow, Miller Center of Public Affairs, University of Virginia
Duncan Hollis
Laura H. Carnell Professor of Law, Temple University School of Law

Large Language Models (LLMs) have the potential to transform public international lawyering in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.

This Article uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs’ ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems.

Based on these analyses, the Article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor. In some cases, LLMs will be collaborators, complementing existing international lawyering by drastically improving the scope and speed with which users can assemble and analyze materials and produce new texts. At the same time, without careful prompt engineering and curation of results, LLMs may generate confounding outcomes, leading international lawyers down inaccurate or ambiguous paths. This is particularly likely when LLMs fail to accurately explain particular conclusions. Further, LLMs hold surprising potential to help create new law by offering inventive proposals for treaty language or negotiating positions.

Most importantly, LLMs hold the potential to corrupt international law by fostering automation bias in users. That is, even where analog work by international lawyers would produce different results, humans may soon perceive LLM results to accurately reflect the contents of international law. The implications of this potential are profound. LLMs could effectively realign the contents and contours of international law based on the datasets they employ. The widespread use of LLMs may even incentivize states and others to push their desired views into those datasets to corrupt LLM outputs. Such risks and rewards lead us to conclude with a call for further empirical and theoretical research on LLMs’ potential to assist, reshape, or redefine international legal practice and scholarship.

Print Comment | 18.1
Rethinking Espionage in the Modern Era
Darien Pun
J.D. Candidate, 2018, The University of Chicago Law School.

I would like to thank Professor Abebe for his patience and guidance throughout the writing process, and the editors of the Chicago Journal of International Law for their thoughtful suggestions.