This Essay is part of a Symposium on Tom Ginsburg’s insightful book Democracies and International Law. It explores one particular kind of interaction between democratic nation states and international instruments and institutions: how international law and institutions either mitigate or exacerbate harms to democracy from the diffusion of misinformation and hate speech on social-media platforms. I identify three distinct pathways not covered by Ginsburg: (a) international law as an off-the-rack legal regime for content moderation by such platforms; (b) international contouring of feasible domestic regulation; and (c) ex ante and ex post international regulation of platform-mediated misinformation. Reflection upon these pathways confirms some of Ginsburg’s insights, but also complicates other parts of his analysis.

Introduction

Tom Ginsburg’s excellent book Democracies and International Law provides a careful, multifaceted account of how democratic nation states and international instruments and institutions interact.1 This brief response Essay takes up just one thread in the book’s comprehensive tapestry. A pressing worry in contemporary democracies is the effect of social-media platforms such as Facebook, Twitter, and YouTube on the quality of democratic debate. Many complain that platform-mediated misinformation and hate speech damage the democratic practice of public debate. They are also said to undermine dispositions of truthfulness and mutual trust. All of these necessary predicates to democratic stability are said to be at risk from misinformation of both domestic and foreign origin. I consider here whether international law or institutions provide resources for mitigating (or perhaps exacerbating) these harms. I flag three such mechanisms that Ginsburg does not discuss, while picking up on his attention to the elements of the international legal order that are likely to be most consequential for national democracies. A thread running through Democracies and International Law is the role of regional bodies (e.g., the African Court and the European Union) in fostering conditions conducive to democratic survival.2 I suggest that the same holds true in the digital space.

In his book and in the follow-up paper in this symposium, Ginsburg discusses the international regulation of social-media platforms solely in reference to the idea of “authoritarian international law.”3 He highlights the Draft United Nations Convention on Cooperation in Combating Cybercrime as an example of international institutions being deployed by nondemocratic states to promote (antidemocratic) norms of “sovereignty and noninterference.”4 I find this example a bit ambiguous. Ginsburg himself notes that democratic states were divided on whether this instrument should come into force.5 Such a division within democratic ranks at least leaves open the possibility that the draft convention would provide some insulation for national jurisdictions seeking to influence internet usage within their borders in ways consistent with democracy. Hence, while his example provides a tantalizing glimpse of the possible mediating role of international law and institutions in the internet domain, it also invites further exploration of the international space as an arena for the push and pull of democratic backsliding.

Democracy and Social Media

Democracy of a national scope, it is commonly assumed, requires “the relatively free ability to organize and offer policy proposals, criticize leaders, and demonstrate in public without official intimidation.”6 Social-media platforms are relevant to democratic health because of their role in facilitating such public debate. A significant fraction of Americans, for example, obtain news from platforms such as Facebook (thirty-one percent) and YouTube (twenty-two percent).7

Because of the decentralized and porous access rules of many platforms, their use as critical elements of the public sphere creates the potential for deliberately engineered falsehoods to be disseminated in order to alter beliefs pertinent to democratic choice—call this misinformation.8 One study of the 2016 election period estimated that Americans shared items of online misinformation some 38 million times, saw on average one or more such items during the election season, and believed such items roughly half of the time.9 Pace First Amendment folklore, lies diffuse faster than truths.10 And they can be produced at an industrial pace and scale.11 While the volume of online misinformation varies over time,12 and while its effect on specific election results is at best unproven,13 the phenomenon of platform-mediated misinformation remains a potent source of concern at a historical moment in which democratic norms appear more generally vulnerable.14 The worry is sharpened by evidence that foreign actors, in particular Russia, have invested in misinformation for their own geopolitical ends.15

A related worry is platform-mediated hate speech. This is “bias-motivated, hostile, and malicious language targeted at a group or person because of their actual or perceived innate characteristics.”16 Content of this ilk tends to disseminate “faster, farther, and reach a much wider audience as compared to the content generated by users who do not produce hate speech.”17 It has the potential to play a “powerful” role in stimulating “mass hate” that poses a threat to democratic stability.18 Consider one example: in the twenty-four hours after a man murdered fifty attendees at the Al Noor Mosque in Christchurch, New Zealand, in March 2019, the video of the killings, live-streamed from his GoPro camera, was uploaded some 1.5 million times to Facebook.19 The social network has also been criticized for the way in which it facilitates political violence in ethnically divided societies, such as Myanmar.20

I think there is some reason to be cautious about how grave a threat to democracy platform-mediated misinformation poses in isolation. When one reads about Russian misinformation and trolling, it is hard to shake the sense that Putin’s real offense is not election-meddling so much as stealing a tool from American foreign policy. As Ginsburg drily notes, the U.S. has long engaged in its fair share of foreign election manipulation.21 Democracies, no less than autocracies, can present a threat to other democratic regimes, especially when their hegemony is at issue. Whether or not one agrees with President Obama’s assertion that Russian misinformation violated “established international norms of behavior” in 2016,22 it is hard to find anything unprecedented in its extraterritorial ambition.

Moreover, much anxiety about platform-mediated misinformation seems to imagine a previous era in which Americans were fully informed, disabused of falsehoods and biases, and hence capable of exercising rational, democratic judgment. Of course, no such era ever existed. Democracy has always had an uneasy relation to the social practices of truth-production.23 As recently as a half-century ago, a small number of major television networks and national newspapers determined the framing of current events for most Americans. Conspiracy theories, animus-driven bile, and general bunkum can be found aplenty throughout American history.24 So when I take concerns about platform-mediated misinformation seriously here, I should not be understood to be endorsing an ahistorical, sanitized morality tale about American democracy’s fall from Edenic epistemic grace.25

The Role of International Law and Institutions in Platform Moderation

What, then, is the role that international law and institutions play in moderating or enhancing risks of platform-mediated misinformation and hate? I think it is important to follow Ginsburg by not assuming that there is one and only one way in which national democracies and international institutions can interact. The vectors might be many, cross-cutting, and in conflict.

International Law as Off-the-Rack Rules for Content Moderation

A first possibility is that international law might provide an off-the-rack set of rules for platforms’ content moderation systems. A content moderation system is an “editorial guide sheet”26 or a “system of prior restraints”27 (the choice of terms indexing an embedded moral judgment) that determines what can be posted or shared on a platform.28 It is calibrated by a platform, not by a state. The U.N. Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, has suggested that international instruments defining human rights be used as a common source of content-moderation norms by platforms acting on their own recognizance.29 Whereas legal norms of speech regulation vary between national jurisdictions, international human rights law on protected and unprotected speech (e.g., hate speech) provides a purportedly general, widely accepted set of norms that platforms could adopt as “rules of the road” for platform usage. These rules would have the advantage of broad sociological and elite legitimacy, and they might offer moral leverage to platform employees who wish to push their employer toward a better-managed site. Major companies would also benefit from standardization, since a single content-moderation norm is cheaper and easier for a platform to administer than a plurality of norms. Indeed, some platform executives have endorsed a version of Kaye’s proposal.30

Yet there are drawbacks too. In a careful analysis, Evelyn Douek has argued that human rights norms are too vague and too susceptible to manipulation to play the role of global standards, though they may nevertheless offer an “intrinsically valuable” decision protocol.31 She worries too that platforms will likely “co-opt[] the language of [human rights] without any substantial changes in operations.”32 Less powerfully, she contends that platforms such as Facebook lack the information necessary to apply those norms.33 (Surely Facebook could hire people to implement these norms?) At the same time, Douek is correct that international law is underspecified in relation to the large and heterogeneous array of unsettled questions raised by content moderation. That said, the relevant question is not whether such ambiguity exists. It is rather whether human rights law is more or less ambiguous than other potential off-the-rack rules for content moderation. It seems plausible that human rights law’s cross-national generality, which tends to lower platforms’ operating costs, coupled with the absence of any similarly general normative framework of comparable sociological legitimacy, makes it more attractive than Douek allows. Moreover, to the extent there are gaps, the cost of filling them will turn on the technical capacity of automated learning instruments that use reinforcement learning to refine and temper rules.34
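
To see what treating human rights law as a “decision protocol” might mean in practice, consider a minimal, purely illustrative sketch in Python. It reduces the cumulative three-prong test for permissible speech restrictions under Article 19(3) of the ICCPR (legality, legitimate aim, and necessity/proportionality) to boolean inputs; every name here is hypothetical, and the reduction is of course a caricature of the legal judgment involved:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    KEEP = auto()
    REMOVE = auto()
    ESCALATE = auto()  # route to human review when the prongs conflict

@dataclass
class Assessment:
    # The three prongs of the Article 19(3) ICCPR test, as a reviewer
    # (human or automated) might record them for one item of content.
    provided_by_law: bool              # legality: is the restriction clearly specified?
    serves_legitimate_aim: bool        # e.g., rights of others, public order
    necessary_and_proportionate: bool  # least restrictive means available?

def decide(a: Assessment) -> Action:
    """Apply the cumulative test: removal is justified only if all three
    prongs are satisfied; mixed assessments are escalated for review."""
    prongs = (a.provided_by_law, a.serves_legitimate_aim,
              a.necessary_and_proportionate)
    if all(prongs):
        return Action.REMOVE
    if not any(prongs):
        return Action.KEEP
    return Action.ESCALATE

print(decide(Assessment(True, True, False)))  # Action.ESCALATE
```

The sketch also illustrates Douek’s vagueness point: all of the hard legal work hides inside the boolean inputs, which is precisely where reasonable decisionmakers will diverge.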

To my mind, Douek’s insightful treatment omits two important reasons for hesitation about human rights norms. First, there is a worry that international law norms may be substantively undesirable. Ginsburg’s discussion of religious defamation norms in international law captures this concern.35 Second, Ginsburg’s analysis of authoritarian international law suggests that, contrary to Douek’s assumptions, human rights law should not be assumed to be static. The worry here is that once content moderation is hitched to human rights, there arises a new, potentially powerful incentive to capture the bodies that generate such law. The more useful human rights law became as a moderation template, the greater the incentive to degrade it: the quality of its substance would be inversely correlated with its utility.

If there is reason for optimism about the content of human rights law as a template for content moderation, it might arise from a different logic. Many scholars have noted, often with alarm, that platforms engage in “a form of private governance that reaches across geographic borders.”36 In a 2017 speech, the President of Microsoft proposed that platforms be understood as, in effect, “digital Switzerlands.”37 As Kristen Eichensehr has explained in an illuminating gloss on this idea, these firms claim to be “on par with, not subordinate to, governments, including those governments that try to regulate them” because they are “supplemental sovereigns, governing individuals alongside states.”38 Eichensehr does not address the possibility that platforms might play a role in the production and calibration of international law norms. She does note, however, that there are forces that might induce public-regarding corporate behavior. These include “[c]ompetitive pressures, corporate social responsibility norms, evolving industry standards, and threats of targeted regulation.”39 Her analysis thus homes in on the right question: whether selfish, private incentives, as thinkers from Bernard Mandeville and Adam Smith onward have urged, will conduce to public goods—now in the international sphere.

International Contouring of Domestic Regulation

A second possibility is that the international environment provides springboards (or walls) that could enable (or handicap) domestic efforts at managing platform-mediated misinformation. Here, efforts to mitigate the latter run through the state, rather than through the platform itself. Prior commitments to international institutions then either expand or contract a democratic state’s ability to encase its own democracy, or (alternatively) recalibrate a nondemocratic state’s capacity to undermine other democratic regimes.

The positivist tenor of Ginsburg’s analysis suggests that any such effect will not depend simply on the presence of a conflict between national regulation and an international norm.40 Following his lead, I ask here whether and how international law and institutions will bite—an inquiry that, as Ginsburg suggests, leads us toward regional bodies. One avenue for such influence is the effect of European rules concerning extraterritorial jurisdiction. It is now common for states to demand the removal from online forums of content that violates international law.41 The efficacy of such orders can depend on the mediating role of international bodies. In 2019, for example, the European Court of Justice ruled that member states have authority to order platforms to remove content globally.42 The Court, however, has also distinguished between permissible injunctions that prevent specific violations and unlawful general bans that either target all content or apply indiscriminately.43 Within these constraints, European law thus imposes no final barrier to the national defense of democratic values on platforms.

Subsequent interactions between national regulation and international constraints might end differently. In 2018, for example, France enacted legislation allowing specially expedited judicial proceedings, initiated by public authorities or political parties, to halt the spread of online disinformation in the three months before an election.44 The European Court of Human Rights, however, has on several earlier occasions invalidated summary election-related proceedings in the non-digital context.45 These cases suggest that European law might constrain the operation of national mechanisms for protecting democracy.46

The effect of international action on these extraterritorial moves by states will depend on the extent to which a given government is able to “control the Internet’s underlying hardware.”47 Even in democratic states, domestic actors (including judges) can try to alter the terms on which data flows via platforms from overseas. In Milton Mueller’s insightful terminology, they seek “greater alignment between territorial states and the administrative units of the Internet.”48 And the greater the national influence over data flows, the less international law and institutions will matter.49

International Regulation of Platform-mediated Misinformation

The final pathway whereby international law might refract the risk to democracies created by social platforms is, quite simply, direct regulation. An international body might directly inflict costs on a platform in ways that alter the content of information flows on the site. This might be done through direct, ex ante regulation, or it might alternatively take the form of ex post penalties for specific instances in which counter-democratic speech is transmitted. The evidence for each of these possibilities is, to say the least, mixed at present. The best one can do is to identify nascent developments, which intimate the prospect of a more robust international role in the future.

Consider first ex ante regulation. Consistent with Ginsburg’s emphasis on the centrality of regional actors, European regional bodies have led the way here. Commentators canvassing the range of continent-wide initiatives hence speak of “the rise of European digital constitutionalism.”50 In 2015, the European Commission established the European Internet Forum, which produced a “Code of Conduct on Countering Illegal Hate Speech Online.”51 Under the Code, platforms agreed to proscribe speech that incites violence against protected groups, to examine and remove such speech within a twenty-four-hour window, and to allow periodic reviews by the European Commission to determine compliance.52 Facebook, Twitter, and Microsoft agreed to the Code and received critical feedback on their responses in the first Commission compliance report.53 Some commentators have suggested that even though the Code lacks legal force, platforms will conform in order to prevent regulation from materializing.54 Further, they predict that those platforms will “delete or block content . . . everywhere the platform is viewed,” thereby damaging “global freedom of expression.”55

At least as of this writing, this concern does not appear to have materialized. As of 2020, about half of the misogynistic and racist posts that violated Facebook’s community standards were not taken down, even when they were reported to the company.56 Despite the European Commission Code, the platform failed to respond promptly or effectively to the Myanmar government’s efforts to use the site to catalyze ethnic cleansing and perhaps genocide.57 Its executives also resisted efforts to mitigate polarizing posts.58 The “economic incentive to promote polarizing content that induces users to spend more time on the site”59 appears to have outweighed any regulatory shadow. This is not to say that international efforts to influence platform-mediated misinformation could not be effective (or even excessively chilling). Rather, the evidence to date does not support the conclusion that the European Commission’s approach has been effective, let alone that it has induced a surfeit of caution.

In December 2020, the European Commission proposed a new Digital Services Act.60 The proposed measure would take the form of a regulation, which would leave member states only limited room for deviation.61 It has been described as “a harmonized EU system which replaces national ones in the scope of application.”62 It would “impose a number of obligations on ‘very large online platforms’ including transparency requirements for their recommender and advertising systems, user controls over the main parameters of recommender systems including at least one option that is not based on profiling, a data access framework and independent audits to monitor compliance.”63 The Act, moreover, would authorize mandatory notice-and-takedown orders requiring platforms to remove illegal content, including racist and xenophobic material, or face fines.64 However impressive this all sounds, it remains to be seen how extensively the proposed regulation would oust national efforts at controlling platform-mediated misinformation and hate speech.65

Now consider the scenario of ex post, punitive regulation in an international body. This is, unsurprisingly, a rather more remote possibility. In November 2019, however, The Gambia lodged a case against Myanmar in the International Court of Justice (ICJ) claiming a violation of the Genocide Convention.66 In aid of that case, The Gambia turned to U.S. courts in 2020, seeking to compel Facebook to produce data related to Myanmar officials’ use of the platform to stoke violence against the Rohingya minority.67 While not a defendant in an international proceeding, the social-media platform was placed in the delicate position of having to resist, on privacy grounds, disclosures concerning its users in the context of a suit charging genocide. In September 2021, a magistrate judge in the District of Columbia issued an order compelling Facebook to produce data on de-platformed Myanmar officials who had posted anti-Rohingya content, reasoning that failing to do so would “compound the tragedy that has befallen the Rohingya . . . throwing away the opportunity to understand how disinformation begat genocide of the Rohingya and would foreclose a reckoning at the ICJ.”68 That order, however, was vacated in relevant part on review by the district court.69

It is worth asking whether The Gambia could pursue the same evidentiary request in another jurisdiction. As Anupam Chander and Uyên Lê have explained, a single database is not necessarily physically located in one place. Instead, its “[data] rows . . . are held separately in servers across the world—making each partition a ‘shard’ that provides enough data for operation.”70 So imagine that Facebook stored the data The Gambia seeks for its international suit in a jurisdiction (or in several jurisdictions) other than the U.S. The availability of that information would turn on statutory privacy and disclosure regimes that might present different opportunities and barriers from those of the rather peculiar and antiquated American iteration. More generally, the operation of the relevant international law mechanism would potentially turn on the procedural intricacies of domestic law.
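
To make the sharding point concrete, the following minimal Python sketch illustrates how the rows of a single logical table might be partitioned across servers in different jurisdictions. The shard-to-jurisdiction mapping and all names are hypothetical, not a description of any platform’s actual architecture:

```python
from collections import defaultdict

# Hypothetical mapping from shard number to the jurisdiction hosting that
# shard; real placement would reflect latency, cost, and compliance
# policies not visible to outside observers.
SHARD_JURISDICTIONS = {0: "United States", 1: "Ireland", 2: "Singapore"}

def shard_for(user_id: int, num_shards: int = 3) -> int:
    """Assign a row to a shard by hashing its key (here, a user id)."""
    return hash(user_id) % num_shards

def partition(rows):
    """Split one logical table into per-jurisdiction partitions."""
    shards = defaultdict(list)
    for row in rows:
        jurisdiction = SHARD_JURISDICTIONS[shard_for(row["user_id"])]
        shards[jurisdiction].append(row)
    return dict(shards)

# A toy "table" of user records; each shard holds only its own rows.
table = [{"user_id": uid, "posts": []} for uid in range(12)]
for jurisdiction, rows in partition(table).items():
    print(f"{jurisdiction}: {len(rows)} rows")
```

On this toy model, no single state’s compulsory process reaches the whole table; a litigant in The Gambia’s position would need a disclosure mechanism in each hosting jurisdiction, which is exactly the dependence on domestic procedural law noted above.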

Conclusion

I have offered here a modest elaboration of one brief discussion in Democracies and International Law. That even a single page of the book allows for such elaboration is evidence of its intellectual fecundity. The page on which I drew is embedded in a discussion of authoritarian international law, drawing out how the international arrangements for controlling the internet’s basic structure might be captured by authoritarian actors.71 I have aimed here to supplement this focus by pointing out other mechanisms through which international law might bite on democratic survival. In so doing, I have underscored key themes of Democracies and International Law: the central role of regional bodies as sources of both substantive law and enforcement resources, the interaction of international bodies with domestic institutions (in particular, courts), and the importance of situating law in its institutional context.

At the same time, I think a modest supplement to the analytic apparatus of Democracies and International Law can also be discerned here. Ginsburg’s analysis embraces a measure of realism but remains relatively state-centered. The rise of transnational platforms capable of endogenously generating their own forms of governance complicates this picture. Those platforms are not just shaped by the ideologies and material opportunities of early twenty-first-century democratic capitalism. They exercise an autonomous influence on the flow of democratic ideas and the possibility of democratic endurance. If we want a fuller account of how international law and institutions influence national democracies, we could do worse than to take account of those platforms—not just as agents of democratic erosion but as authors of international norms. We can recognize that transnational platforms exert an influence as consequential as that of states without abandoning the state-centered approach that Ginsburg masterfully takes.

  • 1Tom Ginsburg, Democracies and International Law (2021).
  • 2See id. at 144–73 (discussing regional bodies in Africa and Europe).
  • 3See id. at 225–29; see also Tom Ginsburg, Democracies and International Law: An Update, 23 Chi. J. Int’l L. X (2022).
  • 4Ginsburg, supra note 1, at 227–28 (discussing GA Res. 72/12, Draft United Nations Convention on Cooperation in Combatting Cybercrime, Oct. 16, 2017, at https://undocs.org/Home/Mobile?FinalSymbol=A%2FC.3%2F72%2F12&Language=E&DeviceType=Desktop&LangRequested=False (last visited Apr. 24, 2022)).
  • 5See id. at 227.
  • 6Tom Ginsburg & Aziz Z. Huq, How to Save a Constitutional Democracy 11 (2018).
  • 7News Consumption Across Social Media in 2021, Pew Rsch. Ctr. (Sept. 21, 2021), https://perma.cc/Y3LY-7NT3.
  • 8I avoid the more ideologically loaded term “fake news.”
  • 9Hunt Allcott & Matthew Gentzkow, Social Media and Fake News in the 2016 Election, 31 J. Econ. Persp. 211, 211–12 (2017); see also R. Kelly Garrett, Social Media’s Contribution to Political Misperceptions in U.S. Presidential Elections, 14 PLoS ONE 1, 1–2 (Mar. 27, 2019).
  • 10Perhaps by four orders of magnitude. Soroush Vosoughi, Deb Roy & Sinan Aral, The Spread of True and False News Online, 359 Science 1146, 1146 (2018). This fallacy in “the marketplace of ideas” rhetoric is occasionally rediscovered or re-proved but is an old point. See, e.g., Bernard Williams, On Hating and Despising Philosophy, London Rev. Books (Apr. 18, 1996), https://perma.cc/9N6X-XV7A.
  • 11See, e.g., Darren L. Linvill & Patrick L. Warren, Troll Factories: Manufacturing Specialized Disinformation on Twitter, 37 Pol. Commc’n 447 (2020).
  • 12See Hunt Allcott, Matthew Gentzkow & Chuan Yu, Trends in the Diffusion of Misinformation on Social Media, 6 Res. & Pol. 1, 2 (2019) (charting decrease in misinformation on Facebook after 2016, but not Twitter).
  • 13See generally Yochai Benkler, Robert Faris & Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (2018) (casting doubt on the claim that the outcome of the 2016 presidential election was influenced by Russian misinformation).
  • 14Cf. Int’l IDEA, Global State of Democracy Report 2021 (2021), https://perma.cc/C25R-UPE4 (finding that “the United States, the bastion of global democracy, fell victim to authoritarian tendencies itself, and was knocked down a significant number of steps on the democratic scale”).
  • 15See, e.g., Scott Shane, These Are the Ads Russia Bought on Facebook in 2016, N.Y. Times (Nov. 1, 2017), https://perma.cc/KF7T-9PW6; Scott Shane & Mark Mazzetti, The Plot to Subvert an Election, N.Y. Times (Sept. 20, 2018), https://perma.cc/DH32-UNGR; Peter Pomerantsev, Authoritarianism Goes Global (II): The Kremlin’s Information War, 26 J. Democracy 40 (2015). On the role of China, see Anne-Marie Brady, Authoritarianism Goes Global (II): China’s Foreign Propaganda Machine, 26 J. Democracy 51, 51–59 (2015). For a broad reading of the evidence concerning China and Russia, see David Sloss, Tyrants on Twitter: Protecting Democracies from Information Warfare 5–112 (2022).
  • 16Alexandra A. Siegel, Online Hate Speech, in Social Media and Democracy: The State of the Field, Prospects for Reform 56, 57 (Nathaniel Persily & Joshua A. Tucker eds., 2020).
  • 17Id. at 63.
  • 18Id. at 70.
  • 19Robert Gorwa, Reuben Binns, & Christian Katzenbach, Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance, 7 Big Data & Soc. 1, 1–2 (2020).
  • 20See sources cited infra notes 66–68.
  • 21See Ginsburg, supra note 1, at 227.
  • 22David E. Sanger, Obama Strikes Back at Russia for Election Hacking, N.Y. Times (Dec. 29, 2016), https://perma.cc/B5HX-XWKF.
  • 23See generally Sophia Rosenfeld, Democracy and Truth (2018).
  • 24Indeed, all the way to the founding. See generally Gordon S. Wood, Conspiracy and the Paranoid Style: Causality and Deceit in the Eighteenth Century, 39 William & Mary Q. 402 (1982).
  • 25The role of social media in propagating hate speech linked to either discrete or generalized forms of violence raises quite different ethical questions from misinformation. It is true that previous modes of mass communication have played a role in earlier genocides—the use of radio in Rwanda in 1994 is an example—and we cannot be certain that a pre-internet technology would not have had much the same effect as Facebook in the Myanmar case. Yet our tolerance for ethnic or religious violence should, in my view, be far lower than our tolerance for incremental derogations of democracy. And just as earlier technologies are hedged around by regulatory safeguards against abuse, so too it might be appropriate to criticize Facebook not so much because it facilitated genocide, but rather because it failed to establish available and feasible precautionary measures to mitigate that facilitation.
  • 26Olivia Solon, To Censor or Sanction Extreme Content? Either Way, Facebook Can’t Win, Guardian (May 23, 2017), https://perma.cc/R2R6-DQVM.
  • 27Kyle Langvardt, Regulating Online Content Moderation, 106 Geo. L.J. 1353, 1359–60 (2018).
  • 28The most comprehensive account remains Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (2018).
  • 29See David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, ¶¶ 44–48, 70–72, U.N. Doc. A/HRC/38/35 (Apr. 6, 2018), https://perma.cc/QTD9-KGLL.
  • 30See, e.g., Jack Dorsey (@jack), Twitter (Aug. 10, 2018, 9:58 AM), https://twitter.com/jack/status/1027962500438843397; Monika Bickert, Updating the Values That Inform Our Community Standards, Facebook Newsroom (Sept. 12, 2019), https://perma.cc/3K3K-TF2U.
  • 31Evelyn Douek, The Limits of International Law in Content Moderation, 6 U.C. Irvine J. Int’l Transnat’l & Comp. L. 37, 53–54, 63 (2021).
  • 32Id. at 59.
  • 33See id. at 61.
  • 34For a skeptical view, see Tarleton Gillespie, Content Moderation, AI, and the Question of Scale, 7 Big Data & Soc. 1, 2–3 (2020).
  • 35See Ginsburg, supra note 1, at 227; see also Lorenz Langer, The Rise (and Fall?) of Defamation of Religions, 35 Yale J. Int’l L. 257 (2010).
  • 36Hannah Bloch-Wehba, Global Platform Governance: Private Power in the Shadow of the State, 72 SMU L. Rev. 27, 31 (2019).
  • 37Brad Smith, President, Microsoft Corp., Keynote Address at the RSA Conference 2017: The Need for a Digital Geneva Convention 12 (Feb. 14, 2017) (transcript available at https://perma.cc/GKB5-SCUF).
  • 38Kristen E. Eichensehr, Digital Switzerlands, 167 U. Pa. L. Rev. 665, 668 (2019).
  • 39Id. at 727; see also Barrie Sander, Democratic Disruption in the Age of Social Media: Between Marketized and Structural Conceptions of Human Rights Law, 32 Eur. J. Int’l L. 159, 162 (2021) (expressing skepticism of “market friendly” understandings of human rights law).
  • 40See, e.g., Sander, supra note 39, at 168 (arguing that “blanket bans of disinformation or untruthful expression generally lack sufficient precision to be compatible with the legality test under Article 19(3) of the ICCPR and also fall foul of the necessity test”).
  • 41See Andrew Keane Woods, Litigating Data Sovereignty, 128 Yale L.J. 328, 340 (2018).
  • 42See Case C-18/18, Glawischnig-Piesczek v. Facebook Ireland Ltd., ECLI:EU:C:2019:821, ¶ 53 (Oct. 3, 2019) (“[A]rticle 15(1) [of directive 2000/31], must be interpreted as meaning that it does not preclude a court of a Member State from . . . ordering a host provider to remove information . . . worldwide within the framework of the relevant international law.”); see also Adam Satariano, Facebook Can Be Forced to Delete Content Worldwide, E.U.’s Top Court Says, N.Y. Times (Oct. 3, 2019), https://perma.cc/8QHZ-YX8D.
  • 43See Case C-70/10, Scarlet Extended SA v. SABAM, ECLI:EU:C:2011:771; Case C-360/10, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v. Netlog NV, ECLI:EU:C:2012:85.
  • 44Loi 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l’information [Law 2018-1202 of December 22, 2018 on the fight against the manipulation of information], Journal Officiel de la République Française [J.O.] [Official Gazette of France] (Dec. 23, 2018), https://perma.cc/4HJJ-B5TX.
  • 45For a summary, see Adam Krzywon, Summary Judicial Proceedings as a Measure for Electoral Disinformation: Defining the European Standard, 22 German L.J. 673, 682–83 (2021).
  • 46Id. at 685–87 (discussing possible interactions).
  • 47Kristen E. Eichensehr, The Cyber-Law of Nations, 103 Geo. L.J. 317, 327 (2015).
  • 48Milton Mueller, Will the Internet Fragment? 34 (2017). A famous case is the effort by a Brazilian judge to ban WhatsApp. See Jonathan Watts, Judge Lifts WhatsApp Ban in Brazil After Ruling Block Punished Users Unfairly, Guardian (Dec. 17, 2015), https://perma.cc/HHS8-N824.
  • 49Cf. Sloss, supra note 15, at 152–56 (proposing a new “Alliance for Democracy” that would cooperate against coordinated misinformation campaigns).
  • 50Giovanni De Gregorio, The Rise of Digital Constitutionalism in the European Union, 19 Int’l J. Const. L. 41, 67 (2021).
  • 51European Commission Press Release IP/16/1937, European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech (May 31, 2016), https://perma.cc/Q4H2-LXJU.
  • 52See id.
  • 53See European Comm’n, Code of Conduct on Countering Illegal Hate Speech Online: First Results on Implementation 1 (Dec. 2016), https://perma.cc/9VAJ-F6U8.
  • 54See Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 Notre Dame L. Rev. 1035, 1048 (2018).
  • 55Id. at 1055, 1058.
  • 56Caitlin Ring Carlson & Hayley Rousselle, Report and Repeat: Investigating Facebook’s Hate Speech Removal Process, First Monday (Jan. 27, 2020), https://perma.cc/3LXZ-W8SJ.
  • 57See Jenny Domino, Crime as Cognitive Constraint: Facebook’s Role in Myanmar’s Incitement Landscape and the Promise of International Tort Liability, 52 Case W. Rsrv. J. Int’l L. 143, 182–83 (2020).
  • 58See Jeff Horwitz & Deepa Seetharaman, Facebook Executives Shut Down Efforts to Make the Site Less Divisive: The Social-Media Giant Internally Studied How It Polarizes Users, Then Largely Shelved the Research, Wall St. J. (May 26, 2020), https://perma.cc/6EZR-7A95.
  • 59Aziz Z. Huq, The Public Trust in Data, 110 Geo. L.J. 333, 362 (2021).
  • 60See The Digital Services Act Package, European Comm’n, https://perma.cc/K4GA-4JS6.
  • 61See Andrej Savin, The EU Digital Services Act: Toward a More Responsible Internet, 24 J. Internet L. 1, 16 (2021).
  • 62Id. at 18.
  • 63Sander, supra note 39, at 180.
  • 64See Mehreen Khan & Madhumita Murgia, EU Draws Up Sweeping Rules to Curb Illegal Online Content, Fin. Times (July 23, 2019), https://perma.cc/WX4J-8KDK.
  • 65For example, the much-discussed right to be forgotten under European law has no global application. See Leo Kelion, Google Wins Landmark Right to Be Forgotten Case, BBC (Sept. 24, 2019), https://perma.cc/LNP7-N66G.
  • 66Application of the Convention on the Prevention and Punishment of the Crime of Genocide (The Gambia v. Myanmar), 2019 I.C.J. (Nov. 11) [hereinafter Gambia v. Myanmar ICJ App.].
  • 67U.S. Court Asked to Force Facebook to Release Myanmar Officials’ Data for Genocide Case, Reuters (June 10, 2020), https://perma.cc/3J9Z-XMK7.
  • 68Republic of the Gambia v. Facebook, Inc., No. CV 20-MC-36-JEB-ZMF, 2021 WL 4304851, at *16 (D.D.C. Sept. 22, 2021) (vacated in part by Republic of Gambia v. Facebook, Inc., No. MC 20-36 (JEB), 2021 WL 5758877 (D.D.C. Dec. 3, 2021)).
  • 69See id.
  • 70Anupam Chander & Uyên P. Lê, Data Nationalism, 64 Emory L.J. 677, 719 (2015); see also Paul M. Schwartz, Legal Access to the Global Cloud, 118 Colum. L. Rev. 1681, 1694–95 (2018).
  • 71Ginsburg, supra note 1, at 227–28.