The Journal of Things We Like (Lots)

The Data Economy is Political

Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available on SSRN.

Between 2018 and 2020, nine proposals (or discussion drafts) for comprehensive data privacy legislation were introduced in the U.S. Congress. 28 states introduced 42 comprehensive privacy bills during that time. This is on top of the European Union’s General Data Protection Regulation, which took effect in 2018, and the California Consumer Privacy Act, which took effect in 2020. Clearly, U.S. policymakers are eager to be active on privacy.

Are these privacy laws any good? Put differently, are policymakers drafting, debating, and enacting the kind of privacy laws we need to address the problems of informational capitalism? In Democratic Data: A Relational Theory for Data Governance, Salome Viljoen suggests that the answer is no.

Viljoen’s argument is simple. The information industry’s data collection practices are “primarily aimed at deriving population-level insights from data subjects” that are then applied to individuals who share those characteristics through design nudges, behavioral advertising, and political microtargeting, among other uses. (P. 3.) But privacy laws, both in their traditional form and in these recent proposals, “attempt to reduce legal interests in information to individualist claims subject to individualistic remedies that are structurally incapable of representing this fundamental population-level purpose of data protection.” (P. 3.)

Viljoen could not be more right, both in her diagnosis of current proposals and in their structural mismatch with the privacy, justice, and dignitary interests undermined by data-driven business models that traffic in the commodification of the human experience.

Viljoen first notes that privacy has traditionally been legally conceptualized as an individual right. The Fair Information Practice Principles (FIPPs) and a long series of federal sectoral privacy laws and state statutes grant privacy rights to consumers qua individuals. This new crop of privacy laws is no different. They guarantee rights of access, correction, deletion, and portability, among others. But all of these rights are for the individual consumer. Notice-and-choice, the framework for much of U.S. privacy law, operated the same way: Its consent paradigm centered the right to choose or consent in the individual internet user.

This also tracks the scholarly literature in privacy since 1890. Privacy has long been understood as either a negative—freedom from—or positive—freedom to—right, but almost always a right located in the individual. Modern privacy scholarship has moved away from this model, recognizing privacy’s social value, its importance in social interaction and image management, and the connection between privacy and social trust. That terrain is well worn; its inclusion here speaks both to Viljoen’s in-depth knowledge of the literature in her field and to law review editors’ adherence to a model of overlong “background” sections.

Viljoen’s contribution is not so much her descriptive claim that privacy law has traditionally conceptualized privacy in individualistic terms as where she goes from there.

Her notion of “data governance’s sociality problem” is compelling. (P. 23.) Viljoen argues that the relationships between individuals and the information industry can be mapped along two axes: vertical and horizontal. (Pp. 25-27.) The vertical axis is the relationship between us and data collectors. When we agree to Instagram’s terms and conditions and upload a photo of our new dog, we are creating a vertical relationship with Instagram and its parent company, Facebook. The terms of that relationship “structure[] the process whereby data subjects exchange data about themselves for the digital services the data collector provides.”

“Horizontal data relations” are those relations between and among us, data subjects all, who share relevant characteristics. Those who “match” on OKCupid are in a horizontal data relationship with each other. A gay man who “likes” pictures of Corgis is in a horizontal data relationship with those targeted for advertisements based on those latent characteristics. So is a person arrested because a facial recognition tool identified him as a suspect, where the tool was trained on a picture of the same tattoo voluntarily uploaded by someone socially connected to him. (P. 26.)

This leads to a critical point. The person who was arrested has a privacy interest in the collection, use, and processing of data about his tattoo. But his interest is independent of the interests of the person who actually uploaded the picture, who started this causal chain of picture, collection, processing, training AI, misidentification, and arrest. It doesn’t matter where the original picture came from. Whoever uploaded it, the victim’s privacy interest is not represented in the vertical data relationship triggered by terms and conditions, a privacy policy, or a picture upload.

Viljoen’s second important contribution flows from the first. She offers a normative diagnosis for why horizontal relationships matter for data governance law. That is, data extraction’s harms stem not only from concerns over my privacy or our visceral reaction to creepy, ubiquitous surveillance. By merely using technologies that track and extract data from us, we become unwitting accomplices in the process through which industry translates our behavior into designs, technologies, and patterns that shape and manipulate everyone else. Abetting this system is a precondition of participation in the information age.

For Viljoen, then, the information economy’s core evil is that it conscripts us all in a project of mass subordination that is (not so incidentally) making a few people very, very rich.

This may be Viljoen’s central contribution, and it has already changed my understanding of privacy. Focusing on the individual elides the population-level harms Viljoen highlights. Data flows classify and categorize. Data helps industry develop models to predict and change behavior. And it is precisely this connection between data and the identification of relationships between groups of people that creates economic value. We are deeply enmeshed in perpetuating a vicious cycle that subordinates data subjects while enriching Big Tech. There is no way an individual rights-based regime that gives one person some measure of control over their data can ever address this problem.

And that is, at least in part, where current proposals for comprehensive privacy laws go awry. Although there are some differences at the margins, most proposals share the same two-part structure: they guarantee individual rights of control and rely on internal compliance structures to manage data collection and use. The rights model, Viljoen shows, inadequately addresses the privacy harms of informational capitalism. So, for that matter, does the compliance model. But that conversation is for another day.

Cite as: Ari Waldman, The Data Economy is Political, JOTWELL (February 12, 2021) (reviewing Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available on SSRN).

No Machines in the Garden

Rebecca Crootof & BJ Ard, Structuring TechLaw, __ Harv. J.L. & Tech. __ (forthcoming 2020), available at SSRN.

A decade ago, I mused about the implications and limits of what was then called “cyberlaw.” By that time, scholars had spent roughly 15 years experiencing the internet and speculating that a new jurisprudential era had dawned in its wake. The dialogue between the speculators and their critics was famously encapsulated in a pair of journal articles. Lawrence Lessig celebrated the transformative potential of what we used to call “cyberspace” for law. Judge Frank Easterbrook insisted on the continuing utility of existing law in solving cyber-problems. The latter’s pejorative characterization of cyberlaw as “law of the horse” has endured as a metonym for the idea that law ought not to be tailored too specifically to social problems prompted by some exotic new device.

It turns out, as I mused, that Lessig and Easterbrook and others in their respective camps were arguing on the wrong ground. Cyberspace and cyberlaw pointed the way to an integrative jurisprudential project, in which novel technologies and their uses motivate a larger rethinking of the roles and purposes of law, rather than a jurisprudence of exception (Lessig) or a jurisprudence of tradition (Easterbrook). But it has taken some time for elements of an integrative project to emerge. Rebecca Crootof and BJ Ard, in Structuring Techlaw, are among those who are now building in that direction and away from scholars’ efforts to justify legal exceptionalism in response to various metaphorical horses – among them algorithmic decision making, data analytics, robotics, autonomous vehicles, 3D printing, recombinant DNA, genome editing, and synthetic biology. Their story is not, however, primarily one of power, ideology, markets, social norms, or technological affordances. Julie Cohen, among others, has taken that approach. Structuring Techlaw is resolutely and therefore usefully positivist. The law and legal methods still matter, as such. The law itself can be adapted, reformed, and perhaps transformed.

In that spirit, Structuring Techlaw offers a framework for organizing legal analysis (Pp. 8-9) rather than a solution, so it is (admirably, in my opinion) primarily descriptive rather than normative. Like Leo Marx’s classic The Machine in the Garden, which explored the industrial interruption of the pastoral in American literature, it clarifies the situation. The article is a field guide to problems in technology and law, rather than a theory or a jurisprudential intervention. As a field guide, few of its details will be new to scholars, lawyers, or even students familiar with technology policy debates of the last 25 years. But the paper collects and organizes those details in a thoughtful, clear way, with priority given to traditional legal forms and to illustrations drawn from a wide variety of technology-animated social problems. Historical problems get attention, including those that long pre-dated the internet, along with contemporary challenges. The resulting framework is for use by scholars, policy makers, and other decision makers confronted with what Crootof and Ard characterize as a critical problem common to all types of new technology: legal uncertainty in the application and design of relevant rules.

Their broad view requires a broad beginning. “Technology” means devices that extend human capabilities. (P. 3 n.1.) Structuring Techlaw offers the neologism “techlaw” to distinguish solutions to larger-scale social problems created by technology in society from technology-enabled solutions to specific problems in the provision of professional services, or so-called legaltech or lawtech. (Id.)

Techlaw exposes legal uncertainties of three types. The framework consists of those three types, in layers, with some nuances, details, and illustrations added for good measure, together with likely strategies for dealing with each one. Each type of uncertainty is described in terms of familiar debates. Some of those concern the welfare effects of precautionary and permissive regulatory approaches. Some concern choices among updating existing law, imagining new law, and reconceptualizing the legal regime in the context of institutional choices. The full framework is laid out in a single graphic. (P. 11.)

Layer one consists of application uncertainties, in which existing legal rules are deemed to be either too narrow (gaps) or too broad (overlaps) as responses to technology-fostered social problems. Regular or traditional tools of legal interpretation may be used effectively here.

Layer two consists of normative uncertainties, in which technology-fostered problems expose larger concerns about the purposes and functions of the laws in question. Existing law may be revealed to be underinclusive or overinclusive relative to its original aims. This is the space for normative realignment of the law.

Layer three consists of institutional uncertainties, in which the roles and responsibilities of different legal actors are called into question based on concerns about legitimacy, authority, and competence. Are technology-fostered problems best solved by updates supplied by legislatures? By administrative agencies? By courts?

This is not so much a functioning method for reaching a judgment in a particular case as a tool for understanding. Crootof and Ard round out their description with examples at multiple points along the way, but they don’t seek to apply the framework fully either to a real historical case or to an imaginary new one. Instead, the framework is best understood as they describe it (P. 47 n.187): as an idealized template by which observers and participants alike can begin to discern and respond to common patterns in law-making, rather than deal with each technology as a shiny new object or, worse, as a distracting but entertaining squirrel. The framework may produce an integrated jurisprudence of technology and law as it is used over time, over multiple applications.

Will it? If the challenge of resolving uncertainties in legal meaning evokes H.L.A. Hart’s famous “No Vehicles in the Park” illustration of interpretive flexibilities in the law1 – a positivist polestar – that is no accident. Structuring Techlaw is replete with references to Hart (P. 16 n.35) and Hartian interpretations and extensions. (Pp. 69-70.) But one needs a way to get from what this rule means (per Hart) to how this rule is part of a pattern of multiple rules, some for equivalent instances and some for different ones. Crootof and Ard manage the transition to a pattern of multiple rules via an overview of the critical role of analogical reasoning and framing effects in legal interpretation. (Pp. 52–62.) That move is surely the right one; analogies help us scale from case to case, from case to rule, and from rule to system. But its success depends on any number of empirical claims as to how legal reasoning actually works in practice, such as those summarized by Dan Hunter, that are beyond the scope of this work.

Moreover, as Crootof and Ard acknowledge, fully specifying the framework and building the resulting field of law requires exploring a standard set of questions regarding comparative institutional advantage. They don’t do that in Structuring Techlaw. Tantalizingly, they promise that exploration in an additional paper. (P. 9 n.19.)

Even more tantalizing are glimpses of jurisprudence yet to come. I wondered a bit about Structuring TechLaw’s emphasis on legal uncertainty. The return to positivism is an important one, but some scholars today place significant normative weight on humans and humanity in legal systems, precisely because of the lack of predictability, certainty, and consistency that human imaginations entail in practice.2 Some scholars argue that contestability of legal meaning, an attribute that is akin to uncertainty, is both essential to the rule of law and threatened by some novel technologies.3 Crootof and Ard hint that there is more in store on this point. Understanding humans in technological systems, or “loops,” is the promised subject matter of an “aspirational” manuscript. (P. 12 n.26.)

I can’t wait.

  1. H.L.A. Hart, Positivism and the Separation of Law and Morals, 71 Harv. L. Rev. 593 (1958). Lon Fuller’s reply was published as Lon L. Fuller, Positivism and Fidelity to Law – A Reply to Professor Hart, 71 Harv. L. Rev. 630 (1958).
  2. Brett M. Frischmann & Evan Selinger, Re-Engineering Humanity (2018); Meg Leta Jones & Karen Levy, Sporting Chances: Robot Referees and the Automation of Enforcement, We Robot 2017.
  3. Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, 68 U. Toronto L.J. Supp. 1, 12 (2018). DOI: 10.3138/utlj.2017-0044
Cite as: Michael Madison, No Machines in the Garden, JOTWELL (January 13, 2021) (reviewing Rebecca Crootof & BJ Ard, Structuring TechLaw, __ Harv. J.L. & Tech. __ (forthcoming 2020), available at SSRN).

Cyberattacks, Accusations, and the Making of International Law

Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming, 2020), available at SSRN.

In recent years, states have begun accusing other states of cyberattacks with some frequency. Just in the past few months, Canada, the United Kingdom, and the United States have warned of Russian intelligence services targeting COVID-19 vaccine development, the United States issued an alert about North Korea robbing banks via remote access, and U.S. prosecutors indicted hackers linked to China’s Ministry of State Security for stealing intellectual property.

The flurry of cyberattack attributions raises questions about what effects (if any) they have and what effects the attributors intend them to have. In their forthcoming article “Beyond Naming and Shaming: Accusations and International Law in Cybersecurity,” Martha Finnemore and Duncan Hollis offer a nuanced set of answers focused, as the title suggests, on moving beyond the idea that the attributions are just intended to name and shame states.

Government officials have repeatedly said that public attributions of cyberattacks to other states are intended to name and shame the perpetrator states and to cause them to change their behavior. The problem is that this strategy hasn’t seemed to work very well, prompting criticism from academics. Finnemore and Hollis helpfully offer an explanation for why naming and shaming is more difficult in the cybersecurity sphere than other areas of international law and international relations. They argue that existing literature on naming and shaming includes an implicit premise: that there is a preexisting norm against which compliance and deviation can be measured. (P. 27.) When there are existing norms or legal prohibitions, like the prohibitions on torture and genocide, accused states “do not contest [the] norms,” but “[i]nstead, . . . deny what the [accuser] says happened or offer a different interpretation or application of the norm than that proffered by the accuser.” (P. 27.) But in the cybersecurity realm, “the norms (and international law) governing online behavior are not always clear and well-entrenched,” particularly across different blocs of countries, and so enforcing norms via accusations is “tricky.” (P. 27.)

But that doesn’t mean cyberattack attributions lack value. Finnemore and Hollis contribute to a growing academic literature about other functions public attributions can serve. The most interesting of these is attributions’ potential constitutive role in international norms and international law. Finnemore and Hollis argue that accusations of state responsibility for a cyberattack can

serve[] as an opening bid, aimed at a particular community, indicating not just the accuser’s disapproval of the cited operation, but often, too, its proposal (perhaps implicit) that all such conduct should be barred, i.e., that there should be a norm against such conduct. Accusations may thus lay out the contours of ‘bad behavior’ along with an argument about why, exactly, the behavior is undesirable. Other actors may then respond to the accusation. They may accept some of it; they may accept all of it; they may accept it in some situations but not others; or, they may reject it entirely. It is these interactions between the accuser, the accused, and third party audiences that—over time—may result in the creation of a new norm (or its failure). (Pp. 14-15 (footnote omitted).)

The role of cyberattack attributions in setting the rules of the road in cyberspace need not stop with international norms. Rather, public attributions can also contribute to establishing international law. Finnemore and Hollis argue, “Today’s accusations may serve as early evidence of a ‘usage’—that is, a habitual practice followed without any sense of legal obligation,” but “[i]f such accusations persist and spread over time, states may come to assume that these accusations are evidence of opinio juris, delineating which acts are either appropriate or wrongful as a matter of international law.” (Pp. 16-17.)

Once one accepts the argument that public attributions play a role in creating international norms and law to govern state actions in cyberspace, important questions follow, including how such attributions should be made. I have argued that states should establish an international law rule requiring governments that engage in public attributions of cyberattacks to other states to provide sufficient evidence to enable crosschecking or corroboration of their attributions. Such a rule would help to ensure that attributions are accurate and credible and would thereby insulate the process of setting rules of the road for cyberspace from being skewed or tainted by accidentally or willfully false attributions that give an inaccurate picture of state practice and opinio juris. Other ongoing scholarly and policy debates center on determining the appropriate roles that governments, private companies, international entities, and academic and other experts should play in accusations against states.

One could quibble with parts of Finnemore and Hollis’s article, perhaps especially their argument for changing terminology. The authors acknowledge that “[s]tates and scholars” generally call the process of assigning responsibility for a cyberattack “attribution” (P. 8), but they argue instead for using “accusation” (P. 7), reducing “attribution” to a component of an accusation and limiting it to “the process of associating what happened with a particular actor or territory.” (P. 6.) Although it’s true that “attribution” can have different meanings (P. 8), Finnemore and Hollis are fighting an uphill battle given the entrenched use of “attribution” and a working practice of specifying which kind or aspect of attribution is at issue in a particular context. Finnemore and Hollis’s term “accusation” also presents its own difficulties. For example, they argue, “Accusations can occur without attribution (i.e., when accusers say ‘we do not know who did this, but it happened, and it was bad.’)” (P. 8.) But in common parlance, accusations require an object—who is accused? An “accusation” without an object doesn’t really accuse anyone or anything.

Whatever one terms the phenomenon of states assigning responsibility for carrying out cyberattacks, Finnemore and Hollis rightly flag its importance to establishing the international rules governing state behavior in cyberspace. Moving toward a more sophisticated understanding of the roles that accusations or attributions of cyberattacks can play is a welcome contribution to an emerging academic field and important area of international relations.

Cite as: Kristen Eichensehr, Cyberattacks, Accusations, and the Making of International Law, JOTWELL (December 2, 2020) (reviewing Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming, 2020), available at SSRN).

Are Data Privacy Laws Trade Barriers?

What distinguishes data protection (that is, legitimate privacy law) from data protectionism (arguably a barrier to trade)? Whether a country can use its domestic privacy laws to either de jure or de facto require a company to keep citizens’ personal data within that country’s borders is a significant point of international contention right now, especially between the United States and the European Union. In July, the Court of Justice of the EU invalidated (again) the sui generis mechanism for cross-border personal data transfers between the European Union and the United States (the “Privacy Shield”). The Court’s “Schrems II” decision makes it all the more likely that the United States will attempt to revisit the matter through strategic free trade agreement negotiations—and makes Svetlana Yakovleva’s Privacy Protection(ism): The Latest Wave of Trade Constraints on Regulatory Autonomy all the more timely and important.

Yakovleva observes that in recent free trade agreement negotiations, including at the World Trade Organization (WTO), the United States has pushed to characterize restraints on cross-border data flows as a protectionist trade measure, while the European Union, by contrast, has largely advocated for national regulatory autonomy. The outcome of this conflict over purported “digital protectionism” will have practical ramifications for transnational companies that regularly deal in cross-border data flows. It will also have serious theoretical consequences for ongoing and familiar discussions of how transnational law might bridge—or override—deep domestic regulatory divides. Yakovleva nimbly weaves together a history of the term “protectionism,” Foucauldian discourse theory, and the minute details of recent free trade agreement negotiations to provide an authoritative account of what exactly is at stake. Her big contribution is to tell us all to watch our language: one person’s “digital protectionism” can be another’s “fundamental right.”

Yakovleva opens with a broad discussion of the history of the term “protectionism” as it has been used in free trade policy and law, noting the term’s changing meanings at different times and in different institutions. She starts here in order to make the central point that meanings are not static; they’re very much constructed, contested, and chosen. The notion of “free trade” was first developed in direct contrast to the once-dominant theory of mercantilism, a strict form of protectionism which counseled “restricting imports, promoting domestic industries, and maintaining self-sufficiency from other countries.” (P. 436.) By contrast, neoclassical free trade theory rested on the concept of comparative advantage: that barriers to trade inefficiently prevent countries from increasing domestic welfare by exchanging goods they can each more efficiently produce.

This history would appear to place protectionism strongly in opposition to fundamental principles of free trade. However, early understandings of protectionism were narrow, focusing on tariffs or quotas on imports, and closely associated with political nationalism. Yakovleva explains that when the General Agreement on Tariffs and Trade (GATT 1947) was signed in 1947, “protectionism” was already a contested term, with the United States blaming trade distortions for the Great Depression and Second World War, and the United Kingdom instead emphasizing “the boundaries that the international trade regime should not cross in relation to domestic policies affecting trade.” (P. 439.) The compromise was GATT 1947’s “embedded liberalism,” which according to Yakovleva made liberalization not a “goal in itself” but “a component of a broader societal goal of maintaining economic stability.” (P. 441.) Practically, this meant that only intentional protectionism qualified as protectionism under the GATT 1947 regime, and domestic regulations with a de facto impact on trade, but not motivated by protectionist intent, largely went unchallenged.

Starting, however, in the 1970s, “new protectionism” was understood to encompass a variety of non-tariff barriers to trade, including domestic policies aimed at quelling growing unemployment. Yakovleva explains that these were precisely the domestic policies that had been deemed legitimate under “embedded liberalism.” At the same time, developed countries, including the United States, began advancing a counter-narrative of “fair trade,” working towards a goal of using international trade law to harmonize a number of domestic regulatory frameworks and thus eliminate “unfair” advantages held by less-regulated developing countries.

By the time the WTO was established in 1994, neoliberal norms had largely (though not exclusively) prevailed. Yakovleva writes that “[t]he main goal of the international trading system… was no longer ‘embedded liberalism,’ but the continued, gradual liberalization of trade.” (P. 457.) The WTO dispute settlement system was increasingly used to evaluate domestic regulations (say, on health or the environment) that caused de facto discrimination against foreign goods. Instead of looking to the regulatory intent of a country, WTO adjudicators looked at the economic impact of a domestic regulation. They did so, too, through the neoliberal lens of the free-trade system, largely without looking to relevant human rights instruments or principles. Practically, Yakovleva claims, this broadened the scope of the term “protectionism,” and thus put all the analytical pressure on the GATT and GATS exceptions, in which the burden of proof that a regulation was not protectionist fell on the country whose regulations were challenged.

What, then, should we make of the more recent notion of “digital protectionism,” or its subset “data protectionism?” “Discourse matters and the discourse is changing,” Yakovleva writes. (P. 473.) Digital protectionism is now part of the vocabulary of free trade, used by lobbyists, negotiators, and academics. (Even though, as Chris Kuner has pointed out, some of the policies now being called protectionist have been in place since the 1970s.) The European Union and the United States in fact both use the terms “digital trade” and “digital protectionism” in policy documents and negotiations. But as Yakovleva convincingly argues, the understanding of and values behind these terms differ vastly, as do the provisions on cross-border data flows advanced by each party in free trade negotiations. “Data protectionism” is not a stable term, but hotly contested.

Contrasts between the U.S. and EU approaches to data privacy abound. What Yakovleva does here is clearly link the relevant distinctions to current trade discourse. She explains that one way of framing the regulation of personal data is to look at such data as an economic asset, where any legal “protection is a precondition of data-intensive trade.” (P. 510.) The alternative is what Yakovleva calls the “moral value approach,” in which data protection law is directed at protecting fundamental human rights. (P. 510.) The EU has in fact historically embraced both frameworks, with an explicit goal of its EU-wide data protection instruments being to free up digital trade between Member States. However, Yakovleva notes that in the EU, the moral value approach will “always prevail” when the two conceptions are in conflict, because of the role the CJEU plays in interpreting EU law in light of the rights to privacy and data protection established in the EU Charter of Fundamental Rights. (P. 506.) The United States, by contrast, emphasizes only the former in trade negotiations, ignoring the possibility that privacy law might not just be economically efficient but can also implicate human rights and flourishing.

This disagreement in discourse has consequences for trade policy. Yakovleva identifies important differences in the current policy approaches to “data protectionism” taken by the U.S. and the EU in trade negotiations—differences every privacy law scholar or policy wonk should learn, if they haven’t already. (For more, see Mira Burri’s recent work.)

U.S. proposals in recent bilateral free trade agreements and at the WTO create a default that cross-border restrictions on the flow of personal data will not be allowed unless they are deemed objectively necessary—a test that Yakovleva points out is often failed in the GATS context. By contrast, the EU enumerates specific instances of inappropriate cross-border restrictions—conveniently, none of them restrictions that the EU itself places on data flows. In its proposed exception language, the EU takes an approach more similar to the national security exception in WTO agreements, deferring to a country’s own subjective assessment of what is necessary. (P. 496.) U.S. proposals characterize data privacy laws as an aspect of economic regulation, needed in order to encourage consumers to disclose more data. EU proposals, by contrast, explicitly refer to human rights.

If there is anything surprising about this, it is that there is some agreement that at least some privacy protection is necessary for trade, rather than inherently protectionist. The key question, as Yakovleva notes, is not whether there should be domestic data privacy law, but what level of protection is legitimate. (P. 515.) She concludes by calling for “a new multidisciplinary discourse… in order to allow each trading party to strike the right balance between globalization… democratic politics, and domestic autonomy to pursue domestic values such as fundamental rights to privacy and data protection.” (P. 513.)

This is an extraordinarily ambitious—and long—article. I remain impressed by its intellectual heft, and the ease with which Yakovleva moves up into discourse theory and then back into the weeds of free trade agreement provisions. Potential readers should also know that although the article clocks in at 104 pages, much of the length comes from footnotes, evidencing Yakovleva’s impressively thorough research. I do wish there had been more engagement with related, parallel conversations about the role of trade in international intellectual property law, and the relationship there between human rights and the trade regime—but for that to have been included, this would have had to become a book.

Yakovleva’s masterful article will strike familiar notes for technology law scholars. It resembles recurring conversations about the internet and jurisdiction, differing free speech norms around the world, and the globalization of intellectual property law, including digital copyright law. How does one address gaps between different domestic regulatory goals and regimes, given that the internet (and its users’ data) can be everywhere instantaneously? While the notion of addressing transatlantic divides in privacy laws through international trade law is not new (the late, wonderful Joel Reidenberg called for an international privacy treaty housed at the WTO back in 1999), Yakovleva brings clear policy expertise and critical insights to the current conversation. These insights will inform not just privacy law scholars, but those tracking international negotiating strategies and framing games in multiple areas of technology law.

Cite as: Margot Kaminski, Are Data Privacy Laws Trade Barriers?, JOTWELL (October 30, 2020) (reviewing Svetlana Yakovleva, Privacy Protection(ism): The Latest Wave of Trade Constraints on Regulatory Autonomy, 74 U. Miami L. Rev. 416 (2020)),

Invisible Holes in History

Which Western institutions aid and abet Chinese censorship? Major Internet companies probably come immediately to mind. In Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, Glenn Tiffert highlights an unexpected set of additional accomplices: scholarly archival platforms.

Tiffert shows that digitization makes it possible for censorship to disappear into the apparently limitless, but silently curated, torrents of information now available—adding a valuable example to Zeynep Tufekci’s catalog of ways that information is distorted online. He explains how “the crude artisanal and industrial forms of publication and censorship familiar to us from centuries past” may shortly give way to “an individuated, dynamic model of information control powered by adaptive algorithms that operate in ways even their creators struggle to understand.”

In 2017, Cambridge University Press “quietly removed 315 articles and book reviews from the online edition of the respected British academic journal The China Quarterly, without consulting the journal’s editors or the affected authors,” making them inaccessible to subscribers in China. While the press ultimately reversed itself, “Springer Nature, which bills itself as the largest academic publisher in the world, capitulated to Chinese requests, effectively arguing that its censorship of over 1,000 of its own publications was a cost of doing business.”

It is possible to alter the archive in even less visible and more global ways. Punishing resource constraints and a turn to digitization have led many libraries to deemphasize physical collections. Unlike the difficult maneuvers required to rewrite history in Orwell’s 1984, the centralization of digital collections makes it relatively simple to tweak censorship so that it reflects whatever past is most useful to the present. Tiffert analyzes how Chinese censors removed most of one side in a debate in “the two dominant academic law journals published in the PRC during the 1950s,” whose print editions “document the construction of China’s post-1949 socialist legal system and the often savage debates that seized it.” These law journals are particularly useful targets for censorship because there are few complete print runs outside the PRC, and the print volumes are fragile and often stored off-site, so digital versions are the only way most people can encounter them. (It is striking that the PRC devoted resources to this obscure corner of legal history, rather than simply trying to shape contemporary accounts of that history.)

These selective edits to online editions “materially distort the historical record but are invisible to the end user,” potentially deceiving good-faith researchers. Tiffert explains that the original issues from 1956 through 1958 “chronicle how budding debates over matters such as judicial independence, the transcendence of law over politics and class, the presumption of innocence, and the heritability of law abruptly gave way to vituperative denunciations of those ideas and their sympathizers.” The online databases, however, have removed 63 articles, constituting more than 8% of the articles and 11% of the total page count during this critical three-year period.

The missing articles are often lead articles—that is, articles the editors presumably thought were especially important. The deletions are often invisible. The online tables of contents show no omissions, and while one of the two authorized platforms on which the censored versions appear would allow counting of page numbers to reveal omitted sequences, the other simply omits page numbers. Tiffert argues that the suppressed authors “promoted values associated with the rule of law and greater separation between party and state,” making it embarrassing for the PRC to preserve “the record of their arguments and the persecutions they endured,” given the unitary version of Chinese history the government prefers.

Tiffert focuses on two publications, but points out that People’s Judicature (the official publication of the courts) and a leading social science journal are missing entire issues. And censorship of more current topics is even more pervasive, including the disappearance of President Xi Jinping’s 2001 doctoral dissertation from databases. A user who searches the online archives of the official party newspaper for sensitive terms that appeared in print can lose access, or get different results “depending on whether the vendors supplying access to the archive host their servers in China or outside of it.” As Tiffert shows by developing his own algorithm, which does a pretty good job of targeting the disfavored articles (he reports a 95% success rate), much of this censorship can be automated.

Copyright law shows up as an additional problem. The U.S. restoration of copyright in foreign works prolongs copyright for 95 years from publication, allowing the Chinese government to assert exclusive U.S. rights in the journals for decades to come (either by claiming copyright ownership directly or pressuring whatever Chinese entity claims copyright to enforce its rights—it is not clear who the owners are under Chinese law, though obviously the current commercial database providers are confident that they have permission from the owners). Though Tiffert notes the §108 limitation for libraries allowing them to make limited copies in the last 20 years of the extended term, he unfortunately does not discuss the strong case for fair use for any article censored by the Chinese government. Today’s fair use jurisprudence provides (1) clear protection for creating a database of all articles, including censored ones, and providing relevant snippets in response to user search, and (2) strong reason to think that providing full access to censored articles would be fair. But it is not surprising that fear, uncertainty and doubt surrounding copyright would deter scholarly archives that might otherwise be willing to preserve and protect this history, especially if they are associated with colleges or universities hoping for a lucrative flow of students from China.

Fair use could be an important addition to Tiffert’s recommendations, including “[d]emanding that providers make unredacted collections available on alternate servers beyond the reach of interested censors.” He also suggests “industry-wide best practices to uphold the integrity of our digital collections,” which would include “transparently disclos[ing] omissions and modifications.” But his larger appeal is ethical: principles that would prevent institutions in democratic societies from accepting this kind of censorship of the past.

Cite as: Rebecca Tushnet, Invisible Holes in History, JOTWELL (October 1, 2020) (reviewing Glenn D. Tiffert, Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, 124 Am. Hist. Rev. 550 (2019), available in draft at The Washington Post),

Code is More Than and Less Than Law

Laurence Diver, Digisprudence: the design of legitimate code, 13 Law, Innovation & Technology __ (forthcoming, 2020), available at LawArXiv.

We often say that code is law, but what kind of law is it? Laurence Diver’s new article, Digisprudence: the design of legitimate code, introduces his ‘digisprudence’ theory, associating himself with the welcome emphasis upon design that is seen in particular in current work on privacy (e.g. Woodrow Hartzog’s Privacy’s Blueprint) and in Ian Kerr’s attention to the power of defaults, and doing so in light of a rich body of scholarship, from well beyond technology law, on law and legitimacy.

Code is not law, Diver says, with tongue slightly in cheek. It is more than law, constituting and regulating at the same time, rather than needing interpretation by addressees as law does. Yet it is also less than law, in the absence of, for instance, the possibility of disobedience. Drawing from ideas in the jurisprudential canon, including the morality of law and the more recent ‘legisprudence’ ideas of Luc Wintgens (on core principles for limiting subjective notions of freedom), Diver asks us to think of how ‘constitutional’ ideas such as legitimacy ought to be embedded in the software ‘legislature’, i.e. the contexts and environments for, and methodologies of, the production of software. He is rightly adamant that we must focus on production, arguing that code must be legitimate from the outset rather than often futilely retrofitted once it is in the wild.

This article summarises the findings of Diver’s doctoral research at the University of Edinburgh, and points to themes of his current work at COHUBICOL (Counting as a Human Being in the Era of Computational Law). (Indeed, digisprudence as a theory is clearly influenced by Edinburgh legal theorists past and present, including Neil MacCormick, Zenon Bankowski, and Diver’s doctoral supervisor Burkhard Schafer). From this work, Diver identifies the centrality of explanation and legitimacy to the acceptability of legal orders, drawing a firm distinction between law and legalism. He finds that code-as-law suffers from the worst excesses of legalism—narrow governance rather than principles, an inability to view and contest decisionmaking—and is, by its nature, resistant to the countervailing forces, such as requirements for certainty, or constraints upon sovereign power, that make law acceptable. (For a related argument, emphasizing the resulting need for new countermovements, see the Jotwell commentary on Julie Cohen’s book Between Truth and Power by Mireille Hildebrandt, who leads the COHUBICOL project.)

This article is full of thoughtful insights, which support the development of the theory of digisprudence, and are also capable of application on their own terms. I highlight two of them here. First, the affordances of software (a science and technology studies concept, increasingly discussed in writing on law and technology, that focuses on how design shapes use and behaviour) are considered alongside the less familiar concept of disaffordances, or the restrictions imposed upon users. Brilliantly, Diver takes note of Lessig’s idea of ‘architectures of control’ but then draws our attention to choices made by designers to embed such disaffordances in objects and systems, engaging with work including that of Dan Lockton (founder of the Imaginaries Lab) and Peter-Paul Verbeek (co-director of the Design Lab in Twente). Second, Diver makes the powerful point that we should not be led by whether code authors position themselves as regulators, or as having the authority to regulate—instead, we should look at what the code does and how it affects users. This is particularly important in a world where much of the production of code happens in the private sector, without some of the more obvious public law mechanisms of accountability and oversight.

In what is largely a conceptual article, Diver nonetheless applies emerging arguments to current circumstances. He chooses blockchain applications for this purpose, though his approach is less about how blockchain disrupts “insert legal area of choice” and more about how the desire for smart contracts and the like challenges how we think about rules. Tellingly, Diver mentions DRM at the outset of the section on blockchain; as with critiques of DRM, Diver asks the reader to reflect on the implications for governance and legitimacy of a widespread shift from more familiar legal approaches towards an apparently promising technological solution.

Digisprudence itself is explained in a table, where the characteristics of computational legalism are matched to Fullerian (morality of law) and legisprudential principles, resulting in a short and clear set of design-focused affordances, of which contestability is the core—because it allows both individuals and institutions to be empowered. If these concepts are considered at the right stage in the process (i.e. at the time of design), a form of legitimacy, recognisable as constitutional in nature, is possible. Quite properly, Diver points to areas that are ripe for digisprudential analysis, including machine learning and robotics.

As, in many parts of the world, a new and quite unusual academic year approaches, there are also some great opportunities to use Diver’s digisprudence theory in teaching law and technology, even for revisiting earlier stages of technological development, such as the rise in influence of commercial social media platforms, or the debates, which now cross the decades, on regulating search. Though studying the way in which code regulates behaviour has rightly become an established feature of technology law, Diver’s contribution calls on us to look to the design process (and research on design) and to the limits of legalism, if we really want to understand and promote the legitimacy of such regulation.

Cite as: Daithí Mac Síthigh, Code is More Than and Less Than Law, JOTWELL (August 14, 2020) (reviewing Laurence Diver, Digisprudence: the design of legitimate code, 13 Law, Innovation & Technology __ (forthcoming, 2020), available at LawArXiv),

Countermovements to Reinstate Countervailing Powers

No, law does not necessarily lag behind technological development. No, smart technologies are not destined to lead the road to either freedom or surveillance. Determinisms of any kind are not what make Julie E. Cohen’s Between Truth and Power: The Legal Constructions of Informational Capitalism a great sensitizer to the mutual transformations that law, economy, power and technology effect.

Instead, the underlying thesis of the book is that to come to terms with the systemic harms of informational capitalism, we need to develop a keen eye for the precise way that legal rights, duties, immunities and powers are deployed and reconfigured to enable the move from a market to a platform economy —while also detecting the emergence of novel entitlements and disentitlements outside Hohfeld’s framework. Steering clear of both technological and economic determinism, Cohen argues that the instrumentalization of legal institutions by powerful economic actors requires new types of Polanyian countermovements, to address and redress outrageous accumulation of economic power.

In my own terms, Cohen asserts that Montesquieu’s countervailing powers require reinvention in the face of the radical reconfiguration of the political economic landscape wrought by the shift from neo-liberal economic markets to monopolistic multi-sided vertically integrated platform economies. This will require what political economist Karl Polanyi called ‘countermovements’ in his seminal 1944 work, The Great Transformation. Economic markets do not grow like grass (they are not ‘natural’) but are the result of legal entitlements and legal constraints. This implies that markets can be ‘made’ in different ways, thus creating different economic incentives and different outcomes (as to equality and freedom). It also implies that the hold of market fundamentalism on other contexts (politics, health, education) is not ‘given’ and can be pushed back. (See a similar but more condensed discussion in Jedediah Britton-Purdy et al., Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784 (2020).)

As the subtitle indicates, this work explains how law contributes to the construction of informational capitalism. The latter refers to a regime where ‘market actors use knowledge, culture, and networked information technologies as means of extracting and appropriating surplus value, including consumer surplus’ (P. 6). It is refreshing though disturbing to be guided through the motions by which some of law’s pathways have been instrumentalised to safeguard privileged private interests where public goods are at stake and both fairness and freedom trampled upon. Such instrumentalization needs to be detailed, called out, and countered.

Cohen weaves a textured narrative with detailed attention to the developments that shaped and reshaped our legal institutions, which in turn shaped and reshaped the pathways of our political economy. Often, she describes opposing accounts of what is at stake, followed by new insights that can only be mined when looking awry – away from conventional oppositions that distract attention from underlying reconstructions. Let me give one example. Discussions of IP law often contrast incentives for individual creation with control over such creation, or reward of original invention with reward of capital investment and corporate risk taking. Cohen uncovers how such discourse remains within the confines of Chicago School economics, with its emphasis on atomistic methodological individualism, consent as a commodity (termed ‘consumer preference’), and a blind eye to power relationships. Instead of staying within the limits of this discourse, she tracks the legislative as well as judicial transformations that enabled the growth of patent portfolios meant to bolster bargaining positions rather than rewarding either individual creativity or innovative risk taking. In doing so, Cohen avoids the usual ideological trenches, keeping her eye on the ball: how the weakening of traditional countervailing powers allows big players to work around, co-opt, or redefine legal institutions that stand in the way of monopolistic control over newly emerging informational sources.

Instead of arguing for a return to liberal markets that supposedly ensured an ideal setting for liberal democracies, Cohen digs deeper into what Polanyi called the ‘double movement’ of 19th and 20th century capitalism. She traces the rise of liberal markets as part of the industrial revolution that was built on the commodification of land, labour and money (the first movement), explaining how the perverse implications of unbridled capital accumulation gave rise to ‘countermovements’ that resulted in market reforms and a strong state to protect against monopolistic power and inequity, thus instigating what in Europe we call social democracies (the second movement). Cohen then demonstrates how the influence of the Chicago School gave rise to a neo-liberal governmentality that makes the idea of an unfettered free market the default setting for pursuing both public and private interests, entangled with an ideology of managerialism. Co-opting the rise of new socio-technical infrastructures that afford rent seeking from the accumulation of (access to) knowledge and information, industrial capitalism has transmuted into informational capitalism, culminating in the platform economy. This, Cohen convincingly argues, requires a new agenda for institutional innovation (new countermovements) that cannot be taken for granted or derived from previous reforms.

As she ends her book, Cohen sees a ‘new window of opportunity that now stands open’, calling for the active engagement of lawyers willing to resist and reform the unprecedented economic power generated by newly shaped neoliberal playing fields. I would agree with Benkler in his 2018 Law and Political Economy blog posts on the ‘Political Economy of Technology’, in which he insists that we should not make the mistake of buying into the mainstream narrative that naturalises both economic markets and technological change, nor reduce the solution space to institutional rearrangement. Instead we should actively collaborate to design and redesign the technological infrastructures that afford informational capitalism.

I believe that Cohen’s analysis of networked socio-technical infrastructures in her Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, Yale University Press (2012), together with the institutional investigations of Between Truth and Power, offer a way to both distinguish and combine institutional and technical redesign as part of the countermovement she calls for. An example would be the legal obligation imposed by the EU General Data Protection Regulation to implement data protection by design. This obligation requires those who deploy data-driven solutions to build protection into their computing systems at the level of their architecture, thus redressing potential power imbalances based on unlimited extraction of personal data at the technical level. Simultaneously, by making this a legal obligation instead of an ethical duty, such redress is institutionalised and becomes enforceable instead of depending upon the ethical inclinations of individual persons or companies.

For a lawyer dedicated to law and the rule of law, Cohen’s account of powerful actors successfully ‘playing’ legal institutions to serve private interests is painful reading. It reminds me that countervailing powers cannot be taken for granted and must be sustained and reinvented; they require new countermovements. This will take more than lawyers, because checks and balances will have to be built into the data- and code-driven architectures that form the backbone of our institutional environment. And those built-in affordances will determine the kind of informational capitalism we must live with.

Cite as: Mireille Hildebrandt, Countermovements to Reinstate Countervailing Powers, JOTWELL (July 17, 2020) (reviewing Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019)),

Old Frauds in New Fintech Bottles

Christopher Odinet, Consumer Bitcredit and Fintech Lending, 69 Ala. L. Rev. 781 (2018).

The COVID crisis has starkly revealed the thin line between middle-class status and destitution in the United States. As a Greater Depression looms, vital assistance from the federal government may soon expire. At that point, the unemployed may need to seek loans for necessities, ranging from rent to food to health care. Advocates for a “public option” in finance have pressed ideas like postal banking or “quantitative easing for the people,” to enable direct government provision of lending for those the market is not serving. They have met a wall of opposition, particularly from libertarian advocates of cyber finance. The tech solutionist alternative is simple: instead of direct government lending, let new financial technology (fintech) companies accumulate more data, and then they can precisely calibrate optimal loan amounts and interest rates. Algorithmic lending, cryptocurrency, and smart contracts all have a place in this vision.

Christopher Odinet’s important article Consumer Bitcredit and Fintech Lending challenges this conventional wisdom, demonstrating that some fintech business models rely on deeply predatory and unfair treatment of borrowers. Through both qualitative and quantitative analysis of over 500 complaints from a Consumer Financial Protection Bureau (CFPB) dataset, Odinet paints a grim picture of fintech malfeasance. Cyberlenders may be a route for financial inclusion for many—but they also pose risks that are poorly understood, and nearly impossible to protect against.

Odinet painstakingly documents and classifies actual consumer complaints, adding an invaluable empirical foundation to widespread worries about the potential for predatory financial inclusion by new entrants in the consumer lending space. I wish I had Odinet’s article when I testified before the Senate Banking Committee on fintech in 2017. Key senators and Trump Administration officials clearly wanted to accelerate deregulation; Odinet shows the importance of an enduring role for both federal and state regulators in this space.

Here are just a few of the narratives Odinet unearths in consumer complaints:

From a borrower trying to auto-pay a loan: “They are outrageous with regard to how many problems they create to prevent you from paying your monthly installment. Clearly, they are trying to get consumers to default, so they can jab you with excessive late (and other) fees.”

From a borrower who paid off her loan in full, only to continue being debited: They “debited my account for bill and grocery money that i [sic] needed to take care of my family.”

From a borrower surprised by a large “origination fee”: “The loan documentation was not available until the loan was funded and there is nothing in the documentation that indicates the origination fee that would be charged.”

From a borrower behind on payments: “This company calls every hour on the hour.”

From a borrower stuck with a high interest rate: “I was told, that after 1 yr. I was going to be able to lower my interested [sic] rate on [my] debt consolidation loan. But, it turns out, that I have to reapply & pay another lending club processing fee. The rate is ridiculously high compare [sic] to current rates. I only took this loan in desperation.”

Other entities appear to be harvesting sensitive financial information from loan applicants, then disappearing without actually funding loans.

Odinet complements these narratives with pie charts classifying complaints. He finds that “the largest number of complaints (over half) relate to how the loan was managed. The next highest category deals with taking out a loan.” His empirical analysis deftly visualizes government data in an accessible manner. It also has immense policy relevance. Emboldened by fintech utopianism, many regulators have loosened the reins for new firms. But this is a misguided approach, since the use of AI in fintech has just as many problems as traditional underwriting—if not more.

Odinet’s work also helped me suss out a paradox in fintech valuation. Investors have justified pouring money into this sector based on the prospect of ever-improving AI finding more profit opportunities than older statistical methods could. However, I’ve also been to presentations by experts on finance algorithms convincingly demonstrating that past repayment history is powerfully predictive of future conduct, and that additional “fringe” or “nontraditional” data adds little to the predictive calculus. So how are fintechs supposed to make above-market returns if their “secret sauce” in reality adds so little to their predictive capacities? As expertly interpreted by Odinet, the CFPB complaints database suggests a ready route to profitability: hiding good old-fashioned cheating, sharp business practices, and dark patterns behind a shiny veneer of futuristic AI. Here, Odinet follows in the footsteps of many scholars who have exposed deep problems in an allegedly new digital economy (including platform capitalism and initial coin offerings). All too often, a narrative of technological advance masks old, disfavored, and illegal practices.

Of course, there will always be rival narratives about the value and dangers of algorithmic lending and fintech platforms. They do extend credit to some individuals who would find no conventional alternatives. Odinet offers important data here that will be of use to both advocates and critics of fintech. He complements his expert and compelling empirical findings with accessible explanations of why they matter. He grounds recommendations for regulatory responses on the empirical findings in this article, focusing on the need for relevant agencies to better understand fintechs’ business models, to detect and deter discrimination, and to ensure more effective disclosures. This is important work that will help governments around the world develop data-informed approaches to the regulation of fintech.

Cite as: Frank Pasquale, Old Frauds in New Fintech Bottles, JOTWELL (June 16, 2020) (reviewing Christopher Odinet, Consumer Bitcredit and Fintech Lending, 69 Ala. L. Rev. 781 (2018)),

The Letter (and Emoji) of the Law

Eric Goldman, Emojis and the Law, 93 Wash. L. Rev. 1227 (2018).

Eric Goldman’s Emojis and the Law is 🔥🔥🔥. If you don’t know what that sentence means, then Goldman’s article is a perceptive early warning about a problem that will increasingly confront courts. Any time legal consequences turn on the content of a communication, there is a live evidentiary question about the meaning of the emoji it contains. Has a criminal defendant who uses 🔫 in an Instagram post threatened a witness? Has a prospective tenant who uses 🐿️ in a text message agreed to lease an apartment? To answer these questions, lawyers and judges must know what emoji are and how they work, and Goldman’s article is the beginning of wisdom.

Even if you did know that the Fire emoji means that Emojis and the Law is “hot” in the sense of Larry Solum’s “Download it while it’s hot!”, Goldman raises deeper questions. How did you learn this meaning? Is it reliably documented in a way that briefs and opinions can cite? What about the fact that the “same” emoji can look dramatically different on an iPhone and on a PC? In short, the interpretation of emoji is problematic in a way that ought to make legal theorists sit up and pay attention.

Goldman begins with an overview of emoji: how they are implemented on a technical level and how they are used socially. The short version of the technical story is that the Unicode Consortium standardizes the characters used on computers (e.g., A, ג, and Њ) and the way each character is encoded in bits (e.g., Latin Capital Letter A is encoded as the bits 01000001 in the widely used UTF-8 encoding). It has now added emoji to the characters it standardizes, giving us such familiar friends as Hundred Points Symbol and Face with Tears of Joy. (Goldman also discusses “emoji” that are run by private companies and not standardized by the Consortium, such as Bitmoji and Memoji, which are their own kettle of worms.)

As Goldman astutely emphasizes, however, the “standardization” of emoji is quite limited. The Consortium defines an emoji’s name and encoding: “Fire Engine” is 11110000 10011111 10011010 10010010 in UTF-8. But it does not control how “Fire Engine” will appear on different platforms. Compare Apple’s realistic ant emoji with Microsoft’s “unsettling” “bee in disguise.” The sender of an emoji may have one image in mind; readers may see something else entirely. Nor does the Consortium control emoji semantics. It was Internet users who turned 🍆 and 🍑 into sexual innuendoes. Moreover, emoji “have the capacity to transcend existing language barriers and be understood by speakers of diverse languages” (P. 1289): emoji can accompany messages in English, Italian, Russian, Hebrew, and Hindi, or even serve as a common dialect for all of them.
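The four-byte UTF-8 sequence quoted above can be verified in a few lines of Python (a sketch of my own, not Goldman’s): Fire Engine is code point U+1F692, and encoding it reproduces exactly those bits, whatever glyph any given platform happens to draw.

```python
# "Fire Engine" is code point U+1F692. Its UTF-8 encoding is fixed by
# the standard; the glyph a platform draws for it is not.
fire_engine = "\U0001F692"
encoded = fire_engine.encode("utf-8")
bits = " ".join(f"{byte:08b}" for byte in encoded)
print(bits)  # 11110000 10011111 10011010 10010010
```

The fixity of the bits is precisely what makes the variability of the glyphs so treacherous: the sender and recipient provably exchanged the same four bytes, yet may have seen very different pictures.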

This combination of technical fixity and social fluidity means that emoji pose difficult interpretive problems. This is hardly unique to law—see generally Gretchen McCulloch’s entertaining and informative popular book on Internet linguistics, Because Internet—but in law the problems arise with particular frequency and intensity. As Goldman has documented, judicial encounters with emoji are rapidly increasing. He found 101 cases that referred to “emoji” or “emoticon” in 2019.

Goldman offers useful advice for lawyers and judges. As a starting point, the variation in how emoji are displayed makes it important to show the actual emoji. “The rat emoji” is not specific enough; maybe it wasn’t the Rat emoji but the Mouse emoji instead. (In a labor case or a witness intimidation case, the difference could matter.) Nor is it enough for a judge to insert an emoji in the PDF version of the court’s opinion. The emoji as displayed on the court’s Windows PC might differ from the emoji as seen on the victim’s Samsung phone or as sent from the defendant’s iPhone. (And that’s to say nothing of the difficulties legal research services create when they fail to reproduce emoji in opinions.) Legal actors dealing with emoji need to be sensitive to these divergences when they try to establish who said what to whom.

Another practical point, which Goldman has developed in his blogging on emoji, is that courts must be careful to remember that the meaning of an emoji is negotiated among the communities that use it to communicate. Sometimes emoji carry metaphorical or symbolic meaning; sometimes their meanings are context-specific. In one case, a court relied on expert testimony to establish that 👑 has a specific and incriminating meaning in the context of sex trafficking.

As these examples suggest, emoji raise interpretive problems that should also be of great interest to legal theorists. They are like text, but not quite text, and thus they unsettle assumptions about text. For example, we are accustomed to thinking that glyph variations are irrelevant to meaning. Surely, it should not affect the interpretation of the Constitution that we now write “Congress” with a Latin Small Letter S instead of “Congreſs” with a Latin Small Letter Long S. A contract does not mean one thing in Times New Roman and another in Baskerville. And yet platform-specific glyph variations in emoji can make a real difference in meaning, as when Apple changed its glyph for the Pistol emoji from a realistic firearm to a bright green squirt gun. Indeed, platforms switched to cartoony water pistols not in spite of but because of the shift in meaning. Semantic fixation depends on syntactic fixation. The point is not just that emoji function differently than English text in plain old Latin script (which they do), but that they point out how even a concept as simple as “Latin script” contains multitudes.

More generally, Goldman’s thoughtful discussion of emoji interpretation is a useful example of legal interpretation in a setting of obvious and inescapable ignorance. The very unfamiliarity of emoji means that the interpretive challenges are front and center—and thus they help us see more clearly the challenges that have been with us all along. All of the familiar interpretive sources are available to judges interpreting emoji: personal testimony from the parties, expert testimony about emoji usage, surveys, dictionaries of varied and controversial provenance and quality, even corpus linguistics. But in a context where no meanings are plain because all meanings are new, emoji invite us to come at the problem of legal interpretation with true beginner’s mind.

Cite as: James Grimmelmann, The Letter (and Emoji) of the Law, JOTWELL (April 24, 2020) (reviewing Eric Goldman, Emojis and the Law, 93 Wash. L. Rev. 1227 (2018)),

Moderation’s Excess

Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN.

In 2012, Twitter executive Tony Wang proudly described his company as “the free-speech wing of the free-speech party.”1 Seven years later, The New Yorker’s Andrew Marantz declaimed in an op-ed for The New York Times that “free speech is killing us.”2 The intervening years saw a tidal shift in public attitudes toward Twitter and the world’s other major social media services—most notably Facebook, YouTube, and Instagram. These global platforms, which were once widely celebrated for democratizing mass communication and giving voice to the voiceless, are now widely derided as cesspools of disinformation, hate speech, and harassment. How did we get to this moment in the Internet’s history? In Automation in Moderation, Hannah Bloch-Wehba chronicles the important social, technological, and regulatory developments that have brought us here. She surveys in careful detail both how algorithms have come to be the arbiters of acceptable online speech and what we are losing in the apparently unstoppable transition from manual-reactive to automated-proactive speech regulation.

Globally, policy makers are enacting waves of new legislation requiring platform operators to scrub and sanitize their virtual premises. Regulatory regimes that once protected tech companies from liability for their users’ unlawful speech are being dramatically reconfigured, creating strong incentives for platforms not only to remove offensive and illegal speech after it has been posted but also to prevent it from ever appearing in the first place. To proactively manage bad speech, platforms are increasingly turning to algorithmic moderation. In place of intermediary liability, scholars of Internet law and policy now speak of intermediary accountability and responsibility.

Bloch-Wehba argues that automation in moderation has three major consequences: First, user speech and privacy are compromised due to the nature and limits of existing filtering technology. Second, new regulatory mandates conflict in unacknowledged and unresolved ways with longstanding intermediary safe harbors, creating a fragmented legal landscape in which the power to control speech is shifting (in ways that should worry us) to state actors. Third, new regulatory mandates for platforms risk entrenching rather than checking the power of mega-platforms, because regulatory mandates to deploy and maintain sophisticated filtering systems fall harder on small platforms and new entrants than on tech giants like Facebook and YouTube.

To moderate the harmful effects of auto-moderation, Bloch-Wehba proposes enhanced transparency obligations for platforms. Transparency reports began as a voluntary effort for platforms to inform users about demands for surveillance and censorship and have since been incorporated into regulatory reporting obligations in some jurisdictions. Bloch-Wehba would like to see platforms provide more information to the public about how, when, and why they deploy proactive technical measures to screen uploaded content. In addition, she calls for disaggregated and more granular reporting about material that is blocked, and she suggests mandatory audits of algorithms to make their methods of operation visible.

Transparency alone is not enough, however. Bloch-Wehba argues that greater emphasis must be placed on delivering due process for speakers whose content is negatively impacted by auto-moderation decisions. She considers existing private appeal mechanisms, including Facebook’s much-publicized “Supreme Court,” and cautions against our taking comfort in mere “simulacr[a] of due process, unregulated by law and constitution and unaccountable to the democratic process.”

An aspect of Bloch-Wehba’s article that deserves special attention given the global resurgence of authoritarian nationalism is her treatment of the convergence of corporate and state power in the domain of automated content moderation. Building on the work of First Amendment scholars including Jack Balkin, Kate Klonick, Danielle Citron, and Daphne Keller, Bloch-Wehba describes a troubling dynamic in which platform executives seek to appease government actors—and thereby to avoid additional regulation—by suppressing speech in accordance with the prevailing political winds. As Bloch-Wehba recognizes, this is a confluence of interests that bodes ill for expressive freedom in the world’s increasingly beleaguered democracies.

Automation in Moderation has much to offer for dyed-in-the-wool Internet policy wonks and interested bystanders alike. It’s a deep and rewarding dive into the most difficult free speech challenge of our time, offered to us at a moment when public discourse is polarized and the pendulum of public opinion swings wide in the direction of casual censorship.

  1. Josh Halliday, Twitter’s Tony Wang: “We are the free speech wing of the free speech party,” Guardian, Mar. 22, 2012.
  2. Andrew Marantz, Free Speech Is Killing Us, NY Times, Oct. 4, 2019.
Cite as: Annemarie Bridy, Moderation’s Excess, JOTWELL (March 27, 2020) (reviewing Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN),