The Journal of Things We Like (Lots)

Invisible Holes in History

Which Western institutions aid and abet Chinese censorship? Major Internet companies probably come immediately to mind. In Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, Glenn Tiffert highlights an unexpected set of additional accomplices: scholarly archival platforms.

Tiffert shows that digitization makes it possible for censorship to disappear into the apparently limitless, but silently curated, torrents of information now available—adding a valuable example to Zeynep Tufekci’s catalog of ways that information is distorted online. He explains how “the crude artisanal and industrial forms of publication and censorship familiar to us from centuries past” may shortly give way to “an individuated, dynamic model of information control powered by adaptive algorithms that operate in ways even their creators struggle to understand.”

In 2017, Cambridge University Press “quietly removed 315 articles and book reviews from the online edition of the respected British academic journal The China Quarterly, without consulting the journal’s editors or the affected authors,” making them inaccessible to subscribers in China. While the press ultimately reversed itself, “Springer Nature, which bills itself as the largest academic publisher in the world, capitulated to Chinese requests, effectively arguing that its censorship of over 1,000 of its own publications was a cost of doing business.”

It is possible to alter the archive in even less visible and more global ways. Punishing resource constraints and a turn to digitization have led many libraries to deemphasize physical collections. Unlike the difficult maneuvers required to rewrite history in Orwell’s 1984, the centralization of digital collections makes it relatively simple to fine-tune censorship so that the record reflects whatever past is most useful to the present. Tiffert analyzes how Chinese censors removed most of one side of a debate in “the two dominant academic law journals published in the PRC during the 1950s,” whose print editions “document the construction of China’s post-1949 socialist legal system and the often savage debates that seized it.” These law journals are particularly useful targets for censorship because there are few complete print runs outside the PRC, and the print volumes are fragile and often stored off-site, so digital versions are the only way most people can encounter them. (It is striking that the PRC devoted resources to this obscure corner of legal history, rather than simply trying to shape contemporary accounts of that history.)

These selective edits to the online editions “materially distort the historical record but are invisible to the end user,” potentially deceiving good-faith researchers. Tiffert explains that the original issues from 1956 through 1958 “chronicle how budding debates over matters such as judicial independence, the transcendence of law over politics and class, the presumption of innocence, and the heritability of law abruptly gave way to vituperative denunciations of those ideas and their sympathizers.” The online databases, however, have removed 63 articles, constituting more than 8% of the articles and 11% of the total page count during this critical three-year period.

The missing articles are often lead articles—that is, articles the editors presumably thought were especially important. The deletions are often invisible. The online tables of contents show no omissions, and while one of the two authorized platforms on which the censored versions appear would allow counting of page numbers to reveal omitted sequences, the other simply omits page numbers. Tiffert argues that the suppressed authors “promoted values associated with the rule of law and greater separation between party and state,” making it embarrassing for the PRC to preserve “the record of their arguments and the persecutions they endured,” given the unitary version of Chinese history the government prefers.

Tiffert focuses on two publications, but points out that People’s Judicature (the official publication of the courts) and a leading social science journal are missing entire issues. And censorship of more current topics is even more pervasive, including the disappearance of President Xi Jinping’s 2001 doctoral dissertation from databases. A user who searches the online archives of the official party newspaper for sensitive terms that appeared in print can lose access, or get different results “depending on whether the vendors supplying access to the archive host their servers in China or outside of it.” As Tiffert shows by developing his own algorithm, which does a pretty good job of targeting the disfavored articles (he reports a 95% success rate), much of this censorship can be automated.
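To make that last claim concrete, here is a deliberately crude, purely hypothetical sketch of how such flagging might be automated. It is not Tiffert’s method, which the review does not describe; the sensitive terms (drawn from the debates mentioned above), the threshold, and the sample records are invented for illustration only.

    # Hypothetical illustration only: a naive keyword screen over article metadata.
    # The terms, threshold, and records below are invented; Tiffert's actual
    # classifier is not described in the review.
    SENSITIVE_TERMS = {"judicial independence", "presumption of innocence", "rule of law"}

    def flag_article(title: str, abstract: str, threshold: int = 1) -> bool:
        """Flag an article whose title or abstract mentions enough sensitive terms."""
        text = f"{title} {abstract}".lower()
        hits = sum(term in text for term in SENSITIVE_TERMS)
        return hits >= threshold

    catalog = [
        {"title": "On the Presumption of Innocence", "abstract": "A defense of judicial independence."},
        {"title": "Agricultural Statistics for 1957", "abstract": "Crop yields by province."},
    ]
    flagged = [a["title"] for a in catalog if flag_article(a["title"], a["abstract"])]
    print(flagged)  # ['On the Presumption of Innocence']

Even a screen this naive suggests how a censor with access to full text and metadata could work at scale; Tiffert’s far more careful approach reportedly reached a 95% success rate.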

Copyright law shows up as an additional problem. The U.S. restoration of copyright in foreign works prolongs copyright for 95 years from publication, allowing the Chinese government to assert exclusive U.S. rights in the journals for decades to come (either by claiming copyright ownership directly or pressuring whatever Chinese entity claims copyright to enforce its rights—it is not clear who the owners are under Chinese law, though obviously the current commercial database providers are confident that they have permission from the owners). Though Tiffert notes the §108 limitation for libraries allowing them to make limited copies in the last 20 years of the extended term, he unfortunately does not discuss the strong case for fair use for any article censored by the Chinese government. Today’s fair use jurisprudence provides (1) clear protection for creating a database of all articles, including censored ones, and providing relevant snippets in response to user search, and (2) strong reason to think that providing full access to censored articles would be fair. But it is not surprising that fear, uncertainty and doubt surrounding copyright would deter scholarly archives that might otherwise be willing to preserve and protect this history, especially if they are associated with colleges or universities hoping for a lucrative flow of students from China.

Fair use could be an important addition to Tiffert’s recommendations, including “[d]emanding that providers make unredacted collections available on alternate servers beyond the reach of interested censors.” He also suggests “industry-wide best practices to uphold the integrity of our digital collections,” which would include “transparently disclos[ing] omissions and modifications.” But his larger appeal is ethical: principles that would prevent institutions in democratic societies from accepting this kind of censorship of the past.

Cite as: Rebecca Tushnet, Invisible Holes in History, JOTWELL (October 1, 2020) (reviewing Glenn D. Tiffert, Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, 124 Am. Hist. Rev. 550 (2019), available in draft at The Washington Post), https://cyber.jotwell.com/invisible-holes-in-history/.

Code is More Than and Less Than Law

Laurence Diver, Digisprudence: the design of legitimate code, 13 Law, Innovation & Technology __ (forthcoming, 2020), available at LawArXiv.

We often say that code is law, but what kind of law is it? Laurence Diver’s new article, Digisprudence: the design of legitimate code, introduces his ‘digisprudence’ theory, associating himself with the welcome emphasis upon design that is seen in particular in current work on privacy (e.g. Woodrow Hartzog’s Privacy’s Blueprint) and in Ian Kerr’s attention to the power of defaults, and doing so in light of a rich body of scholarship, from well beyond technology law, on law and legitimacy.

Code is not law, Diver says, with tongue slightly in cheek. It is more than law, constituting and regulating at the same time, rather than needing interpretation by addressees as law does. Yet it is also less than law, in the absence of, for instance, the possibility of disobedience. Drawing from ideas in the jurisprudential canon, including the morality of law and the more recent ‘legisprudence’ ideas of Luc Wintgens (on core principles for limiting subjective notions of freedom), Diver asks us to think of how ‘constitutional’ ideas such as legitimacy ought to be embedded in the software ‘legislature’, i.e. the contexts and environments for, and methodologies of, the production of software. He is rightly adamant that we must focus on production, arguing that code must be legitimate from the outset rather than often futilely retrofitted once it is in the wild.

This article summarises the findings of Diver’s doctoral research at the University of Edinburgh, and points to themes of his current work at COHUBICOL (Counting as a Human Being in the Era of Computational Law). (Indeed, digisprudence as a theory is clearly influenced by Edinburgh legal theorists past and present, including Neil MacCormick, Zenon Bankowski, and Diver’s doctoral supervisor Burkhard Schafer). From this work, Diver identifies the centrality of explanation and legitimacy to the acceptability of legal orders, drawing a firm distinction between law and legalism. He finds that code-as-law suffers from the worst excesses of legalism—narrow governance rather than principles, an inability to view and contest decisionmaking—and is, by its nature, resistant to the countervailing forces, such as requirements for certainty, or constraints upon sovereign power, that make law acceptable. (For a related argument, emphasizing the resulting need for new countermovements, see the Jotwell commentary on Julie Cohen’s book Between Truth and Power by Mireille Hildebrandt, who leads the COHUBICOL project.)

This article is full of thoughtful insights, which support the development of the theory of digisprudence, and are also capable of application on their own terms. I highlight two of them here. First, the affordances of software, a science and technology studies concept, increasingly discussed in writing on law and technology, that focuses on how design shapes use and behaviour, are discussed alongside the less familiar concept of disaffordances, or the restrictions imposed upon users. Brilliantly, Diver takes note of Lessig’s idea of ‘architectures of control’ but then draws our attention to choices made by designers to embed such disaffordances in objects and systems, engaging with work including that of Dan Lockton (founder of the Imaginaries Lab) and Peter-Paul Verbeek (co-director of the Design Lab in Twente). Second, Diver makes the powerful point that we should not be led by whether code authors position themselves as regulators, or as having the authority to regulate—instead, we should look at what the code does and how it affects users. This is particularly important in a world where much of the production happens in the private sector and without some of the more obvious public law mechanisms of accountability and oversight.

In what is largely a conceptual article, Diver nonetheless applies emerging arguments to current circumstances. He chooses blockchain applications for this purpose, though his approach is less about how blockchain disrupts “insert legal area of choice” and more about how the desire for smart contracts and the like challenges how we think about rules. Tellingly, Diver mentions DRM at the outset of the section on blockchain; as with critiques of DRM, Diver asks the reader to reflect on the implications for governance and legitimacy of a widespread shift from more familiar legal approaches towards an apparently promising technological solution.

Digisprudence itself is explained in a table, where the characteristics of computational legalism are matched to Fullerian (morality of law) and legisprudential principles, resulting in a short and clear set of design-focused affordances, of which contestability is the core, because it allows both individuals and institutions to be empowered. If these concepts are considered at the right stage in the process (i.e. at the time of design), a form of legitimacy, recognisable as constitutional in nature, is possible. Quite properly, Diver points to areas that are ripe for digisprudential analysis, including machine learning and robotics.

As a new and quite unusual academic year approaches in many parts of the world, there are also some great opportunities to use Diver’s digisprudence theory in teaching law and technology, even for revisiting earlier stages of technological development, such as the rise in influence of commercial social media platforms, or the debates, which now cross the decades, on regulating search. Though studying the way in which code regulates behaviour has rightly become an established feature of technology law, Diver’s contribution calls on us to look to the design process (and research on design) and to the limits of legalism, if we really want to understand and promote the legitimacy of such regulation.

Cite as: Daithí Mac Síthigh, Code is More Than and Less Than Law, JOTWELL (August 14, 2020) (reviewing Laurence Diver, Digisprudence: the design of legitimate code, 13 Law, Innovation & Technology __ (forthcoming, 2020), available at LawArXiv), https://cyber.jotwell.com/code-is-more-than-and-less-than-law/.

Countermovements to Reinstate Countervailing Powers

No, law does not necessarily lag behind technological development. No, smart technologies are not destined to lead the way to either freedom or surveillance. Determinisms of any kind are not what make Julie E. Cohen’s Between Truth and Power: The Legal Constructions of Informational Capitalism a great sensitizer to the mutual transformations that law, economy, power and technology effect.

Instead, the underlying thesis of the book is that to come to terms with the systemic harms of informational capitalism, we need to develop a keen eye for the precise way that legal rights, duties, immunities and powers are deployed and reconfigured to enable the move from a market to a platform economy —while also detecting the emergence of novel entitlements and disentitlements outside Hohfeld’s framework. Steering clear of both technological and economic determinism, Cohen argues that the instrumentalization of legal institutions by powerful economic actors requires new types of Polanyian countermovements, to address and redress outrageous accumulation of economic power.

In my own terms, Cohen asserts that Montesquieu’s countervailing powers require reinvention in the face of the radical reconfiguration of the political economic landscape wrought by the shift from neo-liberal economic markets to monopolistic multi-sided vertically integrated platform economies. This will require what political economist Karl Polanyi called ‘countermovements’ in his seminal 1944 work, The Great Transformation. Economic markets do not grow like grass (they are not ‘natural’) but are the result of legal entitlements and legal constraints. This implies that markets can be ‘made’ in different ways, thus creating different economic incentives and different outcomes (as to equality and freedom). It also implies that the hold of market fundamentalism on other contexts (politics, health, education) is not ‘given’ and can be pushed back. (See a similar but more condensed discussion in Jedediah Britton-Purdy et al., Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784 (2020).)

As the subtitle indicates, this work explains how law contributes to the construction of informational capitalism. The latter refers to a regime where ‘market actors use knowledge, culture, and networked information technologies as means of extracting and appropriating surplus value, including consumer surplus’ (P. 6). It is refreshing though disturbing to be guided through the motions by which some of law’s pathways have been instrumentalised to safeguard privileged private interests where public goods are at stake and both fairness and freedom are trampled upon. Such instrumentalization needs to be detailed, called out, and countered.

Cohen weaves a textured narrative with detailed attention to the developments that shaped and reshaped our legal institutions, which in turn shaped and reshaped the pathways of our political economy. Often, she describes opposing accounts of what is at stake, followed by new insights that can only be mined when looking awry – away from conventional oppositions that distract attention from underlying reconstructions. Let me give one example. Discussions of IP law often contrast incentives for individual creation with control over such creation, or reward of original invention with reward of capital investment and corporate risk taking. Cohen uncovers how such discourse remains within the confines of Chicago School economics, with its emphasis on atomistic methodological individualism, consent as a commodity (termed ‘consumer preference’), and a blind eye to power relationships. Instead of staying within the limits of this discourse, she tracks the legislative as well as judicial transformations that enabled the growth of patent portfolios meant to bolster bargaining positions rather than rewarding either individual creativity or innovative risk taking. In doing so, Cohen avoids the usual ideological trenches, keeping her eye on the ball: the erosion of traditional countervailing powers, which allows big players to work around, co-opt or redefine legal institutions that stand in the way of monopolistic control over newly emerging informational sources.

Instead of arguing for a return to liberal markets that supposedly ensured an ideal setting for liberal democracies, Cohen digs deeper into what Polanyi called the ‘double movement’ of 19th and 20th century capitalism. She traces the rise of liberal markets as part of the industrial revolution that was built on the commodification of land, labour and money (the first movement), explaining how the perverse implications of unbridled capital accumulation gave rise to ‘countermovements’ that resulted in market reforms and a strong state to protect against monopolistic power and inequity, thus instigating what in Europe we call social democracies (the second movement). Cohen then demonstrates how the influence of the Chicago School gave rise to a neo-liberal governmentality that makes the idea of an unfettered free market the default setting for pursuing both public and private interests, entangled with an ideology of managerialism. Co-opting the rise of new socio-technical infrastructures that afford rent seeking from the accumulation of (access to) knowledge and information, industrial capitalism has transmuted into informational capitalism, culminating in the platform economy. This, Cohen convincingly argues, requires a new agenda for institutional innovation (new countermovements) that cannot be taken for granted or derived from previous reforms.

As she ends her book, Cohen points to a ‘new window of opportunity that now stands open’, calling for the active engagement of lawyers willing to resist and reform the unprecedented economic power generated by newly shaped neoliberal playing fields. I would agree with Benkler in his 2018 Law and Political Economy blog posts on the ‘Political Economy of Technology’, in which he insists that we should not make the mistake of buying into the mainstream narrative that naturalises both economic markets and technological change, nor reduce the solution space to institutional rearrangement. Instead, we should actively collaborate to design and redesign the technological infrastructures that afford informational capitalism.

I believe that Cohen’s analysis of networked socio-technical infrastructures in her Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, Yale University Press (2012), together with the institutional investigations of Between Truth and Power, offer a way to both distinguish and combine institutional and technical redesign as part of the countermovement she calls for. An example would be the legal obligation imposed by the EU General Data Protection Regulation to implement data protection by design. This obligation requires those who deploy data-driven solutions to build protection into their computing systems at the level of their architecture, thus redressing potential power imbalances based on unlimited extraction of personal data at the technical level. Simultaneously, by making this a legal obligation instead of an ethical duty, such redress is institutionalised and becomes enforceable instead of depending upon the ethical inclinations of individual persons or companies.

For a lawyer dedicated to law and the rule of law, Cohen’s account of powerful actors successfully ‘playing’ legal institutions to serve private interests is painful reading. It reminds me that countervailing powers cannot be taken for granted and must be sustained and reinvented; they require new countermovements. This will take more than lawyers, because checks and balances will have to be built into the data- and code-driven architectures that form the backbone of our institutional environment. And those built-in affordances will determine the kind of informational capitalism we must live with.

Cite as: Mireille Hildebrandt, Countermovements to Reinstate Countervailing Powers, JOTWELL (July 17, 2020) (reviewing Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019)), https://cyber.jotwell.com/countermovements-to-reinstate-countervailing-powers/.

Old Frauds in New Fintech Bottles

Christopher Odinet, Consumer Bitcredit and Fintech Lending, 69 Ala. L. Rev. 781 (2018).

The COVID crisis has starkly revealed the thin line between middle-class status and destitution in the United States. As a Greater Depression looms, vital assistance from the federal government may soon expire. At that point, the unemployed may need to seek loans for necessities, ranging from rent to food to health care. Advocates for a “public option” in finance have pressed ideas like postal banking or “quantitative easing for the people,” to enable direct government provision of lending for those the market is not serving. They have met a wall of opposition, particularly from libertarian advocates of cyber finance. The tech solutionist alternative is simple: instead of direct government lending, let new financial technology (fintech) companies accumulate more data, and then they can precisely calibrate optimal loan amounts and interest rates. Algorithmic lending, cryptocurrency, and smart contracts all have a place in this vision.

Christopher Odinet’s important article Consumer Bitcredit and Fintech Lending challenges this conventional wisdom, demonstrating that some fintech business models rely on deeply predatory and unfair treatment of borrowers. Through both qualitative and quantitative analysis of over 500 complaints from a Consumer Financial Protection Bureau (CFPB) dataset, Odinet paints a grim picture of fintech malfeasance. Cyberlenders may be a route for financial inclusion for many—but they also pose risks that are poorly understood, and nearly impossible to protect against.

Odinet painstakingly documents and classifies actual consumer complaints, adding an invaluable empirical foundation to widespread worries about the potential for predatory financial inclusion by new entrants in the consumer lending space. I wish I had Odinet’s article when I testified before the Senate Banking Committee on fintech in 2017. Key senators and Trump Administration officials clearly wanted to accelerate deregulation; Odinet shows the importance of an enduring role for both federal and state regulators in this space.

Here are just a few of the narratives Odinet unearths in consumer complaints:

From a borrower trying to auto-pay a loan: “They are outrageous with regard to how many problems they create to prevent you from paying your monthly installment. Clearly, they are trying to get consumers to default, so they can jab you with excessive late (and other) fees.”

From a borrower who paid off her loan in full, only to continue being debited: They “debited my account for bill and grocery money that i [sic] needed to take care of my family.”

From a borrower surprised by a large “origination fee”: “The loan documentation was not available until the loan was funded and there is nothing in the documentation that indicates the origination fee that would be charged.”

From a borrower behind on payments: “This company calls every hour on the hour.”

From a borrower stuck with a high interest rate: “I was told, that after 1 yr. I was going to be able to lower my interested [sic] rate on [my] debt consolidation loan. But, it turns out, that I have to reapply & pay another lending club processing fee. The rate is ridiculously high compare [sic] to current rates. I only took this loan in desperation.”

Other entities appear to be harvesting sensitive financial information from loan applicants, then disappearing without actually funding loans.

Odinet complements these narratives with pie charts classifying complaints. He finds that “the largest number of complaints (over half) relate to how the loan was managed. The next highest category deals with taking out a loan.” His empirical analysis deftly visualizes government data in an accessible manner. It also has immense policy relevance. Emboldened by fintech utopianism, many regulators have loosened the reins for new firms. But this is a misguided approach, since the use of AI in fintech has just as many problems as traditional underwriting—if not more.

Odinet’s work also helped me suss out a paradox in fintech valuation. Investors have justified pouring money into this sector based on the prospect of ever-improving AI finding more profit opportunities than older statistical methods could. However, I’ve also been to presentations by experts on finance algorithms convincingly demonstrating that past repayment history is powerfully predictive of future conduct, and that additional “fringe” or “nontraditional” data adds little to the predictive calculus. So how are fintechs supposed to make above-market returns if their “secret sauce” in reality adds so little to their predictive capacities? As expertly interpreted by Odinet, the CFPB complaints database suggests a ready route to profitability: hiding good old-fashioned cheating, sharp business practices, and dark patterns behind a shiny veneer of futuristic AI. Here, Odinet follows in the footsteps of many scholars who have exposed deep problems in an allegedly new digital economy (including platform capitalism and initial coin offerings). All too often, a narrative of technological advance masks old, disfavored, and illegal practices.

Of course, there will always be rival narratives about the value and dangers of algorithmic lending and fintech platforms. They do extend credit to some individuals who would find no conventional alternatives. Odinet offers important data here that will be of use to both advocates and critics of fintech. He complements his expert and compelling empirical findings with accessible explanations of why they matter. He grounds recommendations for regulatory responses on the empirical findings in this article, focusing on the need for relevant agencies to better understand fintechs’ business models, to detect and deter discrimination, and to ensure more effective disclosures. This is important work that will help governments around the world develop data-informed approaches to the regulation of fintech.

Cite as: Frank Pasquale, Old Frauds in New Fintech Bottles, JOTWELL (June 16, 2020) (reviewing Christopher Odinet, Consumer Bitcredit and Fintech Lending, 69 Ala. L. Rev. 781 (2018)), https://cyber.jotwell.com/old-frauds-in-new-fintech-bottles/.

The Letter (and Emoji) of the Law

Eric Goldman, Emojis and the Law, 93 Wash. L. Rev. 1227 (2018).

Eric Goldman’s Emojis and the Law is 🔥🔥🔥. If you don’t know what that sentence means, then Goldman’s article is a perceptive early warning about a problem that will increasingly confront courts. Any time legal consequences turn on the content of a communication, there is a live evidentiary question about the meaning of the emoji it contains. Has a criminal defendant who uses 🔫 in an Instagram post threatened a witness? Has a prospective tenant who uses 🐿️ in a text message agreed to lease an apartment? To answer these questions, lawyers and judges must know what emoji are and how they work, and Goldman’s article is the beginning of wisdom.

Even if you did know that the Fire emoji means that Emojis and the Law is “hot” in the sense of Larry Solum‘s “Download it while it’s hot!” Goldman raises deeper questions. How did you learn this meaning? Is it reliably documented in a way that briefs and opinions can cite? What about the fact that the “same” emoji can look dramatically different on an iPhone and on a PC? In short, the interpretation of emoji is problematic in a way that ought to make legal theorists sit up and pay attention.

Goldman begins with an overview of emoji: how they are implemented on a technical level and how they are used socially. The short version of the technical story is that the Unicode Consortium standardizes the characters used on computers (e.g., A, ג, and Њ) and the way each character is encoded in bits (e.g., Latin Capital Letter A is encoded as the bits 01000001 in the widely used UTF-8 encoding). It has now added emoji to the characters it standardizes, giving us such familiar friends as Hundred Points Symbol (💯) and Face with Tears of Joy (😂). (Goldman also discusses “emoji” that are run by private companies and not standardized by the Consortium, such as Bitmoji and Memoji, which are their own kettle of worms.)

As Goldman astutely emphasizes, however, the “standardization” of emoji is quite limited. The Consortium defines an emoji’s name and encoding: “Fire Engine” is 11110000 10011111 10011010 10010010 in UTF-8. But it does not control how “Fire Engine” will appear on different platforms. Compare Apple’s realistic ant emoji with Microsoft’s “unsettling” “bee in disguise.” The sender of an emoji may have one image in mind; readers may see something else entirely. Nor does the Consortium control emoji semantics. It was Internet users who turned 🍆 and 🍑 into sexual innuendoes. Moreover, emoji “have the capacity to transcend existing language barriers and be understood by speakers of diverse languages” (P. 1289): emoji can accompany messages in English, Italian, Russian, Hebrew, and Hindi, or even serve as a common dialect for all of them.
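For readers who want to see exactly where that split falls, the short Python sketch below (not from Goldman’s article; it simply uses the standard library’s unicodedata module) prints the standardized name, code point, and UTF-8 bits for the two characters discussed above. The bytes are the same everywhere; the glyph a platform draws from those bytes is not.

    import unicodedata

    # The Consortium fixes a character's name and code point; UTF-8 fixes its bytes.
    # What it does not fix is the glyph a given platform draws for those bytes.
    for char in ["A", "\U0001F692"]:  # Latin Capital Letter A, Fire Engine
        name = unicodedata.name(char)
        code_point = f"U+{ord(char):04X}"
        utf8_bits = " ".join(f"{byte:08b}" for byte in char.encode("utf-8"))
        print(f"{name}: {code_point} -> {utf8_bits}")

    # Output:
    # LATIN CAPITAL LETTER A: U+0041 -> 01000001
    # FIRE ENGINE: U+1F692 -> 11110000 10011111 10011010 10010010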

This combination of technical fixity and social fluidity means that emoji pose difficult interpretive problems. This is hardly unique to law—see generally Gretchen McCulloch’s entertaining and informative popular book on Internet linguistics, Because Internet—but in law the problems arise with particular frequency and intensity. As Goldman has documented, judicial encounters with emoji are rapidly increasing. He found 101 cases that referred to “emoji” or “emoticon” in 2019.

Goldman offers useful advice for lawyers and judges. As a starting point, the variation in how emoji are displayed makes it important to show the actual emoji. “The rat emoji” is not specific enough; maybe it wasn’t the Rat emoji but the Mouse emoji instead. (In a labor case or a witness intimidation case, the difference could matter.) Nor is it enough for a judge to insert an emoji in the PDF version of the court’s opinion. The emoji as displayed on the court’s Windows PC might differ from the emoji as seen on the victim’s Samsung phone or as sent from the defendant’s iPhone. (And that’s to say nothing of the difficulties legal research services create when they fail to reproduce emoji in opinions.) Legal actors dealing with emoji need to be sensitive to these divergences when they try to establish who said what to whom.

Another practical point, which Goldman has developed in his blogging on emoji, is that courts must be careful to remember that the meaning of an emoji is negotiated among the communities that use them to communicate. Sometimes they carry metaphorical or symbolic meaning; sometimes their meanings are context-specific. In one case, a court relied on expert testimony to establish that 👑 has a specific and incriminating meaning in the context of sex trafficking.

As these examples suggest, emoji raise interpretive problems that should also be of great interest to legal theorists. They are like text, but not quite text, and thus they unsettle assumptions about text. For example, we are accustomed to thinking that glyph variations are irrelevant to meaning. Surely, it should not affect the interpretation of the Constitution that we now write “Congress” with a Latin Small Letter S instead of “Congreſs” with a Latin Small Letter Long S. A contract does not mean one thing in Times New Roman and another in Baskerville. And yet platform-specific glyph variations in emoji can make a real difference in meaning, as when Apple changed its glyph for the Pistol emoji from a realistic firearm to a bright green squirt gun. Indeed, platforms switched to cartoony water pistols not in spite of but because of the shift in meaning. Semantic fixation depends on syntactic fixation. The point is not just that emoji function differently than English text in plain old Latin script (which they do), but that they point out how even a concept as simple as “Latin script” contains multitudes.

More generally, Goldman’s thoughtful discussion of emoji interpretation is a useful example of legal interpretation in a setting of obvious and inescapable ignorance. The very unfamiliarity of emoji means that the interpretive challenges are front and center—and thus they help us see more clearly the challenges that have been with us all along. All of the familiar interpretive sources are available to judges interpreting emoji: personal testimony from the parties, expert testimony about emoji usage, surveys, dictionaries of varied and controversial provenance and quality, even corpus linguistics. But in a context where no meanings are plain because all meanings are new, emoji invite us to come at the problem of legal interpretation with true beginner’s mind.

Cite as: James Grimmelmann, The Letter (and Emoji) of the Law, JOTWELL (April 24, 2020) (reviewing Eric Goldman, Emojis and the Law, 93 Wash. L. Rev. 1227 (2018)), https://cyber.jotwell.com/the-letter-emoji-of-the-law/.

Moderation’s Excess

Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN.

In 2012, Twitter executive Tony Wang proudly described his company as “the free-speech wing of the free-speech party.”1 Seven years later, The New Yorker’s Andrew Marantz declaimed in an op-ed for The New York Times that “free speech is killing us.”2 The intervening years saw a tidal shift in public attitudes toward Twitter and the world’s other major social media services—most notably Facebook, YouTube, and Instagram. These global platforms, which were once widely celebrated for democratizing mass communication and giving voice to the voiceless, are now widely derided as cesspools of disinformation, hate speech, and harassment. How did we get to this moment in the Internet’s history? In Automation in Moderation, Hannah Bloch-Wehba chronicles the important social, technological, and regulatory developments that have brought us here. She surveys in careful detail both how algorithms have come to be the arbiters of acceptable online speech and what we are losing in the apparently unstoppable transition from manual-reactive to automated-proactive speech regulation.

Globally, policy makers are enacting waves of new legislation requiring platform operators to scrub and sanitize their virtual premises. Regulatory regimes that once protected tech companies from liability for their users’ unlawful speech are being dramatically reconfigured, creating strong incentives for platforms to not only remove offensive and illegal speech after it has been posted but to prevent it from ever appearing in the first place. To proactively manage bad speech, platforms are increasingly turning to algorithmic moderation. In place of intermediary liability, scholars of Internet law and policy now speak of intermediary accountability and responsibility.

Bloch-Wehba argues that automation in moderation has three major consequences: First, user speech and privacy are compromised due to the nature and limits of existing filtering technology. Second, new regulatory mandates conflict in unacknowledged and unresolved ways with longstanding intermediary safe harbors, creating a fragmented legal landscape in which the power to control speech is shifting (in ways that should worry us) to state actors. Third, new regulatory mandates for platforms risk entrenching rather than checking the power of mega-platforms, because regulatory mandates to deploy and maintain sophisticated filtering systems fall harder on small platforms and new entrants than on tech giants like Facebook and YouTube.

To moderate the harmful effects of auto-moderation, Bloch-Wehba proposes enhanced transparency obligations for platforms. Transparency reports began as a voluntary effort for platforms to inform users about demands for surveillance and censorship and have since been incorporated into regulatory reporting obligations in some jurisdictions. Bloch-Wehba would like to see platforms provide more information to the public about how, when, and why they deploy proactive technical measures to screen uploaded content. In addition, she calls for disaggregated and more granular reporting about material that is blocked, and she suggests mandatory audits of algorithms to make their methods of operation visible.

Transparency alone is not enough, however. Bloch-Wehba argues that greater emphasis must be placed on delivering due process for speakers whose content is negatively impacted by auto-moderation decisions. She considers existing private appeal mechanisms, including Facebook’s much-publicized “Supreme Court,” and cautions against our taking comfort in mere “simulacr[a] of due process, unregulated by law and constitution and unaccountable to the democratic process.”

An aspect of Bloch-Wehba’s article that deserves special attention given the global resurgence of authoritarian nationalism is her treatment of the convergence of corporate and state power in the domain of automated content moderation. Building on the work of First Amendment scholars including Jack Balkin, Kate Klonick, Danielle Citron, and Daphne Keller, Bloch-Wehba describes a troubling dynamic in which platform executives seek to appease government actors—and thereby to avoid additional regulation—by suppressing speech in accordance with the prevailing political winds. As Bloch-Wehba recognizes, this is a confluence of interests that bodes ill for expressive freedom in the world’s increasingly beleaguered democracies.

Automation in Moderation has much to offer for dyed-in-the-wool Internet policy wonks and interested bystanders alike. It’s a deep and rewarding dive into the most difficult free speech challenge of our time, offered to us at a moment when public discourse is polarized and the pendulum of public opinion swings wide in the direction of casual censorship.

  1. Josh Halliday, Twitter’s Tony Wang: “We are the free speech wing of the free speech party,” Guardian, Mar. 22, 2012.
  2. Andrew Marantz, Free Speech Is Killing Us, NY Times, Oct. 4, 2019.
Cite as: Annemarie Bridy, Moderation’s Excess, JOTWELL (March 27, 2020) (reviewing Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN), https://cyber.jotwell.com/moderations-excess/.

The Cute Contracts Conundrum

David Hoffman, Relational Contracts of Adhesion, 85 Univ. of Chicago L. Rev. 1395 (2018).

When considering online contracts, three assumptions often come to mind. First, terms of service and other online agreements are purposefully written to be impossible to read. Second, lawyers at large law firms create these long documents by copying them verbatim from one client to another with minimal tweaking. But third, none of this really matters, as no one reads these contracts anyway.

David Hoffman’s recent paper Relational Contracts of Adhesion closely examines each of these assumptions. In doing so, Professor Hoffman provides at least two major contributions to the growing literature and research on online standard form contracts. First, he proves that these common assumptions are, in some cases, wrong. Second, he explains why these surprising outcomes are unfolding.

First, Hoffman demonstrates that some terms of service provided by popular websites are in fact written in ways that are easily read. Indeed, the sites are hoping that their users actually read the document they drafted. These terms are custom-drafted for each specific firm, and use “cute” language as part of an overall initiative to promote the site’s brand and develop the firm’s unique voice.

To reach this surprising conclusion, Hoffman examines the terms of (among others) Bumble, Tumblr, Kickstarter, Etsy and Airbnb. He finds them to be carefully drafted for readability. Some use humor; others provide users with important rights. Drafting unique, “cute”, and readable provisions is a costly and taxing task, both in terms of the actual time employees must put in and in terms of the additional liability these new provisions might generate for the firm because of their lenient language. Yet these terms have emerged.

What are these provisions and their drafters trying to achieve? In many cases, Hoffman argues, they do not strive to achieve the classical objectives of contractual language (namely, setting forth the rights and obligations of the contractual parties). Rather, they attempt to persuade the users reading these provisions (either before or after the contract’s formation) to act in a specific way. Hoffman refers to such contractual language as “precatory fine print.” The firms understand that these provisions will probably never end up being litigated, even though some of the rights the firms could be asserting in court would most likely be upheld.

Hoffman’s second main contribution relates to his attempt to explain why firms are now taking the time to incorporate cute and readable texts into documents no one was supposed to read anyway. To answer this question, Hoffman, who is a seasoned expert in the field of standard-form-contract law and theory, ventures outside of this field’s comfort zone. Here, he reaches out to several in-house lawyers after failing to come up with a reasonable theoretical explanation for the firms’ effort in drafting documents nobody will read or use.

The results of the survey of in-house lawyers are intriguing. They indicate that the drafters of the noted contractual provisions turn out to be insiders — the firms’ lawyers and general counsel, as well as other employees — as opposed to outside counsel. These employees explain that, in drafting, their objectives were to better reflect the firm’s ideology in the contractual language. In doing so, they were striving to build consumer trust and promote the firm’s brand. Rather than bury contractual provisions, they were interested in showcasing them. The survey respondents also indicated that the contracts were drafted with specific audiences in mind: not necessarily their users, but often journalists and regulators. Furthermore, some firms followed up and found that the initiative was indeed successful and that the messages reflected in the modified contract were effective in conveying positive signals about the firm, especially given favorable press coverage.

Hoffman’s study focuses on a diverse set of websites. This diversity makes it difficult to wave away his findings by arguing that they result from the specific circumstances involving the examined websites. Indeed, each one of the selected websites is unique (something Hoffman acknowledges). Some are struggling to enter a market dominated by a powerful incumbent, while others are catering to a specific set of users who might be more sensitive to abusive contractual language (such as merchants). The emerging pattern across diverse websites is impossible to ignore. However, the implications of this study will require additional research, as it is very difficult to predict (as Hoffman admits) which firms will offer friendly contractual language in the future.

One of this article’s strengths is its willingness to recognize its potential methodological shortcomings. Asking a handful of in-house lawyers why they drafted the contracts the way they did can lead to unrepresentative results. In addition, the fact that respondents tended to praise their own hard work and complain about the limited assistance they received from the external law firms is far from surprising. Towards the end of the paper, Hoffman provides candid responses to possible critiques regarding the paper’s methodology. He also acknowledges that the “cute” language adopted by firms might be a manipulative ploy to enhance trust without showing anything in return. Yet he finds the redrafting important and potentially helpful to consumers, given the fact that it requires firms to substantially reflect on their business practices. This process might lead firms to cease obnoxious forms of conduct many executives will feel uncomfortable with once they are fleshed out. Indeed, in the digital environment, mandatory self-reflection is a common strategy to promote the consumer’s objectives and can be found in the GDPR’s requirement to engage in impact assessments (Article 35).

Sometimes, the answers to difficult questions are simple. Contractual language is not always unfriendly to users (at least in form, if not in substance) because the actual humans working at the relevant firms feel bad about drafting draconian provisions. It is heart-warming to learn that occasionally, internal battles within tech firms regarding user protection end up settled in the users’ favor (for a famous example where that did not happen, see this WSJ report regarding Microsoft). One can only hope that as firms mature, gain market value, and lock in a substantial user segment, they will not have a change of heart and shift back to unreadable, mind-numbing standard forms and contracts.

Cite as: Tal Zarsky, The Cute Contracts Conundrum, JOTWELL (February 27, 2020) (reviewing David Hoffman, Relational Contracts of Adhesion, 85 Univ. of Chicago L. Rev. 1395 (2018)), https://cyber.jotwell.com/the-cute-contracts-conundrum/.

Oyez! Robot

Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).

How and why does it matter that humans do things that machines might do instead, more quickly, consistently, productively, or economically? When and where should we care that robots might take our jobs, and what, if anything, might we do about that?

It is the law’s turn, and the law’s time, to face these questions. Richard Re and Alicia Solow-Niederman offer an excellent, pragmatic overview and framework for thinking about artificial intelligence (AI) in the courtroom. What if the judge is a (ro)bot?

The general questions are far from novel, and there is no shortage of recent research. Facing the emergence of computable governance in the workplace and large swaths of social life, for the last twenty years legal scholars, historians, and researchers in science and technology studies have been exploring “algorithmic” decision-making in computer networks, social media platforms, the “gig” economy, and wage labor.

Yet the application of automation to the law feels different, disconcerting, and disruptive for added reasons that are not always easy to identify. Is it the central role that law and legal systems play in constructions of well-ordered modern society? Is it the combination of cognition and affect that define modern lawyering and judging? Is it the narrative of law as craft work that still underpins so much legal education and practice? Those questions form the conceptual underpinnings of Re and Solow-Niederman’s work. They are after a framework for organizing thoughts about the answers, rather than the answers themselves.

Organizing a framework means pragmatism rather than high theory. The problem to be addressed is “how can we distinguish the benefits from the harms associated with automated judging?” rather than “What defines the humanity of the law?” Re and Solow-Niederman address courtroom practice and judicial decision-making as their central example.

The article proceeds elegantly in a handful of steps.

First, Re and Solow-Niederman propose a reconfigured model of systems-level interactions between “law” and “technology,” shifting from law as a set of institutions that “responds” to technological innovation (a linear model, labelled “Rule Updating”) toward law as a set of institutions whose capacities co-evolve with technological innovation (a feedback-driven model, labelled “Value Updating”).

Within the Value Updating model, the article addresses adjudication, distinguishing between stylized “equitable justice” and stylized “codified justice.” The former is usually associated with individualized proceedings in which judges apply legal rules and standards within recognized discretionary boundaries. The latter is usually associated with the routinized application of standardized procedures to a set of facts. The justice achieved by a system of adjudication represents a blend of interests in making accurate decisions and making just decisions.

Re’s and Solow-Niederman’s concerns arise with the alignment and reinforcement of codified justice by algorithmic systems, the “artificially intelligent justice” of their title. They acknowledge that what they call codified justice is not new; they invoke precedents in the federal sentencing guidelines and matrices for administering disability benefits. Nor is codified justice, in its emerging AI-supported forms, temporary. Algorithmic judging supported by machine learning is here to stay, particularly in certain parts of criminal justice (for example, parole and sentencing determinations) and benefits administration, and its role is likely to expand.

Re and Solow-Niederman argue that the emergence of AI in adjudication may shift existing balances between equitable justice and codified justice in specific settings, in ways that key into macro shifts in the character of the law and justice. Their Value Updating model renders those shifts explicit. With AI-based adjudication, they argue that we may see more codified justice and less equitable justice. Why? Because, they note, motivations for adoption and application of AI to adjudication are tangible. Codified justice promises to be relatively cheap; equitable justice is relatively expensive. Firms are likely to promise and to persuade, rightly or wrongly, that AI may deliver better, faster, and cheaper decision-making at scale.

The article is careful to note that these shifts are not inevitable but that the risks and associated concerns are real. Perhaps the most fundamental of those concerns is that AI-supported changes to adjudication may shift “both the content of the law and the relationship between experts, laypersons, and the legal system in democratic society” (P. 262) in systematic ways. Decision-making and adjudicative outcomes may be incomprehensible to humans. Data-driven adjudication may limit the production or persuasiveness of certain types of system-level critiques of legal systems, and it may limit the extent to which rules themselves are permitted to evolve. Reducing the role of human judges may lead to system-level demoralization and disillusionment in society as a whole, leading to questions of legitimacy and trust not only with respect to adjudicative systems but regarding the very architecture of democracy. To paraphrase Re’s and Solow-Niederman’s summation: if robots resolve disputes, why should humans bother engaging with civil society, including fundamental concepts of justice and the identity and role of the state?

Re and Solow-Niederman conclude with their most important and most pragmatic contributions, describing a range of stylized responses to AI’s promise of “perfect enforcement of formal rules” (P. 278) that illuminate “a new appreciation of imperfect enforcement.” (Id.) Existing institutions and systems might be trusted to muddle through, at least for a while, experimenting with AI-based adjudication in various ways without committing decisively to any one approach. Alternatively, equitable adjudication could be “coded into” algorithmic adjudicators, at least in some contexts or with respect to some issues. A third approach would involve some systematic allocation of adjudicative roles to humans rather than machines, a division of labor approach. A final response would tackle the problems of privately developed robot judges by competing with them, via publicly supported or endorsed systems. If you can’t join them (or don’t want to), beat them, as it were. As with the article as a whole, this survey of options is inspired by broad conceptual topics, but its execution has an importantly pragmatic character.

Little of the material is fully novel. The work echoes themes raised several years ago by Ian Kerr and Carissima Mathen and extended more recently by Rebecca Crootof, among others. Its elegance lies in the coordination of prior thinking and writing in an unusually clear way. The framework can be applied generally to the roles that algorithms increasingly play in governance of many sorts, from urban planning to professional sports.

I’ll close with an illustration of that point, one that appears mostly, and briefly, in the footnotes. Consider soccer, or football, as it is known in much of the world. Re and Solow-Niederman acknowledge the utility of thinking about sports as a case study with respect to automation and adjudication. (P. 254 n. 37; P. 278 n. 121.) The following picks up on this and extends it, to show how their framework can be applied to help clarify thinking about a specific example. Other scholars have done similar work, notably Meg Jones and Karen Levy in Sporting Chances: Robot Referees and the Automation of Enforcement. But they did not include soccer in their group of cases, and automation in soccer refereeing has some distinctive attributes that may be particularly relevant here.

A few years ago, to improve refereeing in professional football matches, VAR systems (short for Video Assistant Referee) were introduced. During breaks in play, referees are permitted to look at recorded video of the game and consult with off-field officials who supervise video playback.

VAR has been controversial. It has been implemented so far in a “division of labor” sense, against a long history of experimentation with the rules of the game (or “laws,” as they are formally known). VAR data are generally determinative with respect to rule-based judgments, such as whether a goal has been scored. VAR data are employed differently with respect to possible penalty kicks and possible ejections. In both contexts, presumably because of the severity of the consequences (or, perhaps, despite them), VAR data are advisory. The human referee retains the discretion to make final determinations.

The relevance of VAR is not its technical details; the point is its systems impact. As Jones and Levy note, a mechanical element has been introduced in a game in which both play and adjudication have long been inescapably error-prone. Rightness and wrongness, even in a yes/no sense, are human and humane constructs, in soccer and in the law. VAR, like an AI judge, changes something about this human “essence” of playing and judging experiences.

But the VAR example illuminates something critical about Re and Solow-Niederman’s framework. Soccer referees not only adjudicate yes/no applications of the rules. Penalty kicks and player ejections do not follow only from administration of soccer’s laws in a “correct/incorrect” sense, with accuracy as the paramount value. In the long history and narrative of soccer, the referee’s discretion has always represented justice. Does a violent tackle warrant a penalty kick? Sometimes it does; sometimes it does not. Unlike referees’ decisions in other sports with machine-based officiating, critical judgments in soccer are based on “fairness” rather than only on “the rule,” where “fairness” is equated to a sense of earned outcomes, or “just deserts.” The soccer referee is a dispenser of what might be called “equitable justice” on the field. Enlisting VAR risks tilting this decision-making process toward what might be called “codified justice.”

Is this good for the game, or for the society that depends on it? It's too soon to say. Soccer, like all institutions, has never been unchanging. Soccer laws, soccer technologies, and soccer values are always at least a little bit in flux, and sometimes much more so. But the soccer example offers more than simply another way of understanding the challenges of AI and the law. Re and Solow-Niederman have given us a framework based in the law that helps us understand the challenges of automation and algorithms across additional critical domains of social life. Those challenges ask us to consider, again, what we mean by justice—not only in the law but also beyond it.

Cite as: Michael Madison, Oyez! Robot, JOTWELL (January 24, 2020) (reviewing Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019)), https://cyber.jotwell.com/oyez-robot/.

Trust, Decentralization, and the Blockchain

Kevin Werbach, The Blockchain and the New Architecture of Trust (2018).

The distributed ledger technology known as the blockchain has continued to gather interest in legal academia. With a growing number of books and academic papers exploring almost every aspect of the subject, it is always good to have a comprehensive book that not only covers the basics, but also provides new insights. Kevin Werbach’s The Blockchain and the New Architecture of Trust provides an in-depth yet easy-to-comprehend analysis of the most important aspects of blockchain technology. It also manages to convey astute analysis and some needed and sobering skepticism about the subject.

Werbach describes the characteristics of blockchains, providing a thorough and easy-to-understand introduction to key concepts and the reasons why the technology is thought to be a viable solution for various problems. A blockchain is a cryptographic distributed ledger that appends information in a supposedly immutable manner. An open record of all transactions is made public, communicating a "shared truth," that is, a single record that everyone trusts because it has been independently verified by a majority of the participants in the network. This technology is thought to be a solution for problems such as the lack of reliability in accounting records, and for double spending, in which the same unit of currency is used in two different transactions. While the first half of the work is likely to be the most useful for those who are not familiar with the underlying technical concepts, the book really makes its best contributions in its later chapters, where the author begins to criticize various aspects of the technology.
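Werbach's book contains no code, and nothing in this review depends on any, but the core data structure is simple enough to sketch. The following hypothetical Python fragment (my illustration, not the book's) shows only the append-only, hash-chained aspect of a ledger, which is what makes retroactive edits detectable; it omits everything that makes real blockchains distinctive, such as distributed consensus, mining, and digital signatures.

import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    transactions: list   # e.g. [{"from": "alice", "to": "bob", "amount": 5}]
    prev_hash: str        # hash of the preceding block

    def digest(self) -> str:
        # Hash a canonical serialization of the block's contents.
        payload = json.dumps(
            {"index": self.index, "tx": self.transactions, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self) -> None:
        self.chain = [Block(0, [], "0" * 64)]   # genesis block

    def append(self, transactions: list) -> None:
        prev = self.chain[-1]
        self.chain.append(Block(prev.index + 1, transactions, prev.digest()))

    def verify(self) -> bool:
        # Each block commits to its predecessor's hash, so altering any past
        # entry breaks the chain from that point forward.
        return all(
            self.chain[i].prev_hash == self.chain[i - 1].digest()
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.append([{"from": "alice", "to": "bob", "amount": 5}])
ledger.append([{"from": "bob", "to": "carol", "amount": 2}])
print(ledger.verify())   # True
ledger.chain[1].transactions[0]["amount"] = 500   # tamper with history
print(ledger.verify())   # False: the edit is detectable

In an actual blockchain the checking is done by many independent nodes rather than a single program, which is why Werbach can speak of a majority of participants verifying a shared truth.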

A core topic of the book is that of "a crisis of trust," and Werbach explains how the blockchain has been repeatedly offered as a possible solution to this problem. Everyday life is built on trust, and we are always engaged in rational calculations based on trust-related questions: Can I trust this person with my car keys? Can I trust my bank? Can a lender trust a borrower? We have built systems of risk assessment, of managing trust, and of redress when trust breaks down. Yet while our financial and legal systems rely on a good measure of trust, many see that trust as an added expense and an obstacle to a well-functioning economy. Werbach explains how, in some circles, the concept of trust has become "almost an obscenity": the need for trust is a vulnerability, a flaw to be remedied.

Enter the blockchain, a distributed ledger technology that is supposed to solve this perceived crisis of trust by being “trustless”—that is, doing away with the need for trust. The blockchain is proposed as a better solution than past architectures of trust, such as centralized “Leviathans,” third-party arbiters, or peer-to-peer systems. States or other large central authorities historically addressed the problem of trust through enforcement. Alternatively or additionally, intermediaries have acted as arbiters and providers of trust. In peer-to-peer systems, trust is built on an assumption of shared values and norms, and on principles of self-governance.

Werbach explains why the blockchain is characterized as better:

In any transaction, there are three elements that may be trusted: the counterparty, the intermediary, and the dispute resolution mechanism. The blockchain tries to replace all three with software code. People are represented through arbitrary digital keys, which eliminates the contextual factors that humans use to evaluate trustworthiness. (P. 29.)

Werbach identifies a problem, however: for a system that appears to rely so heavily on the concept of trustlessness, a great deal of trust in intermediaries is still required in the blockchain space. This observation bears out Nick Szabo's comment that "there is no such thing as a fully trustless institution or technology." Very few people have the technical know-how to become fully self-reliant in the use of blockchain technology, so most end up having to trust some form of intermediary. The book covers several disasters, such as the infamous QuadrigaCX fiasco, in which users lost millions to negligence or fraud. "The blockchain was developed in response to trust failures, but it can also cause failures." (P. 117.)

Werbach cites other examples in which intermediaries have exercised a large amount of control, such as the hack of the DAO (Decentralized Autonomous Organization). The DAO was a way to manage transactions on the Ethereum blockchain using smart contracts and a shared pool of funds. A hacker found a way to siphon funds out through a bug in the code. In a truly trustless, decentralized system the loss would simply have stood, but the Ethereum developers decided to fork the code, rewriting history, erasing the effects of the hack, and creating a new version of the blockchain on which it never occurred. This has been seen as evidence that trust is still involved—a trust in blockchain developers and intermediaries.

Werbach also observes that the blockchain has failed in practice to be a fully decentralized system. Satoshi Nakamoto, the pseudonymous inventor of the blockchain concept, dreamt of a system in which millions of individual miners would come together to verify transactions, making the blockchain a truly decentralized endeavor. What has happened instead is far more centralized, with mining concentrated in a few massive operations and pools.

Werbach makes some very good criticisms of another use of the blockchain, namely smart contracts. There are various reasons for distrusting smart contracts, but one of the most compelling offered in the book is that despite advances in machine learning, “computers do not have the degree of contextual, domain specific knowledge or subtle understanding required to resolve contractual ambiguity.” (P. 125.)

Finally, Werbach turns to several interesting discussions specific to particular platforms and looks at regulatory responses. I found the regulatory discussion interesting, but perhaps the least useful aspect of the book. Attempts to regulate the blockchain phenomenon move so fast that a few of the examples offered are already outdated, even though the book was published in 2018. As a European reader, I would also have liked a bit more coverage of international developments; while the author cites several cases from abroad, these are not dealt with in depth.

However, these are minor concerns. This is a thorough, informative, and highly readable book that should be the go-to reference for anyone interested in the subject of blockchain and the law.

Cite as: Andres Guadamuz, Trust, Decentralization, and the Blockchain, JOTWELL (December 12, 2019) (reviewing Kevin Werbach, The Blockchain and the New Architecture of Trust (2018)), https://cyber.jotwell.com/trust-decentralization-and-the-blockchain/.

Military Algorithms and the Virtues of Transparency

Ashley S. Deeks, Predicting Enemies, 104 Va. L. Rev. 1529 (2018).

For all the justifiable concern in recent years directed toward the prospect of autonomous weapons, other military uses of automation may be more imminent and more widespread. In Predicting Enemies, Ashley Deeks highlights how the U.S. military may deploy algorithms in armed conflicts to determine who should be detained and for how long, and who may be targeted. Part of the reason Deeks predicts these near-term uses of algorithms is that the military has models: algorithms and machine-learning applications currently used in the domestic criminal justice and policing contexts. The idea of such algorithms being employed as blueprints may cause heartburn. Their use domestically has prompted multiple lines of critique about, for example, biases in data and lack of transparency. Deeks recognizes those concerns and even intensifies them. She argues that concerns about the use of algorithms are exacerbated in the military context because of the “double black box”—“an ‘algorithmic black box’ inside what many in the public conceive of as the ‘operational black box’ of the military” (P. 1537)—that hampers oversight.

Predicting Enemies makes an important contribution by combining the identification of likely military uses of algorithms with trenchant critiques drawn from the same sphere as the algorithmic models themselves. Deeks is persuasive in her arguments about the problems associated with military deployment of algorithms, but she doesn’t rest there. She argues that the U.S. military should learn from the blowback it suffered after trying to maintain secrecy over post-9/11 operations, and instead pursue “strategic transparency” about its use of algorithms. (P. 1587.) Strategic transparency, as she envisions it, is an important and achievable step, though likely still insufficient to remedy all of the concerns with military deployment of algorithms.

Deeks highlights several kinds of algorithms used domestically and explains how they might parallel military applications. Domestic decision-makers use algorithms to assess risks individuals pose in order to determine, for example, whether to grant bail, impose a prison sentence, or allow release on parole. Even more controversially, police departments use algorithms to “identif[y] people who are most likely to be party to a violent incident” in the future (P. 1543, emphasis omitted), as well as to pinpoint geographic locations where crimes are likely to occur.

These functions have military counterparts. During armed conflicts, militaries often detain individuals and have to make periodic assessments about whether to continue to detain them based on whether they continue to pose a threat or are likely to return to the fight. Militaries, like police departments, also seek to allocate their resources efficiently. Algorithms that predict where enemy forces will attack or who is likely to do the attacking, especially in armed conflicts with non-state armed groups, would have obvious utility.

But, Deeks argues, problems with domestic use of algorithms are exacerbated in the military context. As compared with domestic police departments or judicial officials, militaries using algorithms early in a particular conflict are likely to have far less, and less granular, information about the population with which to train their algorithms. And algorithms trained for one conflict may not be transferable to different conflicts in different locations involving different populations, meaning that the same problems with lack of data would recur at the start of each new conflict. There's also the problem of applying algorithms "cross-culturally" in the military context, rather than "within a single society" as is the case when they are used domestically (P. 1565), and the related possibility of exacerbating biases embedded in the data. With bad or insufficient data come inaccurate algorithmic outcomes.

Deeks also worries about "automation bias": that military officials will be overly willing to trust algorithmic outcomes, and that they may be even more susceptible to this risk than judges, who are generally less tech-savvy. (Pp. 1574-75.) At the same time, she warns that a lack of transparency about how algorithms work could make military officials unwilling to trust algorithms when they should, that is, when the algorithms would actually improve decision-making and compliance with international law principles like distinction and proportionality. (Pp. 1568-71.)

These and other concerns lead Deeks to her prescription for “strategic transparency.” Deeks argues that the military should “fight its institutional instincts” (P. 1576) to hide behind classification and limited oversight from Congress and the public and instead deploy a lesson from the war on terror—that “there are advantages to be gained by publicly confronting the fact that new tools pose difficult challenges and tradeoffs, by giving reasons for their use, and by clarifying how the tools are used, by whom, and pursuant to what legal rules.” (P. 1583.) Specifically, Deeks argues that in pursuing transparency, the military should explain when and how it uses algorithms and machine learning, articulate how such tools comply with its international law obligations, and engage in a public discussion of costs and benefits of using algorithms. (Pp. 1588-89.) She also urges the military to “articulate[] how it will test the quality of its data, avoid training its algorithms on biased data, and train military users to avoid falling prey to undue automation biases.” (P. 1590.)

Deeks previously served as the Assistant Legal Adviser for Political-Military Affairs in the State Department’s Office of the Legal Adviser (where I had the pleasure of working with her), and so she has the experience of an internal advisor combined with the critical eye of an academic commentator. One hopes that the U.S. military—and others around the world—will heed her thoughtful advice about transparency in the use of algorithms. Transparency is not a panacea for problems of data availability, quality, and bias, but it may help with oversight and accountability. And that’s a good first step.

Cite as: Kristen Eichensehr, Military Algorithms and the Virtues of Transparency, JOTWELL (November 20, 2019) (reviewing Ashley S. Deeks, Predicting Enemies, 104 Va. L. Rev. 1529 (2018)), https://cyber.jotwell.com/military-algorithms-and-the-virtues-of-transparency/.