
Moderation’s Excess

Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN.

In 2012, Twitter executive Tony Wang proudly described his company as “the free-speech wing of the free-speech party.”1 Seven years later, The New Yorker’s Andrew Marantz declaimed in an op-ed for The New York Times that “free speech is killing us.”2 The intervening years saw a tidal shift in public attitudes toward Twitter and the world’s other major social media services—most notably Facebook, YouTube, and Instagram. These global platforms, which were once widely celebrated for democratizing mass communication and giving voice to the voiceless, are now widely derided as cesspools of disinformation, hate speech, and harassment. How did we get to this moment in the Internet’s history? In Automation in Moderation, Hannah Bloch-Wehba chronicles the important social, technological, and regulatory developments that have brought us here. She surveys in careful detail both how algorithms have come to be the arbiters of acceptable online speech and what we are losing in the apparently unstoppable transition from manual-reactive to automated-proactive speech regulation.

Globally, policy makers are enacting waves of new legislation requiring platform operators to scrub and sanitize their virtual premises. Regulatory regimes that once protected tech companies from liability for their users’ unlawful speech are being dramatically reconfigured, creating strong incentives for platforms not only to remove offensive and illegal speech after it has been posted but also to prevent it from ever appearing in the first place. To proactively manage bad speech, platforms are increasingly turning to algorithmic moderation. In place of intermediary liability, scholars of Internet law and policy now speak of intermediary accountability and responsibility.

Bloch-Wehba argues that automation in moderation has three major consequences: First, user speech and privacy are compromised due to the nature and limits of existing filtering technology. Second, new regulatory mandates conflict in unacknowledged and unresolved ways with longstanding intermediary safe harbors, creating a fragmented legal landscape in which the power to control speech is shifting (in ways that should worry us) to state actors. Third, new regulatory mandates for platforms risk entrenching rather than checking the power of mega-platforms, because regulatory mandates to deploy and maintain sophisticated filtering systems fall harder on small platforms and new entrants than on tech giants like Facebook and YouTube.

To moderate the harmful effects of auto-moderation, Bloch-Wehba proposes enhanced transparency obligations for platforms. Transparency reports began as a voluntary effort for platforms to inform users about demands for surveillance and censorship and have since been incorporated into regulatory reporting obligations in some jurisdictions. Bloch-Wehba would like to see platforms provide more information to the public about how, when, and why they deploy proactive technical measures to screen uploaded content. In addition, she calls for disaggregated and more granular reporting about material that is blocked, and she suggests mandatory audits of algorithms to make their methods of operation visible.

Transparency alone is not enough, however. Bloch-Wehba argues that greater emphasis must be placed on delivering due process for speakers whose content is negatively impacted by auto-moderation decisions. She considers existing private appeal mechanisms, including Facebook’s much-publicized “Supreme Court,” and cautions against our taking comfort in mere “simulacr[a] of due process, unregulated by law and constitution and unaccountable to the democratic process.”

An aspect of Bloch-Wehba’s article that deserves special attention given the global resurgence of authoritarian nationalism is her treatment of the convergence of corporate and state power in the domain of automated content moderation. Building on the work of First Amendment scholars including Jack Balkin, Kate Klonick, Danielle Citron, and Daphne Keller, Bloch-Wehba describes a troubling dynamic in which platform executives seek to appease government actors—and thereby to avoid additional regulation—by suppressing speech in accordance with the prevailing political winds. As Bloch-Wehba recognizes, this is a confluence of interests that bodes ill for expressive freedom in the world’s increasingly beleaguered democracies.

Automation in Moderation has much to offer dyed-in-the-wool Internet policy wonks and interested bystanders alike. It’s a deep and rewarding dive into the most difficult free speech challenge of our time, offered to us at a moment when public discourse is polarized and the pendulum of public opinion swings wide in the direction of casual censorship.

  1. Josh Halliday, Twitter’s Tony Wang: “We are the free speech wing of the free speech party,” Guardian, Mar. 22, 2012.
  2. Andrew Marantz, Free Speech Is Killing Us, NY Times, Oct. 4, 2019.
Cite as: Annemarie Bridy, Moderation’s Excess, JOTWELL (March 27, 2020) (reviewing Hannah Bloch-Wehba, Automation in Moderation, Cornell Int'l L. J. (forthcoming), available at SSRN), https://cyber.jotwell.com/moderations-excess/.

The Cute Contracts Conundrum

David Hoffman, Relational Contracts of Adhesion, 85 Univ. of Chicago L. Rev. 1395 (2018).

When considering online contracts, three assumptions often come to mind. First, terms of service and other online agreements are purposefully written to be impossible to read. Second, lawyers at large law firms create these long documents by copying them verbatim from one client to another with minimal tweaking. But third, none of this really matters, as no one reads these contracts anyway.

David Hoffman’s recent paper Relational Contracts of Adhesion closely examines each of these assumptions. In doing so, Professor Hoffman provides at least two major contributions to the growing literature and research on online standard form contracts. First, he proves that these common assumptions are, in some cases, wrong. Second, he explains why these surprising outcomes are unfolding.

First, Hoffman demonstrates that some terms of service provided by popular websites are in fact written in ways that are easily read. Indeed, the sites are hoping that their users actually read the document they drafted. These terms are custom-drafted for each specific firm, and use “cute” language as part of an overall initiative to promote the site’s brand and develop the firm’s unique voice.

To reach this surprising conclusion, Hoffman examines the terms of (among others) Bumble, Tumblr, Kickstarter, Etsy, and Airbnb. He finds them to be carefully drafted for readability. Some use humor; others provide users with important rights. Drafting unique, “cute,” and readable provisions is a costly and taxing task, both in terms of the actual time employees must put in and in terms of the additional liability these new provisions might generate for the firm because of their lenient language. Yet these terms have emerged.

What are these provisions and their drafters trying to achieve? In many cases, Hoffman argues, they do not strive to achieve the classical objectives of contractual language (namely, setting forth the rights and obligations of the contractual parties). Rather, they attempt to persuade the users reading these provisions (either before or after the contract’s formation) to act in a specific way. Hoffman refers to such contractual language as “precatory fine print.” The firms understand that these provisions will probably never end up being litigated, even though some of the rights the firms could be asserting in court would most likely be upheld.

Hoffman’s second main contribution relates to his attempt to explain why firms are now taking the time to incorporate cute and readable texts into documents no one was supposed to read anyway. To answer this question, Hoffman, who is a seasoned expert in the field of standard-form-contract law and theory, ventures outside of this field’s comfort zone. Here, he reaches out to several in-house lawyers after failing to come up with a reasonable theoretical explanation for the firms’ effort in drafting documents nobody will read or use.

The results of the survey of in-house lawyers are intriguing. They indicate that the drafters of the noted contractual provisions turn out to be insiders — the firms’ lawyers and general counsel, as well as other employees — as opposed to outside counsel. These employees explain that, in drafting, their objectives were to better reflect the firm’s ideology in the contractual language. In doing so, they were striving to build consumer trust and promote the firm’s brand. Rather than bury contractual provisions, they were interested in showcasing them. The survey respondents also indicated that the contracts were drafted with specific audiences in mind: not necessarily their users, but often journalists and regulators. Furthermore, some firms followed up and found that the initiative was indeed successful: the messages reflected in the modified contract conveyed positive signals about the firm, as evidenced by favorable press coverage.

Hoffman’s study focuses on a diverse set of websites. This diversity makes it difficult to wave away his findings by arguing that they result from the specific circumstances involving the examined websites. Indeed, each one of the selected websites is unique (something Hoffman acknowledges). Some are struggling to enter a market dominated by a powerful incumbent, while others are catering to a specific set of users who might be more sensitive to abusive contractual language (such as merchants). The emerging pattern across diverse websites is impossible to ignore. However, the implications of this study will require additional research, as it is very difficult to further predict (as Hoffman admits) which firms will offer friendly contractual language in the future.

One of this article’s strengths is its willingness to recognize its potential methodological shortcomings. Asking a handful of in-house lawyers why they drafted the contracts the way they did can lead to unrepresentative results. In addition, the fact that respondents tended to praise their own hard work and complain about the limited assistance they received from the external law firms is far from surprising. Towards the end of the paper, Hoffman provides candid responses to possible critiques regarding the paper’s methodology. He also acknowledges that the “cute” language adopted by firms might be a manipulative ploy to enhance trust without offering anything in return. Yet he finds the redrafting important and potentially helpful to consumers, because it requires firms to reflect substantially on their business practices. This process might lead firms to cease obnoxious forms of conduct that many executives will feel uncomfortable with once those practices are spelled out. Indeed, in the digital environment, mandatory self-reflection is a common strategy for promoting consumers’ interests; it can be found in the GDPR’s requirement to conduct impact assessments (Article 35).

Sometimes, the answers to difficult questions are simple. Contractual language is not always unfriendly to users (at least in form, if not in substance) because the actual humans working at the relevant firms feel bad about drafting draconian provisions. It is heart-warming to learn that occasionally, internal battles within tech firms regarding user protection end up settled in the users’ favor (for a famous example where that did not happen, see this WSJ report regarding Microsoft). One can only hope that as firms mature, gain market value, and lock in a substantial user segment, they will not have a change of heart and shift back to unreadable, mind-numbing standard forms and contracts.

Cite as: Tal Zarsky, The Cute Contracts Conundrum, JOTWELL (February 27, 2020) (reviewing David Hoffman, Relational Contracts of Adhesion, 85 Univ. of Chicago L. Rev. 1395 (2018)), https://cyber.jotwell.com/the-cute-contracts-conundrum/.

Oyez! Robot

Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).

How and why does it matter that humans do things that machines might do instead, more quickly, consistently, productively, or economically? When and where should we care that robots might take our jobs, and what, if anything, might we do about that?

It is the law’s turn, and the law’s time, to face these questions. Richard Re and Alicia Solow-Niederman offer an excellent, pragmatic overview and framework for thinking about artificial intelligence (AI) in the courtroom. What if the judge is a (ro)bot?

The general questions are far from novel, and there is no shortage of recent research. Facing the emergence of computable governance in the workplace and large swaths of social life, for the last twenty years legal scholars, historians, and researchers in science and technology studies have been exploring “algorithmic” decision-making in computer networks, social media platforms, the “gig” economy, and wage labor.

Yet the application of automation to the law feels different, disconcerting, and disruptive for added reasons that are not always easy to identify. Is it the central role that law and legal systems play in constructions of well-ordered modern society? Is it the combination of cognition and affect that define modern lawyering and judging? Is it the narrative of law as craft work that still underpins so much legal education and practice? Those questions form the conceptual underpinnings of Re and Solow-Niederman’s work. They are after a framework for organizing thoughts about the answers, rather than the answers themselves.

Organizing a framework means pragmatism rather than high theory. The problem to be addressed is “how can we distinguish the benefits from the harms associated with automated judging?” rather than “What defines the humanity of the law?” Re and Solow-Niederman address courtroom practice and judicial decision-making as their central example.

The article proceeds elegantly in a handful of steps.

First, Re and Solow-Niederman propose a reconfigured model of systems-level interactions between “law” and “technology,” shifting from the law as a set of institutions that “responds” to technological innovation (a linear model, labelled “Rule Updating”) and toward law as a set of institutions whose capacities co-evolve with technological innovation (a feedback-driven model, labelled “Value Updating”).

Within the Value Updating model, the article addresses adjudication, distinguishing between stylized “equitable justice” and stylized “codified justice.” The former is usually associated with individualized proceedings in which judges apply legal rules and standards within recognized discretionary boundaries. The latter is usually associated with the routinized application of standardized procedures to a set of facts. The justice achieved by a system of adjudication represents a blend of interests in making accurate decisions and making just decisions.

Re and Solow-Niederman’s concerns arise with the alignment and reinforcement of codified justice by algorithmic systems, the “artificially intelligent justice” of their title. They acknowledge that what they call codified justice is not new; they invoke precedents in the federal sentencing guidelines and matrices for administering disability benefits. Nor is codified justice, in its emerging AI-supported forms, temporary. Algorithmic judging supported by machine learning is here to stay, particularly in certain parts of criminal justice (for example, parole and sentencing determinations) and benefits administration, and its role is likely to expand.

Re and Solow-Niederman argue that the emergence of AI in adjudication may shift existing balances between equitable justice and codified justice in specific settings, in ways that key into macro shifts in the character of the law and justice. Their Value Updating model renders those shifts explicit. With AI-based adjudication, they argue that we may see more codified justice and less equitable justice. Why? Because, they note, motivations for adoption and application of AI to adjudication are tangible. Codified justice promises to be relatively cheap; equitable justice is relatively expensive. Firms are likely to promise and to persuade, rightly or wrongly, that AI may deliver better, faster, and cheaper decision-making at scale.

The article is careful to note that these shifts are not inevitable but that the risks and associated concerns are real. Perhaps the most fundamental of those concerns is that AI-supported changes to adjudication may shift “both the content of the law and the relationship between experts, laypersons, and the legal system in democratic society” (P. 262) in systematic ways. Decision-making and adjudicative outcomes may be incomprehensible to humans. Data-driven adjudication may limit the production or persuasiveness of certain types of system-level critiques of legal systems, and it may limit the extent to which rules themselves are permitted to evolve. Reducing the role of human judges may lead to system-level demoralization and disillusionment in society as a whole, leading to questions of legitimacy and trust not only with respect to adjudicative systems but regarding the very architecture of democracy. To paraphrase Re and Solow-Niederman’s summation: if robots resolve disputes, why should humans bother engaging with civil society, including fundamental concepts of justice and the identity and role of the state?

Re and Solow-Niederman conclude with their most important and most pragmatic contributions, describing a range of stylized responses to AI’s promise of “perfect enforcement of formal rules” (P. 278) that illuminate “a new appreciation of imperfect enforcement.” (Id.) Existing institutions and systems might be trusted to muddle through, at least for a while, experimenting with AI-based adjudication in various ways without committing decisively to any one approach. Alternatively, equitable adjudication could be “coded into” algorithmic adjudicators, at least in some contexts or with respect to some issues. A third approach would involve some systematic allocation of adjudicative roles to humans rather than machines, a division of labor approach. A final response would tackle the problems of privately developed robot judges by competing with them, via publicly supported or endorsed systems. If you can’t join them (or don’t want to), beat them, as it were. As with the article as a whole, this survey of options is inspired by broad conceptual topics, but its execution has an importantly pragmatic character.

Little of the material is fully novel. The work echoes themes raised several years ago by Ian Kerr and Carissima Mathen and extended more recently by Rebecca Crootof, among others. Its elegance lies in the coordination of prior thinking and writing in an unusually clear way. The framework can be applied generally to the roles that algorithms increasingly play in governance of many sorts, from urban planning to professional sports.

I’ll close with an illustration of that point, one that appears mostly, and briefly, in the footnotes. Consider soccer, or football, as it is known in much of the world. Re and Solow-Niederman acknowledge the utility of thinking about sports as a case study with respect to automation and adjudication. (P. 254 n. 37; P. 278 n. 121.) The following picks up on this and extends it, to show how their framework can be applied to help clarify thinking about a specific example. Other scholars have done similar work, notably Meg Jones and Karen Levy in Sporting Chances: Robot Referees and the Automation of Enforcement. But they did not include soccer in their group of cases, and automation in soccer refereeing has some distinctive attributes that may be particularly relevant here.

A few years ago, to improve refereeing in professional football matches, VAR systems (short for Video Assistant Referee) were introduced. During breaks in play, referees are permitted to look at recorded video of the game and consult with off-field officials who supervise video playback.

VAR has been controversial. It has been implemented so far in a “division of labor” sense, against a long history of experimentation with the rules of the game (or “laws,” as they are formally known). VAR data are generally determinative with respect to rule-based judgments, such as whether a goal has been scored. VAR data are employed differently with respect to possible penalty kicks and possible ejections. In both contexts, presumably because of the severity of the consequences (or, perhaps, despite them), VAR data are advisory. The human referee retains the discretion to make final determinations.

The relevance of VAR is not its technical details; the point is its systems impact. As Jones and Levy note, a mechanical element has been introduced in a game in which both play and adjudication have long been inescapably error-prone. Rightness and wrongness, even in a yes/no sense, are human and humane constructs, in soccer and in the law. VAR, like an AI judge, changes something about this human “essence” of playing and judging experiences.

But the VAR example illuminates something critical about Re and Solow-Niederman’s framework. Soccer referees not only adjudicate yes/no applications of the rules. Penalty kicks and player ejections do not follow only from administration of soccer’s laws in a “correct/incorrect” sense, with accuracy as the paramount value. In the long history and narrative of soccer, the referee’s discretion has always represented justice. Does a violent tackle warrant a penalty kick? Sometimes it does; sometimes it does not. Unlike referees’ decisions in other sports with machine-based officiating, critical judgments in soccer are based on “fairness” rather than only on “the rule,” where “fairness” is equated to a sense of earned outcomes, or “just deserts.” The soccer referee is a dispenser of what might be called “equitable justice” on the field. Enlisting VAR risks tilting this decision-making process toward what might be called “codified justice.”

Is this good for the game, or for the society that depends on it? It’s too soon to say. Soccer, like all institutions, has never been unchanging. Soccer laws, soccer technologies, and soccer values are always at least a little bit in flux, and sometimes much more so. But the soccer example offers not simply another way of understanding challenges of AI and the law. Re and Solow-Niederman have given us a framework based in the law that helps us understand the challenges of automation and algorithms across additional critical domains of social life. Those challenges ask us to consider, again, what we mean by justice—not only in the law but also beyond it.

Cite as: Michael Madison, Oyez! Robot, JOTWELL (January 24, 2020) (reviewing Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019)), https://cyber.jotwell.com/oyez-robot/.

Trust, Decentralization, and the Blockchain

Kevin Werbach, The Blockchain and the New Architecture of Trust (2018).

The distributed ledger technology known as the blockchain has continued to gather interest in legal academia. With a growing number of books and academic papers exploring almost every aspect of the subject, it is always good to have a comprehensive book that not only covers the basics, but also provides new insights. Kevin Werbach’s The Blockchain and the New Architecture of Trust provides an in-depth yet easy-to-comprehend analysis of the most important aspects of blockchain technology. It also manages to convey astute analysis and some needed and sobering skepticism about the subject.

Werbach describes the characteristics of blockchains, providing a thorough and easy-to-understand introduction to key concepts and the reasons why the technology is thought to be a viable solution for various problems. A blockchain is a cryptographic distributed ledger that appends information in a supposedly immutable manner. An open record of all transactions is made public, which then communicates a “shared truth,” that is, a single proof that everyone trusts because it has been independently verified by a majority of the participants in the network. This technology is thought to be a solution for problems such as the lack of reliability in accounting records, and for double spending, in which the same funds are spent in two different transactions. While the first half of the work is likely to be the most useful for those who are not familiar with the underlying technical concepts, the book really makes its best contributions in its later chapters, where the author begins to criticize various aspects of the technology.
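
To make the append-only, hash-linked structure concrete, here is a minimal sketch in Python. It is purely illustrative and not drawn from Werbach’s book: real blockchains layer a consensus mechanism (such as proof of work) and peer-to-peer replication on top of this basic data structure, and the function names here are hypothetical.

    import hashlib
    import json

    def block_hash(block):
        # Deterministically hash a block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, transactions):
        # Each new block commits to the hash of the previous block.
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain):
        # Recompute every link; editing an earlier block breaks all later links.
        return all(
            chain[i]["prev_hash"] == block_hash(chain[i - 1])
            for i in range(1, len(chain))
        )

    ledger = []
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify(ledger))                          # True: the shared record checks out
    ledger[0]["transactions"][0]["amount"] = 500   # tamper with history
    print(verify(ledger))                          # False: later hashes no longer match

The tamper-evidence shown here is what underlies the claim to “immutability”; the harder part, which the sketch omits, is getting many mutually distrustful participants to agree on which chain is authoritative.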

A core topic of the book is that of “a crisis of trust,” and Werbach explains how the blockchain has been repeatedly offered as a possible solution to this problem. Everyday life is built on trust, and we are always engaged in rational calculations based on trust-related questions: Can I trust this person with my car keys? Can I trust my bank? Can a lender trust a borrower? We have built systems of risk assessment, of managing trust, and of redress when trust breaks down. While our financial and legal systems rely on a good measure of trust, it is seen by many as an added expense and an obstacle to a well-functioning economy. Werbach explains how in some circles the concept of trust has become “almost an obscenity.” The need for trust is a vulnerability, a flaw that should be remedied.

Enter the blockchain, a distributed ledger technology that is supposed to solve this perceived crisis of trust by being “trustless”—that is, doing away with the need for trust. The blockchain is proposed as a better solution than past architectures of trust, such as centralized “Leviathans,” third-party arbiters, or peer-to-peer systems. States or other large central authorities historically addressed the problem of trust through enforcement. Alternatively or additionally, intermediaries have acted as arbiters and providers of trust. In peer-to-peer systems, trust is built on an assumption of shared values and norms, and on principles of self-governance.

Werbach explains why the blockchain is characterized as better:

In any transaction, there are three elements that may be trusted: the counterparty, the intermediary, and the dispute resolution mechanism. The blockchain tries to replace all three with software code. People are represented through arbitrary digital keys, which eliminates the contextual factors that humans use to evaluate trustworthiness. (P. 29.)

Werbach identifies a problem, however: for a system that appears to rely so much on the concept of trustlessness, there is still a lot of trust required in intermediaries in the blockchain space. This observation bears out Nick Szabo’s comment that “there is no such thing as a fully trustless institution or technology.” Very few people have the technical know-how to become fully self-reliant in the use of blockchain technology, so most people end up having to trust some form of intermediary. The book covers several disasters, such as the infamous QuadrigaCX fiasco, where users lost millions due to either negligence or fraud. “The blockchain was developed in response to trust failures, but it can also cause failures.” (P. 117.)

Werbach cites other examples where intermediaries have exercised a large amount of control, such as the Decentralized Autonomous Organization (DAO) hack. The DAO was a way to manage transactions on the Ethereum blockchain using smart contracts and a shared pool of funds. A hacker found a way to siphon out funds through a bug in the code. A truly trustless decentralized system would allow this to happen, but the DAO developers decided to fork the code, re-writing history, erasing the bug, and creating a new version of the blockchain where the hack never occurred. This has been seen as evidence that trust is still involved—a trust in blockchain developers and intermediaries.

Werbach also shows that the blockchain has failed in practice to be a fully decentralized system. Satoshi Nakamoto, the pseudonymous inventor of the concept of the blockchain, dreamt of a system where millions of individual miners would come together to verify transactions, making the blockchain a truly decentralized endeavor. But what has happened is more centralized, with mining concentrated in a few massive conglomerates.

Werbach makes some very good criticisms of another use of the blockchain, namely smart contracts. There are various reasons for distrusting smart contracts, but one of the most compelling offered in the book is that despite advances in machine learning, “computers do not have the degree of contextual, domain specific knowledge or subtle understanding required to resolve contractual ambiguity.” (P. 125.)

Finally, Werbach goes into several interesting discussions specific to some platforms, and looks at regulatory responses. I found the regulatory discussion interesting, but perhaps the least useful aspect of the book. The attempts to regulate the blockchain phenomenon move so fast that a few of the examples offered are already outdated, even though the book was published in 2018. As a European reader, I would also have liked a bit more coverage of international developments; while the author cites several cases from abroad, these were not dealt with in depth.

However, these are minor concerns. This is a thorough, informative, and highly readable book that should be the go-to reference for anyone interested in the subject of blockchain and the law.

Cite as: Andres Guadamuz, Trust, Decentralization, and the Blockchain, JOTWELL (December 12, 2019) (reviewing Kevin Werbach, The Blockchain and the New Architecture of Trust (2018)), https://cyber.jotwell.com/trust-decentralization-and-the-blockchain/.

Military Algorithms and the Virtues of Transparency

Ashley S. Deeks, Predicting Enemies, 104 Va. L. Rev. 1529 (2018).

For all the justifiable concern in recent years directed toward the prospect of autonomous weapons, other military uses of automation may be more imminent and more widespread. In Predicting Enemies, Ashley Deeks highlights how the U.S. military may deploy algorithms in armed conflicts to determine who should be detained and for how long, and who may be targeted. Part of the reason Deeks predicts these near-term uses of algorithms is that the military has models: algorithms and machine-learning applications currently used in the domestic criminal justice and policing contexts. The idea of such algorithms being employed as blueprints may cause heartburn. Their use domestically has prompted multiple lines of critique about, for example, biases in data and lack of transparency. Deeks recognizes those concerns and even intensifies them. She argues that concerns about the use of algorithms are exacerbated in the military context because of the “double black box”—“an ‘algorithmic black box’ inside what many in the public conceive of as the ‘operational black box’ of the military” (P. 1537)—that hampers oversight.

Predicting Enemies makes an important contribution by combining the identification of likely military uses of algorithms with trenchant critiques drawn from the same sphere as the algorithmic models themselves. Deeks is persuasive in her arguments about the problems associated with military deployment of algorithms, but she doesn’t rest there. She argues that the U.S. military should learn from the blowback it suffered after trying to maintain secrecy over post-9/11 operations, and instead pursue “strategic transparency” about its use of algorithms. (P. 1587.) Strategic transparency, as she envisions it, is an important and achievable step, though likely still insufficient to remedy all of the concerns with military deployment of algorithms.

Deeks highlights several kinds of algorithms used domestically and explains how they might parallel military applications. Domestic decision-makers use algorithms to assess risks individuals pose in order to determine, for example, whether to grant bail, impose a prison sentence, or allow release on parole. Even more controversially, police departments use algorithms to “identif[y] people who are most likely to be party to a violent incident” in the future (P. 1543, emphasis omitted), as well as to pinpoint geographic locations where crimes are likely to occur.

These functions have military counterparts. During armed conflicts, militaries often detain individuals and have to make periodic assessments about whether to continue to detain them based on whether they continue to pose a threat or are likely to return to the fight. Militaries, like police departments, also seek to allocate their resources efficiently. Algorithms that predict where enemy forces will attack or who is likely to do the attacking, especially in armed conflicts with non-state armed groups, would have obvious utility.

But, Deeks argues, problems with domestic use of algorithms are exacerbated in the military context. As compared with domestic police departments or judicial officials, militaries using algorithms early in a particular conflict are likely to have far less, and less granular, information about the population with which to train their algorithms. And algorithms trained for one conflict may not be transferable to different conflicts in different locations involving different populations, meaning that the same problems with lack of data would recur at the start of each new conflict. There’s also the problem of applying algorithms “cross-culturally” in the military context, rather than “within a single society” as is the case when they are used domestically (P. 1565), and the related possibility of exacerbating biases embedded in the data. With bad or insufficient data come inaccurate algorithmic outcomes.

Deeks also worries about “automation bias”—that military officials will be overly willing to trust algorithmic outcomes and even more susceptible to this risk than judges, who are generally less tech-savvy. (Pp. 1574-75.) At the same time, she also warns that a lack of transparency about how algorithms work could make military officials unwilling to trust algorithms when they should, that is, when the algorithms would actually improve decision-making and compliance with international law principles like distinction and proportionality. (Pp. 1568-71.)

These and other concerns lead Deeks to her prescription for “strategic transparency.” Deeks argues that the military should “fight its institutional instincts” (P. 1576) to hide behind classification and limited oversight from Congress and the public and instead deploy a lesson from the war on terror—that “there are advantages to be gained by publicly confronting the fact that new tools pose difficult challenges and tradeoffs, by giving reasons for their use, and by clarifying how the tools are used, by whom, and pursuant to what legal rules.” (P. 1583.) Specifically, Deeks argues that in pursuing transparency, the military should explain when and how it uses algorithms and machine learning, articulate how such tools comply with its international law obligations, and engage in a public discussion of costs and benefits of using algorithms. (Pp. 1588-89.) She also urges the military to “articulate[] how it will test the quality of its data, avoid training its algorithms on biased data, and train military users to avoid falling prey to undue automation biases.” (P. 1590.)

Deeks previously served as the Assistant Legal Adviser for Political-Military Affairs in the State Department’s Office of the Legal Adviser (where I had the pleasure of working with her), and so she has the experience of an internal advisor combined with the critical eye of an academic commentator. One hopes that the U.S. military—and others around the world—will heed her thoughtful advice about transparency in the use of algorithms. Transparency is not a panacea for problems of data availability, quality, and bias, but it may help with oversight and accountability. And that’s a good first step.

Cite as: Kristen Eichensehr, Military Algorithms and the Virtues of Transparency, JOTWELL (November 20, 2019) (reviewing Ashley S. Deeks, Predicting Enemies, 104 Va. L. Rev. 1529 (2018)), https://cyber.jotwell.com/military-algorithms-and-the-virtues-of-transparency/.

Lessons from Literal Crashes for Code

Bryan H. Choi, Crashworthy Code, 94 Wash. L. Rev. 39 (2019).

Software crashes all the time, and the law does little about it. But as Bryan H. Choi notes in Crashworthy Code, “anticipation has been building that the rules for cyber-physical liability will be different.” (P. 43.) It is one thing for your laptop to eat the latest version of your article, and another for your self-driving lawn mower to run over your foot. The former might not trigger losses of the kind tort law cares about, but the latter seems pretty indistinguishable from physical accidents of yore. Whatever one may think of CDA 230 now, the bargain struck in this country to protect innovation and expression on the internet is by no means the right one for addressing physical harms. Robots may be special, but so are people’s limbs.

In this article, Choi joins the fray of scholars debating what comes next for tort law in the age of embodied software: robots, the internet of things, and self-driving cars. Meticulously researched, legally sharp, and truly interdisciplinary, Crashworthy Code offers a thoughtful way out of the impasse tort law currently faces. While arguing that software is exceptional not in the harms that it causes but in the way that it crashes, Choi refuses to revert to the tropes of libertarianism or protectionism. We can have risk mitigation without killing off innovation, he argues. Tort, it turns out, has done this sort of thing before.

Choi dedicates Part I of the article to the Goldilocksean voices in the current debate. One camp, which Choi labels consumer protectionism, argues that with human drivers out of the loop, companies should pay the cost of accidents caused by autonomous software. Companies are the “least cost avoiders” and the “best risk spreaders.” This argument tends to result in calls for strict liability or no-fault insurance, neither of which Choi believes to be practicable.

Swinging from too hot to too cold, what Choi calls technology protectionism “starts from the opposite premise that it is cyber-physical manufacturers who need safeguarding.” (P. 58.) This camp argues that burdensome liability will prevent valuable innovation. This article is worth reading for the literature review here alone. Choi briskly summarizes numerous calls for immunity from liability, often paired with some version of administrative oversight.

Where Goldilocks found happiness in the third option, Choi’s third path forward is found wanting. What he calls doctrinal conventionalism takes the view that tort law as-is can handle things. Between negligence and strict products liability, this group argues, tort will figure robots out.

This third way, too, initially seems unsatisfactory. The law may be able to handle technological developments, Choi acknowledges, but the interesting question isn’t whether; it’s how. And crashing code, he argues in Part II, is at least somewhat exceptional. Its uniqueness isn’t in the usual lack of physical injuries, or the need for a safe harbor for innovation. It’s the problem of software complexity that makes ordinary tort frameworks ill-suited for governing code. Choi explains that software complexity makes it impossible for programmers to guarantee a crash-free program. This “very basic property of software…[thus] defies conventional judicial methods of assessing reasonableness.” (P. 79.) No matter how many resources one applied to quality assurance, one could “still emerge with errors so basic a jury would be appalled.” (P. 80.) (Another bonus for the curious: Choi’s discussion of top-down attempts to bypass these problems through mandating particular formal languages in high-stakes fields such as national security and aviation.)

The puzzle, then, isn’t that software now produces physical injuries, thus threatening the existing policy balance between protecting innovation and remediating harm. It’s that these newly physical injuries make visible a characteristic of software that makes it particularly hard to regulate ex post, through lawsuits. In other words, “[s]oftware liability is stuck on crash prevention,” when it should be focused instead on making programmers mitigate risk. (P. 87.)

In Part III, Choi turns to a line of cases in which courts found a way to get industry to increase its efforts at prevention and risk mitigation, without crushing innovation or otherwise shutting companies down. In a series of crashworthiness cases from the 1960s, courts found that car manufacturers were responsible for mitigating injuries in a car crash, even if (a) such crashes were statistically inevitable, and (b) the chain of causation was extremely hard to determine. While an automaker might not be responsible for the crash itself, it could be held liable for failing to make crashing safer.

Crashworthiness doctrine, Choi argues, should be extended from its limited use in the context of vehicular accidents to code. In the software context, he argues that “there are analogous opportunities for cyber-physical manufacturers to use safer designs that can mitigate the effects of a software error between the onset and the end of a code crash event.” (P. 101.) Programmers should be required not to prevent crashes entirely, but to use known methods of fault tolerance, which Choi discusses in detail. Courts applying crashworthiness doctrine to failed software thus would inspect the code to determine whether it used reasonable fault tolerance techniques.
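
To give a flavor of what “reasonable fault tolerance” might mean in code, here is a deliberately simple Python sketch of one such technique: containing a crash in the primary control logic and degrading to a known-safe action instead of letting the failure cascade. It is my illustration, not an example drawn from Choi’s article, and the function names are hypothetical.

    import logging

    def primary_controller(sensor_reading):
        # Compute an actuation command; may crash on unexpected input.
        return 1.0 / sensor_reading   # raises ZeroDivisionError on a zero reading

    def safe_fallback():
        # A conservative, pre-verified command, e.g., "coast to a stop."
        return 0.0

    def crashworthy_command(sensor_reading):
        # Wrap the primary logic so a crash is mitigated rather than propagated.
        try:
            return primary_controller(sensor_reading)
        except Exception:
            logging.exception("primary controller failed; using safe fallback")
            return safe_fallback()

    print(crashworthy_command(2.0))   # 0.5: normal operation
    print(crashworthy_command(0.0))   # 0.0: the crash is contained, not prevented

Under a crashworthiness regime, the question for a court would not be whether the primary logic could ever fail, but whether something like the wrapper above, along with other recognized mitigation techniques, was reasonably in place.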

Crashworthy Code could be three articles: one categorizing policy tropes; one identifying what makes software legally exceptional or disruptive; and one discussing tort law’s ability to handle risk mitigation. But what is most delightful about it is Choi’s thoroughness, his refusal to simplify or overreach. There is something truly delicious about finding a solution to a “new” problem in a strain of case law from the 1960s that most people probably don’t know existed. This is what common law is good at: analogies. And this is what the best technology lawyers among us are best at: finding those analogies, and explaining in detail why they fit technology facts.

Cite as: Margot Kaminski, Lessons from Literal Crashes for Code, JOTWELL (October 3, 2019) (reviewing Bryan H. Choi, Crashworthy Code, 94 Wash. L. Rev. 39 (2019)), https://cyber.jotwell.com/when-computer-code-crashes-get-corporeal/.

Remembering Ian Kerr

Ian Kerr 1965–2019

Ian Kerr, who passed away far too young in 2019, was an incisive scholar and a much treasured colleague. The wit that sparkled in his papers was matched only by his warmth toward his friends, of whom there were many. He and his many co-authors wrote with deep insight and an equally deep humanity about copyright, artificial intelligence, privacy, torts, and much much more.

Ian was also a valued contributor to the Jotwell Technology Law section. His reviews here display the same playful generosity that characterized everything else he did. In tribute to his memory, we are publishing a memorial symposium in his honor. This symposium consists of short reviews of a selection of Ian’s scholarship, written by a range of scholars who are grateful for his many contributions, both on and off the page.


We the North

Ellen P. Goodman & Julia Powles, Urbanism Under Google: Lessons from Sidewalk Toronto, __ Fordham L. Rev. __ (forthcoming 2019), available at SSRN.

National Geographic’s April 2019 issue focused on ‘cities’, presenting photographs, highlighting challenges, and wondering about the future. Its editor highlighted that two-thirds of the world’s population is expected to live in a city by 2050, and recent history is replete with unfinished or abandoned blueprints for what this future might look like. Yet in the field of technology law and urban planning, the biggest story of the last two years may well be that of Toronto, where a proposal to rethink urban life through data, technology, and redevelopment has prompted important reflections on governance, privacy, and control.

In Urbanism Under Google: Lessons from Sidewalk Toronto, forthcoming in the Fordham Law Review, Ellen P. Goodman and Julia Powles set out to tell the story of the ‘Sidewalk Toronto’ project, from its early announcements (full of promise but lacking in detail) to the elaborate (yet no less controversial) legal and planning documents now publicly available. Goodman and Powles contribute to the public and academic scrutiny of this specific project, but their critique of process and transparency will obviously be of value in many other cities, especially as ‘smart city’ initiatives continue to proliferate.

Sidewalk Labs, associated with Google through its status as a subsidiary of Alphabet (Google’s post-restructuring parent company), is working with Waterfront Toronto (the tripartite agency consisting of federal, provincial and municipal government) to redevelop a soggy piece of waterfront land, ‘Quayside’. Or is it? One of Goodman and Powles’ main observations, splendidly delivered as an a-ha moment halfway through the piece, is how the relationship between, on one hand, the Quayside proposal and, on the other, the wider idea of redeveloping waterfront lands has come to public attention. And indeed, the complexity of the project and its associated documents must have been a key driver for Goodman and Powles, as much of the article’s contribution comes from its careful and close reading of the extensive documentation now published by Sidewalk and by Waterfront Toronto.

Sidewalk Toronto has prompted many reactions. Some are enthusiastic about the promise of a great big beautiful tomorrow. Others see a dystopian surveillance (city-)state where every move is not just tracked but monetised. In Goodman and Powles’ account, the focus is on two issues: (1) the corporate and democratic issues arising out of the relationship between Waterfront Toronto and Sidewalk Labs (including the role of the different levels of government and scrutiny operating in Canada) (part 2), and (2) the specific ‘platform’ vision set out by Sidewalk in its plans (part 3).

The corporate and democratic issues highlighted primarily relate to the role of Waterfront Toronto, which is itself a distinctive agency within the Canadian administrative state. The authors argue that insufficient detail about the project was provided at early stages, and that the processes for public engagement did not allow for sufficient participation by the public (especially as a result of information asymmetries). Various incidents from this period, including leaks, resignations, and media commentary, are set out.

Considering the ‘platform’ vision, a set of concerns trouble the authors. Drawing upon Sidewalk’s own textual and visual representations of its intentions, they identify efficiency and datafication as the key outcomes of a platform-led approach, and situate these developments in a wider ‘smart city’ literature. Testing Ben Green’s work on asymmetries of power, and applying broader concepts such as legal scholar Brett Frischmann’s account of infrastructure and geographer Rob Kitchin’s discussion of the embedding of values, they find that Sidewalk’s approach is radical (or at least an exaggerated version of what is already seen in some smart city initiatives) and potentially a concentration of great power and influence.

One of the most fascinating concepts to emerge from Sidewalk’s plans, and the vibrant debate that these plans have provoked in Canada and elsewhere, is ‘urban data’. Sidewalk defines this broadly as data collected in public spaces and certain other spaces, normally de-identified. Goodman and Powles respond to this emergent (and non-statutory) category in both parts 2 and 3. They are not convinced by de-identification, self-certification, or the novelty of what Sidewalk proposes. Nor are they reassured by the emergent (and fashionable) idea of ‘data trusts’ as a way of addressing legal and popular concerns about the impact of initiatives like this on privacy and intellectual property, and questions of ownership and control associated with both issues.

On their return to the ‘urban data’ issue in the later pages, and bringing together the democratic and platform issues, the authors raise a set of broader questions about control over shared spaces (adding to Lilian Edwards’ influential exploration of public/private divides in smart cities). Neatly, then, Lawrence Lessig’s rhetorical and conceptual use of the history of road building and city planning as paving the way for an understanding of (information) architecture as law comes full circle, as Goodman and Powles wonder how control over a digital layer calls into question the norms of planning and land use.

The authors end on a somewhat resigned note, highlighting the infamy and ‘inevitability’ of the project. Today, though, there is still some doubt as to whether Sidewalk’s plans will come to fruition in their current form. As the Toronto-based scholar Natasha Tusikov, well known to scholars of technology law through her pioneering investigation of industry-led governance of Internet ‘chokepoints’, has just written, the June 2019 release of Sidewalk Toronto’s master plan has led into a new and much-watched round of consultations. Sidewalk’s next steps may therefore disclose the extent to which the critiques set out here by Goodman and Powles can, if at all, be resolved within this project—although their analysis can also inform the work that other cities and development agencies might be planning.

Cite as: Daithí Mac Síthigh, We the North, JOTWELL (September 3, 2019) (reviewing Ellen P. Goodman & Julia Powles, Urbanism Under Google: Lessons from Sidewalk Toronto, __ Fordham L. Rev. __ (forthcoming 2019), available at SSRN), https://cyber.jotwell.com/we-the-north/.

Ian Kerr 1965–2019


Our community, and the Jotwell Tech section, lost a giant this week. Ian Kerr, who died on August 27 of complications from cancer, had been a contributing editor to this section since its founding in 2009, and a luminary of the field of law and technology for much longer. As Canada Research Chair in Law, Ethics, and Technology at the University of Ottawa, Kerr was a leader in taking ethics and interdisciplinarity seriously. His expansive body of work often addressed questions years before others saw them.

Kerr embodied the Jotwell mission statement: to “identify, celebrate, and discuss the best.” He was unafraid to be laudatory; he was positive without apology. His writing could be bitingly funny, and he loved to let the air out of overinflated ideas, but he was unfailingly, irrepressibly generous toward his fellow scholars and his fellow humans. He won teaching award after teaching award. He welcomed, mentored, and was a constant source of encouragement and critical feedback for everyone—senior or junior, professor or student.

In Kerr’s words (about his alums and research team), we are “incredibly fortunate to find [ourselves] surrounded by such excellent people.” He himself was one of these excellent people, not least because he brought out what was excellent in everyone around him. We will miss his voice, his leadership, his brilliance, his friendship, his kindness and his irreverent grace.

 

The Constant Trash Collector: Platforms and the Paradoxes of Content Moderation

Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (2018).

Tarleton Gillespie’s important book Custodians of the Internet unpacks the simultaneous impossibility and necessity of content moderation, highlighting nuance rather than answering questions. Within big companies, content moderation is treated like custodial work, like sweeping the floors—and recent revelations reinforce that the abjectness of this work seems to contaminate those who do it. The rules are made by people in positions of relative power, while their enforcement is traumatic, poorly-paid, outsourced scutwork. But for major platforms, taking out the trash—making sure the site isn’t a cesspool—is in fact their central function.

Gillespie urges us to pay attention to the differences between a content policy—which is a document that both tries to shape the reactions of various stakeholders and is shaped by them—and actual content moderation; both are vitally important. (Facebook’s newly announced “Supreme Court” is on the former side: it will make important decisions, but make them at a level of generality that will leave much day-to-day work to be done by the custodial staff.) Every provision of a content policy represents a horror story and also something that will definitely be repeated. Gillespie is heartbreakingly clear on the banality of evil at scale: “a moderator looking at hundreds of pieces of Facebook content every hour needs more specific instructions on what exactly counts as ‘sexual violence,’ so these documents provide examples like ‘To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat’—which Facebook gives a green checkmark, meaning posts like that should stay.” (P. 112.)

Given the scale of decision-making, what else could be done? The Apple App Store preapproves apps, but constantly struggles because of the need for speed and the inevitable cries of censorship when Apple decides an app is too political. And Apple can get away with preapproval only because the flood of apps is orders of magnitude less than the flood of Instagram photos or similar content, and because app developers are relatively more likely to share ideas about reasonable expectations of a platform than are web users in general.

Most other platforms have relied on community flagging in the first instance. Relying on the users to report the need for clean-up is practically convenient, but also grants “legitimacy and cover” by signaling that the platform is listening to and trying to help users who are being harmed, while still leaving ultimate control in the platform’s hands. Mechanisms to report content can also be used by harassers, such as the person who reported hundreds of drag queens for violating Facebook’s real-name policy. YouTube sees flags surge when a new country gets access to the platform; Gillespie’s informant attributed this to users’ lack of knowledge of the YouTube community’s values. Gillespie sees in this reaction an “astounding assumption: a different logic would be to treat this new wave of flags as some kind of expression of values held by this new user population.” (P. 130.) Flagging will remain contested not just because values vary, but also because flagging relies on a devoted but small group of users to voluntarily police bad behavior, in conflict with another small group that deliberately seeks to cause mayhem.

There’s another approach, self-labeling, which the nonprofit Archive of Our Own (with which I volunteer) tries: ask users to evaluate their own content as they post it, and provide filters so users can avoid what they don’t want. This distributes the work more equitably. But tagging is time-consuming and can deter use, so commercial platforms make self-tagging limited, either relying on defaults or on rating entire users’ profiles, as on Tumblr. But self-tagging raises problems with consistency, since Tumblr users don’t always agree on what’s “safe,” not to mention what happens when Tumblr itself decides that “gay” is unsafe by definition. I’m obviously invested in the AO3; precisely because his analysis is so incisive, I wish Gillespie had spent a little time on what noncommercial platforms decide to do differently here and why.

Gillespie also has a great discussion of automated filtering. It’s relatively easy to compare images to hashes that screen out known child pornography images. But that relative ease is a product of law, technology, and policy that is hard to replicate for other issues. The database that allows screening is a high priority for platforms because of the blanket illegality of the content, and hashing known images is a far simpler task than identifying new images or their meaning in context. Microsoft developed the software, but recognized it as good PR to donate it for public use rather than keeping it as a trade secret or licensing it for profit, which isn’t true of other filtering algorithms like YouTube’s Content ID. Machine learning is trying to take on more complex tasks, but it’s just not very good yet. (After the book came out, we learned that Amazon’s machine learning tool for assessing resumes learned to discriminate against women—a machine learning algorithm aimed at harassment or hate speech will likewise replicate the biases of the arbiters who train the computer, and who tell it that “choke a bitch” is fine.) Also, we don’t really know what constitutes a “good” detection rate. Is 95% success identifying nude images good? What about a false positive rate of 1%? Are the false positives/negatives biased, the way facial recognition software tends to perform worse with darker-skinned faces?
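
The gap between the “easy” and “hard” versions of the filtering task is worth making concrete. The Python sketch below shows the simple version: flagging an upload whose hash exactly matches a database of previously identified images. It is my simplification, not a description of how any particular platform works; production systems such as the Microsoft tool mentioned above use perceptual hashes that tolerate resizing and re-encoding, and nothing in this sketch helps with the genuinely hard problem of classifying never-before-seen content in context.

    import hashlib

    def file_hash(data):
        # Exact cryptographic hash of the uploaded bytes.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical stand-in for a shared database of known prohibited images.
    known_image = b"bytes of a previously identified image"
    known_bad_hashes = {file_hash(known_image)}

    def screen_upload(data):
        # Block exact matches; everything else needs classifiers or human review.
        return "block" if file_hash(data) in known_bad_hashes else "allow"

    print(screen_upload(known_image))        # "block": exact match to a known image
    print(screen_upload(b"a novel upload"))  # "allow": unknown content passes through

A one-pixel change defeats exact matching entirely, which is why the perceptual-hashing and machine-learning layers, with all the accuracy and bias questions Gillespie raises, carry most of the real weight.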

Nor are the choices limited to removal; algorithmic demotion or screening allows people who know that the content exists to find it, while making it harder for others to stumble across it. But this makes platforms’ promise of sharing much more complicated: to whom are you visible, and when? These decisions can be hard for affected users to discover, much less understand, and they’re particularly important for marginalized groups. One good example is Tumblr’s shadow-banning of search terms like “porn” or “gay” on its app, with political consequences; similarly, it turns out that TripAdvisor reviewers may discover that they can’t use “feminism” or “misogyny” in their reviews (highlighting that algorithmic demotion always interacts with other policy choices). Meanwhile, YouTube and Twitter curate their trending pages to avoid sexual or otherwise undesired content, so it’s not really what’s trending but only an undeclared subset, which curation nonetheless never quite manages to avoid controversy or harm, as platforms rediscover every few months. Amazon does similar things with best-sellers to make sure that shapeshifter porn doesn’t get recommended to people who haven’t already expressed an interest in it. Users can manipulate this differential visibility, too, as we learned with targeted Facebook ads from Russians and others in the 2016 US presidential campaign.

Again, it might be worthwhile to consider the potential alternatives: the noncommercial AO3, like Wikipedia, has done very little to shape users’ searches, unlike YouTube’s radicalization machine.

In concluding, Gillespie judges content moderation to be so difficult that “all things considered, it’s amazing that it works at all, and as well as it does.” (P. 197.) Still, handing it over to private, for-profit companies, with very few accountability or transparency mechanisms, isn’t a great idea. At a minimum, he argues, platforms should have to be able to explain “why a post is there and how I should assess.” (P. 199.)

Gillespie suggests that Section 230 of the Communications Decency Act may warrant modification. He argues that it should still provide immunity for pure conduits and good faith moderation. “But the moment that a platform begins to select some content over others, based not on a judgment of relevance to a search query but in the spirit of enhancing the value of the experience and keeping users on the site,” (P. 43) it should be more accountable for the content of others. (I doubt this distinction would work at all.) Or, the safe harbor could be conditioned on having obligations such as meeting minimum standards for moderation, perhaps some degree of transparency or specific structures for due process/appeal of decisions. It is perhaps unsurprising that Facebook’s splashiest endeavor in this area, its “Supreme Court,” won’t provide these kinds of protections. It’s not in Facebook’s interest to limit its own flexibility in that way even if it is in Facebook’s interest to publicly perform adherence to certain general principles. This performance may well reflect sincere substantive commitments, but that also has advantages in fending off further regulation and in making bigness look good, because only big platforms like Facebook can sustain a “Supreme Court.” Although I have serious concerns about implementation of any procedural obligations (related to my belief that antitrust law would be a better source of regulation than blanket rules whose expense will ensure that no competitors to Facebook can arise), for purposes of this brief review it is probably more useful to note that the procedural turn is Gillespie’s own version of neutrality on the content of content policies. Authoritarian constitutionalism—where the sovereign adheres to principles that it announces but does not concede to its subjects the right to choose those principles—seems to be an easier ask than democratic constitutionalism for platforms.

Gillespie also suggests structural changes that wouldn’t directly change code or terms of service. He argues for greater diversity in the ranks of engineers, managers, and entrepreneurs, who currently tend to be from groups that are already winners and don’t see the downsides of their libertarian designs. This would be a kind of virtual representation that wouldn’t involve actual voting, whether for representatives or for policies, but would likely improve platform policymakers’ ability to notice certain kinds of harms and needs.

Innovation policy has focused on presenting and organizing information and content creation by users, not on innovation in governance and design or implementation of shared values; it could do otherwise. Gillespie’s most provocative question might be: What if we built platforms that tried to elicit civic preferences instead of consumer preferences? If only we knew how.

Cite as: Rebecca Tushnet, The Constant Trash Collector: Platforms and the Paradoxes of Content Moderation, JOTWELL (July 25, 2019) (reviewing Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (2018)), https://cyber.jotwell.com/the-constant-trash-collector-platforms-and-the-paradoxes-of-content-moderation/.