The Journal of Things We Like (Lots)

Starting with Consent

James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN.

The Computer Fraud and Abuse Act (“CFAA”), enacted in 1986, has long been a source of consternation for jurists and legal scholars alike. A statute marred by long-standing circuit splits over basic terminology and definitions, the CFAA has strained under the weight of technological evolution. Despite the thousands of law review pages devoted to attempts to theoretically resuscitate this necessary but flawed statute, the CFAA increasingly appears to be broken. Something more than a minor Congressional correction is required.

In particular, the central term of the statute—authorization—is not statutorily defined. As the CFAA has morphed through amendments to encompass not only criminal but also civil conduct, the meaning of “authorized access” has become progressively more slippery and difficult to anticipate. Legal scholarship has long voiced concerns over the CFAA, including whether certain provisions are void for vagueness,1 create opportunities for abuse of prosecutorial discretion,2 and give rise to unintended negative impacts on employee mobility and innovation.3

Enter James Grimmelmann’s Consenting to Computer Use. In this work, Grimmelmann offers us a clean slate as an important and useful starting point for the next generation of the CFAA conversation. He returns us to a first-principles analysis with respect to computer intrusion, focusing on the fundamental question of consent.

Grimmelmann urges us to take a step back and hit reset on the scholarly CFAA conversation. In lieu of tortured attempts to find Congressional meaning for “authorization” in legislative history, or misguidedly trying to shoe-horn computer intrusion into last-generation (criminal or civil) trespass regimes, Grimmelmann leads us through an intuitively resonant inquiry around consent. As Grimmelmann succinctly puts it, “[q]uestions of the form, ‘Does the CFAA prohibit or allow X?’ are posed at the wrong level of abstraction. The issue is not whether X is allowed, but whether X is allowed by the computer’s owner.” (P. 1501.)

A question of implicit or explicit consent by the computer’s owner, Grimmelmann explains, is present in every computer intrusion case. He reminds us of the importance of the context of the intrusion. Herein lies the primary insight of the paper: the CFAA’s key term requires construction rather than interpretation. In other words, Grimmelmann acknowledges and embraces the suboptimal statutory reality that most other scholars have danced around: the CFAA itself is of little assistance in crafting a workable legal analysis for defining computer intrusion and unauthorized access. The starting point for understanding the legal concept of CFAA “authorization” (or lack thereof), Grimmelmann argues, will be found in engaging with the traditional legal concept of consent. He explains that once we rely on consent as the baseline of future CFAA inquiry, courts can then craft rules in light of the overall goals of the CFAA and the facts of specific cases.

The CFAA context is challenging, and Grimmelmann acknowledges key differences between technological contexts and more traditional ones. Grimmelmann explains that software is automated and plastic—meaning that consent to access is necessarily prospective, and that software can function in unforeseeable ways. These features (bugs?) have added to the complexity of the computer intrusion inquiry. However, when a legal paradigm is constructed around consent, Grimmelmann argues, these elements of automation and plasticity become less dispositive. Providing the example of a compromised vending machine, he explains that it makes no difference whether an intruder tricked the machine by exploiting a hole in the machine’s logic or whether the intruder punched a hole in its side. The issue is the compromise and the lack of consent.

Drawing on theoretical work by Peter Westen, Grimmelmann distinguishes between factual consent and legal consent. As Grimmelmann explains the distinction, “factual consent is a function of both code and words; of how a computer is programmed and of its owner’s expressions, such as oral instructions, terms of service, and employee handbooks.” (P. 1511.) Meanwhile, legal consent is based on factual consent, but can depart from it if a jurisdiction believes “that factual consent is not sufficient to constitute legal consent” or that it is not necessary based on the totality of the circumstances, including whether implicit consent may have been granted. (P. 1512.) Grimmelmann cautions that different types of CFAA cases will necessitate a distinction between factual and legal consent. In other words, “without authorization” for purposes of the CFAA can refer to multiple possible types of conduct because legally sufficient consent has always been constructed by courts across various areas of law and various fact patterns.

With this excellent article, Grimmelmann has set the stage for a new line of CFAA scholarship, one that is better-connected to traditional legal first principles. As technological evolution continues to strain the overall framework of the CFAA, this work opens the door to a more aggressive re-evaluation of the statute in technological context and offers us a possible way forward.


Editor’s Note: James Grimmelmann took no part in the selection or editing of this review.

  1. Orin S. Kerr, Vagueness Challenges to the Computer Fraud and Abuse Act, 94 Minn. L. Rev. 1561 (2010).
  2. Note, The Vagaries of Vagueness: Rethinking the CFAA as a Problem of Private Nondelegation, 127 Harv. L. Rev. 751, 772 (2013) (“To whatever extent prosecutorial discretion might provide some redeeming amount of government participation in the criminal context, such participation is absent in civil cases between private parties.”).
  3. Andrea M. Matwyshyn, The Law of the Zebra, 28 Berkeley Tech. L.J. 155 (2013).
Cite as: Andrea Matwyshyn, Starting with Consent, JOTWELL (May 19, 2017) (reviewing James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN), https://cyber.jotwell.com/starting-with-consent/.

Make America Troll Again

There is a theory that Donald Trump does not exist, and that the fictional character of “Donald Trump” was invented by Internet trolls in 2010 to make fun of American politics. At first “Trump” himself was the joke: a grotesque egomaniac with orange skin, a debilitating fear of stairs, and a tenuous grasp on reality. He was a rage face in human form. But then his creators realized that there was something even funnier than “Trump’s” vein-popping, bile-specked tirades against bad hombres and nasty women: the panicked and outraged denunciations he inspired from self-serious defenders of the status quo. “Trump’s” election was the greatest triumph of trolling in human history. It has reduced politics, news, and culture to a non-stop, deplorably epic reaction video.

There is no entry for “Donald Trump” in the index of Whitney Phillips’s 2015 book, This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. But this playful, perceptive, and unsettling monograph is an outstanding guidebook to the post-Trump hellscape online trolling has made for us. Or perhaps I should say to the hellscape we have made for ourselves, because Phillips’s thesis is that trolling is inherently bound up with the audiences and antagonists who can’t stop feeding the trolls. Much like Trump, trolls “are born of and fueled by the mainstream world.” (Pp. 168-69.)

This is Why We Can’t Have Nice Things is first and foremost an act of ethnography. Phillips embedded herself in online trolling communities, interviewing participants and following them as their targets and methods evolved over the years. The book strikes an especially good balance: close enough to have real empathy for its subjects’ motivations and worldview, but not so close as to lose critical perspective. It also displays an exceptionally good sense of context: the reporting is grounded in specific trolling communities, but Phillips is careful about situating those communities within large cultural trends, online and off.

There are many kinds of trolls: patent trolls who file suits without warning, commentator trolls who make provocative arguments with a straight face. Phillips focuses on what she calls “subcultural trolls,” who self-identify as part of a community of trolls, set apart from the mainstream, engaged in the anonymous (or pseudonymous) exploitation of others for the lulz. Think /b/ on 4chan, think Anonymous, think AutoAdmit, think alt-right.

Phillips defines “lulz” (a corruption of “LOL” with a sharper edge) as “amusement at other people’s distress.” (P. 27.) A classic example is “RIP trolling”: going to social media memorial pages and leaving messages to shock, confuse, and anger grieving families. Phillips argues that lulz are characterized by fetishism, generativity, and magnetism. “Fetishism” is used in a quasi-Marxist sense of dissociation: RIP trolling, for example, involves an act of emotional detachment that cuts away the actual human tragedy and focuses on extracting humor from arbitrary details, like a victim’s lost iPod. “Generativity” refers to the same kind of playful remixing, repurposing, and world-building that online fanfic communities engage in. And “magnetism” captures lulz’ memetic qualities: they draw attention in and allow a trolling community to cohere around iterated themes and phrases.

The heart of the book (Part II), with examples drawn roughly from 2008 to 2011, is a sustained argument against being too quick to treat trolls as the Other. Trolls take expert advantage of mainstream media attention. Their tactics are often straight out of the corporate PR playbook and its even more unsavory cousins, and their cultural postures are funhouse-mirror reflections of attitudes that are prevalent in mainstream culture. (Breitbart, in other words, is a professionalized political trolling operation—or perhaps it would be more accurate to say that it is a news organization genetically enhanced with troll DNA.) “[T]rolls and sensationalist corporate media outlets are in fact locked in a cybernetic feedback loop predicated on spectacle,” Phillips writes. (P. 52.)

Trolls thrive on mainstream media attention in two related ways. One is the classic hoax, updated for the Internet age. Some trolls are masters at feeding the mainstream media false stories (fake news!). Multiple local TV stations fell for troll-supplied stories about a supposed crisis of teenagers huffing jenkem (a fermented mixture of feces and urine) sweeping the United States. The other is that trolls are skilled at turning attention into a game only they can win. Resistance is futile; one cannot argue with a sea lion or reason with the Joker. In this, Phillips argues, trolls channel Schopenhauer. The point is to win the argument by any means necessary, right or wrong. (If the technique sounds familiar, it may be because you’ve seen it coming from the talking heads on Fox News or from behind the podium in the White House Press Briefing Room.)

Aspects of trolling are rooted in widely shared mainstream attitudes. It draws heavily on a muscular strain of free speech libertarianism that shields even the most offensive speech. If you don’t like what I’m saying, it’s your own damn fault for listening, or for being bothered by it. If you don’t want your feelings to be hurt, don’t have feelings; if you don’t like death threats, just kill yourself. Phillips does a nice job tracing trolling’s complicated relationship with race, gender, and sexuality: the same trolls—the same trolling campaign—can enjoy lulz at the expense of vulnerable minorities, privileged white middle-class comfort, conservative intolerance, and liberal pieties. Making racist jokes is both something that many millions of Americans routinely indulge in and something that makes many millions of Americans (not usually the same ones) really angry.

Trolling eats everything, including especially itself, and reduces it all to a pulsing blob of incoherent imagery, held together only by the pleasure of a laugh at the expense of someone who can’t take the joke. Indeed, there is no other joke; trolling is bullying, or dominance politics from which everything but the lulz has been stripped away. Phillips calls it “pure privilege,” and explains that trolls “refuse to treat others as they wish to be treated. Instead, they do what they want, when they want, to whomever they want, with almost perfect impunity.” (P. 26.)

But, to repeat, trolls “aren’t pulling their materials, chosen targets, or impulses from the ether. They are born of and fueled by the mainstream world—its behavioral mores, its corporate institutions, its political structures and leaders—however much the mainstream might rankle at the suggestion.” (Pp. 168-69.)

We have met the troll and it me.

Cite as: James Grimmelmann, Make America Troll Again, JOTWELL (April 21, 2017) (reviewing Whitney Phillips, This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture (2016)), https://cyber.jotwell.com/make-america-troll-again/.

Back to the Essentials

Michael Buckland, Information and Society (The MIT Press Essential Knowledge Series, 2017).

Judging from its title, Professor Michael Buckland’s book seems to be yet another introduction to the relationship between information and society. Upon reading it, however, you encounter a concise introduction that is well organized and written simply but not simplistically, enriched by historical references to what was once called library science and is now more often referred to as (non-mathematical) information science.

As such, it fits well into the MIT Press series that has brought us, among others, John Palfrey’s Intellectual Property Strategy and Samuel Greengard’s The Internet of Things.

Buckland guides us through the various dimensions of information: its physical characteristics, formal elements, meaning, use, the infrastructure necessary for its use, and, most of all, its cultural dependencies. He uses the passport as an instructive example and introduces the term document to make the various informational perspectives more tangible. Further chapters deal with organization, naming, description, and retrieval techniques for documents, and with ways of evaluating them.

All this brought me back to my own beginnings when, at our research institute in the late 1970s, we were building a metadata system for mainly European publications in the budding discipline of what was then called “Computers and Law.” I still think there is no better exercise for entering a new field of knowledge than developing and systematizing its descriptors. But it is not nostalgia that makes me introduce Buckland’s book here as a thing “we like (lots).”

Buckland’s tour through the essentials of information handling—thanks also to its clear and mind-refreshing language—opens a new perspective on cyberlaw. The book invites us to step back from ever-changing technological characteristics, regulatory reactions, and accumulating caselaw and to take a fresh look at what all this is about: at how our societies create, handle, organize, share, and restrict information, and at how all this should be done in light of our constitutional value systems. In short, it invites us to look at information law properly and then, from there, to discuss and evaluate the implications of technological change.

Buckland’s remarks on “The Past and the Future” are a good example of this insight. Among other observations, he states (P. 173): “… there is a shift from individuals deriving benefit from the use of documents to documentary regimes seeking to influence, control, and benefit from individuals.” What he is pointing to here, in highly unobtrusive language, is one of the core issues of cyberlaw—the power shifts in information handling. The book is rich with such windows onto the fundamentals of cyberlaw, such as its frequent references to the important role of trust systems in communication.

And, last but not least, it should be added, as others have noted about this series before (for example, Nasrine Olson’s book review at 18 New Media & Society 680 (2016)): the books of this series are a nice handy size, feel good to the touch, and have typography gentle to the eyes. Such things also count when we like things—even more now that we look at screens rather more often than at paper pages. But I am getting nostalgic again …

Cite as: Herbert Burkert, Back to the Essentials, JOTWELL (March 24, 2017) (reviewing Michael Buckland, Information and Society (The MIT Press Essential Knowledge Series, 2017)), https://cyber.jotwell.com/back-to-the-essentials/.

Could There Be Free Speech for Electronic Sheep?

Toni M. Massaro, Helen L. Norton & Margot E. Kaminski, Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment, Minn. L. Rev. (forthcoming 2017), available at SSRN.

The goal of “Strong Artificial Intelligence” (hereinafter “strong AI”) is to develop artificial intelligence that can imbue a machine with intellectual capabilities that are functionally equivalent to those possessed by humans. As machines such as robots become more like humans, the possibility that laws intended to mediate the behaviors of humans will be applied to machines grows.

In this article, the three authors assert that the First Amendment may protect speech by strong AI. It is a claim, the authors state in their abstract, “that discussing AI speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.” And it is premised on an understanding of a First Amendment that “increasingly focuses not on protecting speakers as speakers but instead on providing value to listeners and constraining the government.”

The first substantive section of the article considers justifications for free speech rights for AI speakers, both positive and negative. Positive justifications embrace the potential usefulness of AI speech to human listeners. According to the authors, AI speech can contribute to human meaning-making and construction of selfhood, and can produce the sorts of ideas and information that can lead to human enlightenment. Negative justifications for free speech rights for AI speakers reflect views which deeply distrust governmental regulation of speech. The Supreme Court has broadened its views of free speech protection in part based on its doubts about the government’s ability to competently balance social costs and benefits pertaining to speech, especially when driven by censorial motives. The authors conclude that whether it is providing benefits to humans or remaining free from government constraints, AI speech can reasonably be treated like human speech under most existing First Amendment principles and practices, because the humanness of the speaker is neither a stated nor an implied criterion for speech protection. The only exceptions are theories of the First Amendment that are explicitly predicated on the value free speech has for humans.

The second section of the article explains in more detail that First Amendment law and doctrine are largely inattentive to the humanness of speakers. It contains the observation that corporations famously receive speech protection, rebutting any presumption that innate or prima facie humanness matters to First Amendment rights, even though human autonomy and dignity are values free speech is intended to protect. Humans may need to be part of the equation, but having them as background beneficiaries may be enough for the First Amendment to attach. The authors further argue that strong AI may in the future be credited with sufficient indicia of personhood to warrant inclusion even in speaker-focused speech protections.

Next the authors discuss whether possessing human emotions is or should be a prerequisite for a speaker to claim First Amendment protection. Not surprisingly, they conclude that AI is growing increasingly affective, while free speech laws ignore emotions, protecting cruel, nasty, racist, sexist and homophobic speech regardless of the emotional damage it might inflict. They repeat the point about corporations having cognizable speech rights, and remind readers that the two key concerns of contemporary free speech jurisprudence are whether the speech potentially has utility, and whether the speech is something the government has no right to silence. If the answer to either question is yes, the speech is protected.

The authors then contemplate whether the speech of other nonhuman speakers, such as animals, could be ascribed First Amendment protection once the slippery slope of AI speech protections is sufficiently iced. They conclude that the answer is no: unlike AI, animals are not intended to serve human informational needs the way computers are. This section of the article gave me a flashback to my law school Evidence class, in which I learned that animals cannot lawfully be declarants, nor can their speech constitute hearsay. I’ve since seen and read many legal dramas that flout this well-established legal principle. I suspect this is because audiences like it when animals testify in court enough to forgo accuracy. Animals seem inherently honest. AI beings like robots probably evoke more mixed reactions because of the range of ways they are depicted in popular culture. Commander Data from Star Trek: The Next Generation always seemed trustworthy, but HAL 9000 from 2001: A Space Odyssey will kill you.

The authors then discuss doctrinal and practical objections to First Amendment protection of AI speech. When evaluating and ruling on defamation claims, for instance, courts might find a way around the fact that AI speakers cannot be said to have culpable mental states. Judges could, for example, treat AI speakers as dependent legal persons or find another way to facilitate litigation in which an AI speaker is the plaintiff or defendant. Should an AI speaker be found liable, it could be unplugged.

The fourth section of the article looks at what the limits of AI speech protection might be. Free speech protection is already quite expansive, say the authors, but there might be a way to formulate limiting principles, including outright regulation, that apply only to the unique challenges posed by AI speech. This claim puzzled me a little, because it seems to pull in the direction of content-based distinctions. The offered analogies to the regulation of commercial speech, and to professionals’ speech to patients and clients, are only partly reassuring. Regulation of commercial speech is a thorny, confusing doctrinal morass, and the authors do not explain why or how courts would do better with AI speech.

Next, the authors note that what AI produces is likely to be characterized as expressive conduct (“or something similar”) rather than pure speech. This raises definitional difficulties, not unique to AI, in separating speech-related motives or interests from activities that can be permissibly regulated.

Finally, the authors conclude that legal regimes have always managed to handle emerging technologies, and we should expect this to continue with respect to AI speech. There may be a lot of complicated line drawing, but that’s the way it goes in First Amendment jurisprudence.

I enjoyed reading this engaging piece of scholarship very much. It is accessibly written, and the authors’ willingness to generalize about First Amendment law and policy is truly refreshing. Its central claim about the lack of importance of real human beings and their emotions to most free speech theory rings true and has relevance well beyond the strong AI context. The piece confirmed my own beliefs about the current state of free speech, and made me viscerally miss the late C. Edwin Baker, who spent so much time passionately arguing that the central purpose of the First Amendment is the promotion of *human* liberty. He’d have written a far feistier review essay for sure, challenging the authors to be activists who instantiate human liberty interests within the center of the First Amendment. But he would have appreciated the creativity of the article just as I did.

Editor’s Note: Margot Kaminski took no part in the editing of this review.

Cite as: Ann Bartow, Could There Be Free Speech for Electronic Sheep?, JOTWELL (February 23, 2017) (reviewing Toni M. Massaro, Helen L. Norton & Margot E. Kaminski, Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment, Minn. L. Rev. (forthcoming 2017), available at SSRN), https://cyber.jotwell.com/could-there-be-free-speech-for-electronic-sheep/.

What is Cyberlaw, or There and Back Again

Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN.

The concept of “cyberspace” has fascinated legal scholars for roughly 20 years, beginning with Usenet, Bulletin Board Systems, the World Wide Web and other public aspects of the Internet. Cyberspace may be defined as the semantic embodiment of the Internet, but to legal scholars the word “cyberspace” itself initially reified the paradox that the Internet simultaneously seemed to be free of law and to constitute law. The explorers of cyberspace were like the advance guard of the United Federation of Planets, boldly exploring open, uncharted territory and domesticating it in the interest of the public good. The result was to be both order (of a sort) without law, to paraphrase and re-purpose Robert Ellickson’s work, and law (of a different sort), to distill Lawrence Lessig’s famous exchange with Judge Frank Easterbrook.1 For the last 20 years, more or less, legal scholars have intermittently pursued the resulting project of defining, exploring, and analyzing cyberlaw, but without really resolving this tension, that is, without really identifying the “there” there. Perhaps the best, most engaged, and certainly most optimistic embrace of that point of view is David Post’s In Search of Jefferson’s Moose.

Less speculative and less adventurous cyberlaw scholars, which is to say, most of them, quickly adapted to the seeming hollowness of their project by aligning themselves with existing literatures on governance, a rich and potentially fruitful field of inquiry derived largely from research and policymaking in the modern regulatory state. That material was made both relevant and useful in the Internet context via the emergence of global regulatory systems that speak to the administration of networks, particularly the Domain Name System and ICANN, the institution that was invented to govern it. The essential question of cyberlaw became, and remains: What is Internet governance, and what do we learn about governance in general from our observations and experiences with Internet governance? As an intervention in that ongoing discussion, Between Coordination and Regulation: Finding the Governance in Internet Governance is an especially welcome and clarifying contribution, all the more so because of its relative brevity.

The lead author is the head of the Humboldt Institute for Internet and Society and a veteran observer of and participant in Internet governance dialogues at ICANN and the World Summit on the Information Society (WSIS). She and two colleagues at the Humboldt Institute have produced a useful review of the relevant Internet governance literature and a new framework for further research and analysis, one that is eclectic in its reference to and reliance on existing material and therefore independent of the influence of any single prior theorist or thinker. The resulting framework is novel yet recognizably derivative of, and continuous with, earlier work in the field. This is not a work primarily of legal scholarship by legal scholars, but properly understood, it should contribute in important ways to sustaining the ongoing project of cyberlaw. Internet governance is conceptualized here in ways that make clear its relevance and utility to questions of governance generally.

The paper introduces its subject with an overview of the definitional problems associated with the term “governance” and especially the phrase “Internet governance.” In phenomenal terms, the concept often refers to combinations of three things: one, rulemaking and enforcement and associated coordinating behaviors that implicate state actors acting in accordance with established political hierarchies; two, formal and informal non-state actors acting in less coordinated or un-coordinated “bottom up” ways, including through the formation and evolution of social norms; and three, technical protocols and interventions that have human actors as their designers but that exercise a sort of independent technical agency in enabling and constraining behaviors.

The authors note that many researchers seeking to define and understand relevant combinations equate “governance” with “regulation,” which leads to the implication that governance, like regulation, should be purposive with respect to its domain and that its goals should be evaluated accordingly. They reject that equation, observing that the experience of Internet institutions and other actors, of both legal and socio-technical character, suggests that such a purposive framing of the phenomenon of governance is unhelpfully underinclusive. A large amount of relevant behavior and consequences cannot be traced in purposive terms or in functional terms to planned interventions.

Also rejected, this time on overinclusiveness grounds, is the idea that governance can and should be equated with coordination among actors in a social space, as such. The authors correctly note that if governance is coordination of actors in social life, then virtually any and every social phenomenon is governance, and the concept loses any distinct analytic potential.

In between these two poles of the spectrum—that governance is regulation, or that governance is coordination—the authors settle on the argument that governance is and should be characterized as “reflexive coordination.” They define this concept as follows:

Critical situations occur when different criteria of evaluation and performance come together and actors start redefining the situation in question. Routines are contested, adapted or displaced through practices of articulation and justification. Understanding governance as reflexive coordination elucidates the heterogeneity of sources and means that drive the emergence of ordering structures. (P. 20.)

This approach preserves the role of heterogeneous assemblages of actors, conventions, technologies, purposes, and accidents, while calling additional attention to moments and instances of conflict and dispute, where “routine coordination fails, when the (implicit) expectations of the actors involved collide and contradictory interests or evaluations become visible.” The authors’ point is that these processes of reflexive coordination are specifically aligned with the concept of Internet governance in particular and with governance in general. The reflexivity in question consists of practices and processes of contestation, conflict, reflection, and resolution that sometimes accompany more ordinary or typical practices and processes of institutional and technical design and activity. Those ordinary or typical practices and processes raise questions of coordination and/or regulation, broadly conceived, which are appropriately directed to the Internet, but not under the governance rubric.

The authors acknowledge their debt to a variety of social science research approaches, including those of Bruno Latour, John Law, Elinor Ostrom, Douglass North, and Oliver Williamson, and to American scholars of law and public policy, notably Michael Froomkin, Milton Mueller, Joel Reidenberg, and Lawrence Lessig, but without resting their case specifically on any one of them or on any particular work. As a student of the subject, I was struck not by the identities of the researchers whose work is cited, but rather by the conceptual affinity between the authors’ concept of “reflexive coordination” and an uncited concept. Recently, in a parallel literature on the anthropology (and, dare I say, governance) of open source computer software, Christopher Kelty, now a researcher at UCLA, coined the phrase “recursive public” to describe the attributes of an open source software development collective.2 Kelty writes:

A recursive public is a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives. Free Software is one instance of this concept, both as it has emerged in the recent past and as it undergoes transformation and differentiation in the near future.…In any public there inevitably arises a moment when the question of how things are said, who controls the means of communication, or whether each and everyone is being properly heard becomes an issue.… Such publics are not inherently modifiable, but are made so—and maintained—through the practices of participants.3

The extended quotation is offered to suggest that processes of reflexive coordination already resonate in governance domains beyond those associated with the Internet itself. To the extent that reflexive coordination needs affirmation as a generalized model of governance, Kelty’s research on recursive publics offers useful evidence that the model generalizes. Open source software development collectives seem to fit the model of governance quite readily, despite the fact that the concepts of “reflexive coordination” and the “recursive public” arise in different intellectual traditions and for different purposes. The challenges of understanding and practicing Internet governance speak to the challenges of understanding and practicing governance generally. “Between Coordination and Regulation: Finding the Governance in Internet Governance” offers a helpful and important step forward in that broader project.

  1. See Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 201; Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).
  2. Christopher M. Kelty, Two Bits: The Cultural Significance of Free Software (2008).
  3. Id. at 3.
Cite as: Michael Madison, What is Cyberlaw, or There and Back Again, JOTWELL (December 9, 2016) (reviewing Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN), https://cyber.jotwell.com/what-is-cyberlaw-or-there-and-back-again/.

Automatic – for the People?

Andrea Roth, Trial by Machine, 104 Georgetown Law Journal 1245 (2016).

Crucial decision-making functions are constantly migrating from humans to machines. The criminal justice system is no exception. In a recent insightful, eloquent, and rich article, Professor Andrea Roth addresses the growing use of machines and automated processes in this specific context, critiquing the ways these processes are currently implemented. The article concludes by stating that humans and machines must work in concert to achieve ideal outcomes.

Roth’s discussion is built upon a rich historical timeline. The article brings together measures old and new, moving from the polygraph to camera footage, to impairment-detection mechanisms such as Breathalyzers, and to DNA typing, and concluding with the AI recommendation systems of the present and future. The article provides an overall theoretical and doctrinal discussion and demonstrates how these issues evolved. Yet it also shows that as time moves forward, the problems often remain the same.

The article’s main analytical contribution is its two central factual assertions. First, machines and mechanisms are introduced unequally, as a way to strengthen the prosecution and not to exonerate; in other words, defendants are given no similar opportunities to apply these tools to enhance their cases. Second, machines and automated processes are inherently flawed. This double analytic move might bring a famous “Annie Hall” joke to mind: “The food at this place is really terrible . . . and such small portions.”

The article’s first innovative and important claim—regarding the pro-prosecution bias of decisions made via machine—is convincing and powerful. Roth carefully works through technological test cases to show how the state uses automated and mechanical measures to limit “false negatives”—instances in which criminals eventually walk free. Yet when the defense suggests using the same measures to limit “false positives”—the risk that the innocent are convicted—the state pushes back and argues that machines and automated processes are problematic. Legislators and courts would be wise to act upon this critique and consider balancing the usage of automated measures.

Roth’s second argument—automation’s inherent flaws—constitutes an important contribution to a growing literature pointing out the problems of automated processes. The article explains that such processes are often riddled with random errors which are difficult to locate. Furthermore, they are susceptible to manipulation by the machine operators. Roth demonstrates in several contexts how subjective assumptions can be and are buried in code, inaccessible to relevant litigants. Thus, the so-called “objective” automated process in fact introduces the unchecked subjective biases of the system’s programmers. Roth further notes that the influence of these biased processes is substantial. Even in instances in which the automated processes are intended merely to recommend an outcome, the humans using them give extensive deference to the automated decision.

The article fairly addresses counter-arguments, noting the virtues of automated processes. Roth explains how automated processes can overcome systematic human error and thus limit false positives in the context of DNA evidence and computer-assisted sentencing. To this I might add that machines allow for replacing decisions made in the periphery of systems with those made by central planners. In many instances, it might be both efficient and fair to prefer the systematic errors made by a central authority to the biases that arise when rules are applied with discretion in the field, subject to the many biases of agents.

In addition, Roth explains that automated processes are problematic, as they compromise dignity, equity, and mercy. Roth’s argument that trial by machine compromises dignity is premised on the fact that applying some of these mechanical and automated measures calls for degrading processes and the invasion of the individual’s property.

This dignity-based argument could have been strengthened by a claim often voiced in Europe: to preserve dignity, a human should be subject to the decisions of a fellow human, especially when there is much at stake. Anything short of that will prove to be an insult to the affected individual’s honor. Europeans provide strong legal protections for dignity which are important to mention—especially given the growing influence of EU law (a dynamic at times referred to as the “Brussels Effect”). Article 22 of the recently introduced General Data Protection Regulation (GDPR) provides that individuals have the right not to be subjected to decisions that are “based solely on automated processing” when these are deemed to have a significant effect. Article 22 allows several exceptions, yet individuals must be granted a right to “obtain human intervention,” along with the ability to contest the automated findings and to examine how the decision was reached (see also Recital 71 of the GDPR). Similar provisions were featured in Article 12(a) and Article 15 of the Data Protection Directive, which the GDPR is set to replace over the next two years, and in older French legislation. To be fair, it is important to note that in some EU Member States these provisions have become dead letters. Their recent inclusion in the GDPR will no doubt revive them. However, the GDPR does not pertain to criminal adjudication.

Roth’s argument regarding equity (or the lack thereof in automated decisions) is premised on the notion that automated processes are unable to exercise moral judgment. Perhaps this is about to change. Scholars are already suggesting the creation of automated tools that will do precisely that. Thus, this might not be a critique of the processes in general, but of the way they are currently implemented—a concern that could be mitigated over time as technology progresses.

The lack of mercy in machine-driven decisions is obviously true. However, the importance of building mercy into our legal systems is debatable. Is the existing system equally merciful to all social segments? One might carefully argue that very often the gift of mercy is yet another privilege of the elites. As I argue elsewhere, automation can remove various benefits the controlling minorities still have—such as the cry for mercy—and this might indeed explain why societies are slow to adopt these measures, given the political power of those who stand to be harmed by their expansion.

To conclude, let’s return to Woody Allen and the “Annie Hall” reference. If, according to Roth, automated processes are problematic, why should we nonetheless complain that the portions are so small, and consider expanding their use to limit “false positives”? Does making both claims make sense? I believe it does. For me and others who are unconvinced that automated processes are indeed problematic (especially given the alternatives), the article both describes a set of problems with automation we must consider and provides an alarming demonstration of the injustices unfolding in implementation. But joining these two arguments should also make sense to those already convinced that machine-driven decisions are highly problematic. This is because it is quite clear that machines and automated processes are here to stay. Therefore, it is important both to identify their weaknesses and improve them (at times by integrating human discretion) and to ensure that the advantages they provide are shared equally throughout society.

Cite as: Tal Zarsky, Automatic – for the People?, JOTWELL (November 8, 2016) (reviewing Andrea Roth, Trial by Machine, 104 Georgetown Law Journal 1245 (2016)), https://cyber.jotwell.com/automatic-for-the-people/.

What is the Path to Freedom Online? It’s Complicated

Yochai Benkler, Degrees of Freedom, Dimensions of Power, Daedalus (2016).

In recent years, the internet has strengthened the ability of state and corporate actors to control the behavior of end users and developers. How can freedom be preserved in this new era? Yochai Benkler’s recent piece, Degrees of Freedom, Dimensions of Power, is a sharp analysis of the processes that led to this development, which offers guidelines for what can be done to preserve the democratic and creative promise of the internet.

For over two decades the internet was synonymous with freedom, promising a democratic alternative to dysfunctional governments and unjust markets. As a “disruptive technology,” it was believed to be capable of dismantling existing powers, displacing established hierarchies, and shifting power from governments and corporations to end users. These high hopes for participatory democracy and new economic structures have been largely displaced by concerns over the rise of online titans (Facebook, Google, Amazon), mass surveillance, and the misuse of power. The power to control distribution and access no longer resides at the end-nodes. Instead it is increasingly held by a small number of state and corporate players. Governments and businesses harvest personal data from social media, search engines, and cloud services, and use it as a powerful tool to enhance their capacities. They also use social media to shape public discourse and govern online crowds. The most vivid illustration of this trend came during the recent coup attempt in Turkey, when President Recep Tayyip Erdoğan used social media to mobilize the people of Turkey to take to the streets and fight against the plotters.

How did we reach this point? Since the 1990s it has been evident that the internet may subvert power. In this article, Benkler explains how power may also shape the internet, and how it creates new points of control.

There are many ways to describe this shift of power. Some versions focus on changes in architecture and the rise of cloud computing and mobile internet. Others emphasize market pressure to optimize efficiency and consumer demands for easy-to-use downloading services.

Benkler draws a multidimensional picture of the forces that destabilized the first generation of the decentralized internet. These include control points offered by the technical architecture, such as proprietary portable devices (iPhone, Kindle), operating systems (iOS, Android), app stores, and mobile networks. The power shift was also driven by business models such as ad-supported platforms and big data, which enable market players to effectively predict and manipulate individual preferences. The rise of proprietary video streaming (Netflix) and of Digital Rights Management (DRM) as a prevailing distribution standard further threatens to marginalize open access to culture. What made the internet free, Benkler argues, is the integrated effect of these various dimensions, and it was change in these dimensions “. . . that underwrite the transformation of the Internet into a more effective platform for the reconcentration of power.”

This multidimensional analysis enhances our understanding of power and demonstrates how it may restrain our freedom. Power, defined by Benkler as “the capacity of an entity to alter the behaviors, beliefs, outcomes or configurations of some other entity,” is neither good nor evil. Therefore, we should not simply seek to dismantle it, but rather to enable online users to resist it. Consequently, efforts to resist power and secure freedom should focus on interventions that disrupt forms of power as they emerge. This is an ongoing process in which “we must continuously diagnose control points as they emerge and devise mechanisms of recreating diversity of constraint and degrees of freedom in the network to work around these forms of reconcentrated power.”

Power is dynamic and manifests itself in many forms. Consequently, the complex system analyzed by Benkler does not offer instant solutions. There may be no simple path towards achieving freedom in the digital era, but plenty can be done to preserve the democratic and creative promise of the internet. Benkler offers several concrete proposals for interventions of this kind, such as facilitating user-owned and commons-based infrastructure that is uncontrolled by the state or the market; universal strong encryption controlled by users; regulation of app stores; or distributed mechanisms for auditing and accountability.

Exploring the different ways in which power is exercised in the online ecosystem may further inform our theory of change. Benkler calls our attention to the virtues of decentralized governance. Decentralized design alone may not secure decentralized power, and may not guarantee freedom. Indeed, if we are concerned about preserving freedom, it is insufficient simply to yearn for decentralization. Yet decentralized design also reflects an ideology. The “original Internet” was not simply a technical system but also a system of values. It assumes that collective action should be exercised through rough consensus and running code. That is why decentralization may still matter for online freedom.

The “original Internet” provided hard evidence that loosely governed distributed collective action could actually work, and that it could foster important emancipatory and creative progress. Indeed, the distributed design was instrumental to the flourishing of innovation and creativity, and to the widening of individuals’ political participation, over the past two decades. The fact that some of the forces that shape the internet have deserted it does not undermine these core values.

Benkler warns that “the values of a genuinely open Internet that diffuses and decentralizes power are often underrepresented where the future of power is designed and implemented.” It does not follow, however, that the virtues of distributed systems should be eliminated. He calls on academics to fill this gap by focusing on the challenges to distributed design, diagnosing control points, and devising tools and policies to secure affordances of freedom in years to come.

Cite as: Niva Elkin-Koren, What is the Path to Freedom Online? It’s Complicated, JOTWELL (October 13, 2016) (reviewing Yochai Benkler, Degrees of Freedom, Dimensions of Power, Daedalus (2016)), https://cyber.jotwell.com/what-is-the-path-to-freedom-online-its-complicated/.

New App City

Nestor M. Davidson & John J. Infranca, The Sharing Economy as an Urban Phenomenon, 34 Yale L. & Pol’y Rev. 215 (2016).

It may seem odd to put this article in the category of “Cyberlaw,” since it is so thoroughly about the embodied nature of new business models usually attributed to the distributed, placeless internet. But that’s precisely the point: the internet has a materiality that is vital to its functioning, and so do specific parts of it. Regulation, too, must contend with the physical basis of online activities. Julie Cohen has often written about the situatedness of the digital self and its construction within a field of other people, institutions, and activities; Davidson and Infranca explore that situatedness by explaining why local government law is an important matter for internet theorists.

Davidson and Infranca’s article thus puts an important emphasis on the materiality of internet-coordinated activities, even if my take is ultimately more pessimistic than that of the authors. They begin by noting that

[u]nlike for earlier generations of disruptive technology, the regulatory response to these new entrants has primarily been at the municipal level. Where AT&T, Microsoft, Google, Amazon and other earlier waves of technological innovation primarily faced federal (and international) regulatory scrutiny, sharing enterprises are being shaped by zoning codes, hotel licensing regimes, taxi medallion requirements, insurance mandates, and similar distinctly local legal issues.

Why? The authors argue that these new services “fundamentally rely for their value proposition on distinctly urban conditions. … [I]t is the very scale, proximity, amenities, and specialization that mark city life that enable sharing economy firms to flourish.” An Uber driver in a rural area doesn’t have the same customer base that could easily take advantage of the extra space in her car, or her house; someone like me who wants to find a Latin tutor for one hour per week is going to have much more luck in the Metro Washington area than in a rural area. Indeed,

the sharing economy is actually thriving … because it recombines assets and people in a decidedly grounded, place-based way. Sharing economy firms have found success by providing innovative solutions to the challenges of life in crowded urban areas. Even the reputation scoring and other trust mechanisms that are so central to sharing economy platforms create value by responding to particular urban conditions of dense, mass anonymity.

Moreover, urban regulations can limit the supply of urban amenities, like taxis and cheap spaces to sleep in during visits, making the need for relief greater than in rural areas. And the new economic entities can improve matching between people who would want to transact if only they knew about each other, a process that gets better at larger scales and thus works best in larger groups of people. The authors’ account of these benefits and their relationship to the affordances of the city is persuasive and readable. Their point about using platform-based reputation to mitigate some of the risks of anonymity while preserving most of its benefits is especially insightful.

There are also, of course, risks associated with these new entities. Davidson and Infranca primarily identify congestion (such as housing shortages allegedly exacerbated by investors’ use of properties for Airbnb guests rather than long-term residents) and “bad” regulatory arbitrage as the risks to which municipal regulation can be an appropriate response. The authors are largely positive about the potential these changes offer for local governments, arguing that “the political economy of the sharing economy is nudging local governments to be more transparent about the goals of public intervention and to justify empirically the link between local regulatory regimes and their intended outcomes.” Thus, Uber, Airbnb, and the like will create not only a new economy, but also “a new urban geography,” and a new regulatory landscape.

It’s a really nice story, in which everyone can win. For example, big data can improve regulatory outcomes: “Given the intersection between the data generated by the sharing economy and the local spaces through which goods and services move, local governments are well situated to tailor regulation in a holistic but still fine-grained manner.” But can local governments actually take advantage of this data? When we look at Uber’s market capitalization and ability to hire national political figures as lobbyists, versus the resources of a city struggling to make regulatory distinctions, can we be sure that Uber will share the data that a city needs? So far, Uber’s release of information has been extremely controlled, except when dissemination is in its own interests, including its interest in deterring criticism. Davidson and Infranca do note Uber’s pushback on local regulations as well as its successful battle with New York City’s mayor. (Pushback might be the nicest term. Intentional lawbreaking might also fit.) The authors also rightly highlight Airbnb’s apparent manipulation of data it released to lawmakers in order to support its claims that there weren’t a lot of investor-owned units in New York. I’m all for regulatory transparency, but it has to be matched with transparency and truth from the regulated.

Consider, in relation to these regulatory struggles, Anil Dash’s point on Twitter that Alton Sterling and Eric Garner, two African-American men who were killed by the police in the course of their on-street sales of consumer goods, were “bending the law to [a] far lesser degree than execs at AirBNB & Uber.” In the same thread, he continued, “The ‘gig economy’ that’s being advocated — who can participate without being endangered?” Whose “regulatory arbitrage” is met with discussion over whether it’s wrongful or brilliant, and whose with bullets? This is a topic also explored in Kate Losse’s The Unbearable Whiteness of Breaking Things. If only certain entrepreneurs can stress and strain local regulation without being met with physical force, then the distributional effects of the sharing economy will be even more tilted in favor of those who already have access to cultural and market capital.

And then there’s the separately harmful but related problem of participating in sharing economy institutions while black. The authors are hopeful that even if the “sharing economy” companies weaken some social ties by encouraging the monetization of ordinary neighborliness, “[t]he platforms that facilitate the pairing of providers and users of sharing-economy services and goods might enable interactions across heterogeneous groups that would not occur in the absence of the platform.” But they don’t explicitly discuss racial discrimination, either structural or individual. They offer one example of a “sharing economy” institution targeting members of the African diaspora for co-working space, but in a world where Trump supporters have their own dating app, it seems to me that the risks of discrimination deserve more attention. Davidson and Infranca briefly note the problem of ADA compliance, but it merits even more attention, especially since avoiding the cost of accessibility is one of the things that enables new sharing economy entrants to avoid cost-spreading and underprice existing services.

Consider these ads for TaskRabbit, which I saw on the DC Metro a few weeks ago, as statements about economic and social class: A white woman in a yoga pose, captioned “Mopping the Floors,” and a white man on a climbing wall, captioned “Hanging Shelves,” with the TaskRabbit slogan “We do chores. You live life,” beneath both. But then when do “we” live our lives? Or are “our” lives appropriately lived doing chores, while “yours” are not? (In reality, I am among the “you” hailed here, even though I don’t do yoga.) And, invisibly, there are the owners of TaskRabbit, who actually don’t do the chores, though they take their cut of the payments. What do you mean, “we”?

To deal with distributional problems, Davidson and Infranca suggest encouraging local co-ops and government provision of sharing services—which might actually justify the name “sharing.” Those suggestions are promising, but not very much like most existing models, except for that venerable institution so rarely invoked in discussions of the “sharing economy,” the public library. Indeed, the authors’ analysis might have been strengthened by further reference to the coordinating and capacity-enhancing roles played by public libraries.

Reluctant or unable to tax in order to fund libraries and other public services, though, many municipalities have instead turned to ticketing the poor for a substantial share of their revenue. Meanwhile, much of the regulatory arbitrage of the sharing economy means that the “platforms” aren’t bearing the costs of inspection, taxes, and the like that are imposed on local operators who aren’t backed by Silicon Valley. One could argue that these phenomena aren’t merely a contrast but are linked and mutually reinforcing. Either way, this is not the kind of separating equilibrium we should be aiming for.

And this leads me to another point: Davidson and Infranca convincingly explain why municipalities would want to, and should, regulate the “sharing economy,” given its likely profound impact on them. But why does that mean that states and the national government wouldn’t want to regulate sharing economy actors, given that cities are pretty substantial parts of most states and of the nation as a whole? Many phenomena that were and are characteristic of urban life prompted federal or state intervention in previous decades, including the Clean Air Act; multiple rounds of federal housing legislation; and the Federal-Aid Highway Act of 1973, which opened highway funding to public transit. Municipalities are currently being left on their own to regulate because many state governments, and an urban-investment-hostile Congress, are repeating President Ford’s famous advice to cities: drop dead! (I’m a fan of Section 230, but I can see where it fits into a narrative in which cities are not left to themselves, but actively precluded from regulating in the interest of their own citizens.)

From all this, one might conclude that the online “sharing economy” is a variant on Eddie Murphy’s classic “White Like Me” sketch: it’s a way for mainly non-African-Americans to get the benefits of urban living without having to deal with a feared urban underclass. Just as white suburbs, historically, often benefited from the amenities of the city without having to pay for them or for the city’s schools, reintermediation through new online entities confers the ability to pick and choose urban interactions, so that “our” connections become ever more granular. Davidson and Infranca reference Jane Jacobs’ classic account of the benefits of the city, but, as they note, many of those benefits came from positive externalities conferred on other people. Many “sharing economy” entrepreneurs are struggling mightily to internalize those benefits for themselves.

Ultimately, the authors provide an important descriptive account that makes the physicality of new online businesses more salient in ways that will assist in any discussion of the appropriate regulatory responses to them. And they offer an optimistic view of the future of municipal governance—one I am more than happy to hope materializes.

(Title courtesy of James Grimmelmann.)

Cite as: Rebecca Tushnet, New App City, JOTWELL (September 13, 2016) (reviewing Nestor M. Davidson & John J. Infranca, The Sharing Economy as an Urban Phenomenon, 34 Yale L. & Pol’y Rev. 215 (2016)), https://cyber.jotwell.com/new-app-city/.

My Favourite Things: The Promise of Regulation by Design

Lachlan Urquhart & Tom Rodden, A Legal Turn in Human Computer Interaction? Towards ‘Regulation by Design’ for the Internet of Things (2016), available at SSRN.

Ten years have passed since the second edition of Lawrence Lessig’s Code; John Perry Barlow’s A Declaration of the Independence of Cyberspace, in turn, came ten years before that. In their working paper A Legal Turn in Human Computer Interaction?, doctoral researcher Lachlan Urquhart (with a background in law) and computing professor Tom Rodden, both based at the University of Nottingham in England, make an avowedly post-Lessig case for greater engagement between the cyberlaw concept of regulation and the field of human-computer interaction (HCI).

Their work is prompted by the growing interest in “privacy by design” (PbD). At first the subject of discussion and recommendation, PbD has taken on more solid form in recent years through legislative changes such as the EU’s new General Data Protection Regulation. The paper’s second prompt is an area where PbD seems particularly promising: the so-called “Internet of Things,” and the emergence of various technologies, often for use in a domestic setting, that prompt a reconsideration of the relationship between privacy and technological development.

Although the authors demonstrate a keen understanding of the “post-regulatory state,” of Zittrain’s approach to generativity, and of Andrew Murray’s important and powerful response to Lessig (that Lessig understates the agency and dynamism of the target of regulation), they clearly wish to push things a little further. This comes in part through an application of Suzanne Bødker’s argument (also of a decade ago!), within HCI, that the incorporation of technologies into everyday, domestic life raises particular challenges – a “third wave,” as she put it. For this study, it means that the systems-theory-led scholarship in cyberlaw may have its limitations, a criticism the authors develop. Emergent approaches in HCI, including Bødker’s third wave, may address these barriers to understanding.

In particular, Urquhart and Rodden contend that two intellectual traditions within HCI are important to turning PbD away from the fate of being limited to law as a resource, and into something that might actually make an appreciable difference to the realisation of privacy rights. These approaches are participatory design and value-led or value-sensitive design. The latter encompasses an interesting argument that specifically legal values could be the subject of greater attention. The former approach is provocative, as the authors draw on the history of participatory design in Scandinavian labour contexts; with the industrial and economic models of first Web 2.0 and now the sharing economy continuing to provoke debate, this might prove a turn too far for some. However, the fact that they situate their argument, and a case study of each HCI approach, within the context of the stronger legal mandates for PbD makes their contentions relevant and capable of application even in the short term.

This is a working paper, and some of the ideas are clearly still being developed. The authors draw upon a wide range of literature about both regulation and HCI, and some of the key contributions come from juxtaposition (e.g. Hildebrandt’s ambient law work set alongside separate and perhaps longer-established scholarship in HCI, which is not particularly well-known even in cyberlaw circles). This may indeed be another and quite different take on Murray’s important question of 2013, on where cyberlaw goes from here. One thing is certain: “code is law” still shapes much of how we write and teach, but the most interesting work seems to go deeper into the code box and the law box – with, as in the case of this fascinating study, surprising and stimulating results.

Cite as: Daithí Mac Síthigh, My Favourite Things: The Promise of Regulation by Design, JOTWELL (July 29, 2016) (reviewing Lachlan Urquhart & Tom Rodden, A Legal Turn in Human Computer Interaction? Towards ‘Regulation by Design’ for the Internet of Things (2016), available at SSRN), https://cyber.jotwell.com/my-favourite-things-the-promise-of-regulation-by-design/.

Police Force

Works mentioned in this review:

Steven M. Bellovin, Matt Blaze, Sandy Clark & Susan Landau, Lawful Hacking
Ahmed Ghappour, Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web
Elizabeth E. Joh, Bait, Mask, and Ruse: Technology and Police Deception
Elizabeth E. Joh & Thomas W. Joo, Sting Victims: Third-Party Harms in Undercover Police Operations
Jonathan Mayer, Constitutional Malware
Brian Owsley, Beware of Government Agents Bearing Trojan Horses
Stephanie K. Pell & Christopher Soghoian, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities

Police carry weapons, and sometimes they use them. When they do, people can die: the unarmed like Walter Scott and Tamir Rice, and bystanders like Akai Gurley and Bettie Jones. Since disarming police is a non-starter in our gun-saturated society, the next-best option is oversight. Laws and departmental policies tell officers when they can and can’t shoot; use-of-force review boards and juries hold officers accountable (or are supposed to) if they shoot without good reason. There are even some weapons police shouldn’t have at all.

Online police carry weapons, too, because preventing and prosecuting new twists on old crimes often requires new investigative tools. The San Bernardino shooters left behind a locked iPhone. Child pornographers gather on hidden websites. Drug deals are done in Bitcoin. Hacker gangs hold hospitals’ computer systems for ransom. Modern law enforcement doesn’t just passively listen in: it breaks security, exploits software vulnerabilities, installs malware, sets up fake cell phone towers, and hacks its way onto all manner of devices and services. These new weapons are dangerous; they need new rules of engagement, oversight, and accountability. The articles discussed in this review help start the conversation about how to guard against police abuse of these new tools.

In one recent case, the FBI seized control of a child pornography website. For two weeks, the FBI operated the website itself, sending a “Network Investigative Technique” — or, to call things by their proper names, a piece of spyware — to the computers of people who visited the website. The spyware then phoned home, giving the FBI the information it needed (IP addresses) to start identifying the users so they could be investigated and prosecuted on child pornography charges.

There’s something troubling about police operation of a spyware-spewing website; that’s something we normally expect from shady grey-market advertisers, not sworn officers of the law. For one thing, it involves pervasive deception. As Elizabeth E. Joh and Thomas W. Joo explain in Sting Victims: Third-Party Harms in Undercover Police Operations, this is hardly a new problem. Police have been using fake names and fake businesses for a long time. Joh and Joo’s article singles out the underappreciated way in which these ruses can harm third parties other than the targets of the investigation. In child abuse cases, for example, the further distribution of images of children being sexually abused “cause[s] new injury to the child’s reputation and emotional well-being.”

Often, the biggest victims of police impersonation are the specific people or entities being impersonated. Joh and Joo give a particularly cogent critique of this law enforcement “identity theft.” The resulting harm to trust is especially serious online, where other indicia of identity are weak to begin with. The Justice Department paid $134,000 to settle a civil case brought by a woman whose name and intimate photographs the DEA had used to set up a fake Facebook account and send a friend request to a fugitive.

Again, deception by police is not new. But in a related essay, Bait, Mask, and Ruse: Technology and Police Deception, Joh nicely explains how “technology has made deceptive policing easier and more pervasive.” A good example, discussed in detail by Stephanie K. Pell and Christopher Soghoian in their article, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities, is the IMSI catcher, or StingRay. These portable electronic devices pretend to be cell phone towers, forcing nearby cellular devices to communicate with them and exposing some metadata in the process. This is a kind of lie, and not necessarily a harmless one. Tricking phones into talking to fake cell towers hinders their communications with real ones, which can raise power consumption and hurt connectivity.

In an investigative context, StingRays are commonly used to locate specific cell phones without the assistance of the phone company, or to obtain a list of all cell phones near the StingRay. Pell and Soghoian convincingly argue that StingRays successfully slipped through holes in the institutional oversight of surveillance technology. On the one hand, law enforcement has at times argued that the differences between StingRays and traditional pen registers meant that they were subject to no statutory restrictions at all; on the other, it has argued that they are sufficiently similar to pen registers that no special disclosure of the fact that a StingRay is to be used is necessary when a boilerplate pen register order is presented to a magistrate. Pell and Soghoian’s argument is not that StingRays are good or bad, but rather that an oversight regime regulating and legitimizing police use of dangerous technologies breaks down if the judges who oversee it cannot count on police candor.

In a broader sense, Joh and Joo and Pell and Soghoian are all concerned about police abuse of trust. Trust is tricky to establish online, but it is also essential to many technologies. This is one reason why so many security experts objected to the FBI’s now-withdrawn request for Apple to use its code signing keys to vouch for a modified and security-weakened custom version of iOS. Compelling the use of private keys in this way makes it harder to rely on digital signatures as a security measure.

The FBI’s drive-by spyware downloads are troubling in yet another way. A coding mistake can easily destroy data rather than merely observe it, and installing one piece of unauthorized software on a computer makes it easier for others to install more. Lawful Hacking, by Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau, thinks through some of these risks, along with more systemic ones. In order to get spyware onto a computer, law enforcement frequently needs to take advantage of an existing unpatched vulnerability in the software on that computer. But when law enforcement pays third parties for information about those vulnerabilities, it helps incentivize the creation of more such information, and the next sale might not be to the FBI. Even if the government finds a vulnerability itself, keeping that vulnerability secret undercuts security for Internet users, because someone else might find and exploit the same vulnerability independently. The estimated $1.3 million that the FBI paid for the exploit it employed in the San Bernardino case — along with the FBI’s insistence on keeping the details secret — sends a powerful signal that the FBI is more interested in breaking into computers than in securing them, and that that is where the money is.

The authors of Lawful Hacking are technologists, and their article is a good illustration of why lawyers need to listen to technologists more. The technical issues — including not just how software works but how the security ecosystem works — are the foundation for the legal and policy issues. Legislating security without understanding the technology is like building a castle on a swamp.

Fortunately, legal scholars who do understand the technical issues — because they are techies themselves or know how to listen to them — are also starting to think through the policy issues. Jonathan Mayer’s Constitutional Malware is a cogent analysis of the Fourth Amendment implications of putting software on people’s computers without their knowledge, let alone their consent. Mayer’s first goal is to refute what he calls the “data-centric” theory of Fourth Amendment searches, under which, so long as government spyware is configured to disclose only unprotected information, it is irrelevant how the software was installed or used. The article then thinks through many of the practicalities involved with using search warrants to regulate spyware, such as anticipatory warrants, particularity, and notice. It concludes with an argument that spyware is sufficiently dangerous that it should be subject to the same kind of “super-warrant” procedural protections as wiretaps. Given that spyware can easily extract the contents of a person’s communications from their devices at any time, the parallel with wiretaps is nearly perfect. Indeed, on any reasonable measure spyware is worse, and police and courts ought to give it closer oversight. To similar effect is former federal magistrate judge Brian Owsley’s Beware of Government Agents Bearing Trojan Horses, which includes a useful discursive survey of cases in which law enforcement has sought judicial approval of spyware.

Unfortunately, oversight of online law enforcement is complicated by the fact that a suspect’s device could often be anywhere in the world. This reality of life online raises problems of jurisdiction: jurisdiction for police to act, and jurisdiction for courts to hold them accountable. Ahmed Ghappour’s Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web points out that when a suspect connects through a proxy-based routing service such as Tor, mapping a device’s location may be nearly impossible. Observing foreigners abroad is one thing; hacking their computers is quite another. Other countries can and do regard such investigations as violations of their sovereignty. Searching Places Unknown offers a best-practices guide for avoiding diplomatic blowback and the risk that police will open themselves up to foreign prosecution. One of the most important suggestions is minimization: Ghappour recommends that investigators proceed in two stages. First, they should attempt to determine the device’s actual IP address and no more; with that information in hand, they can make a better guess at where the device is and a better-informed decision about whether and how to proceed.

This, in the end, is what tainted the evidence in the Tor child pornography investigation. Federal Rule of Criminal Procedure 41 does not give a magistrate judge in Alexandria, Virginia the authority to authorize the search of a computer in Norwood, Massachusetts. This NIT-picky detail in the Federal Rules may not be an issue much longer. The Supreme Court has voted — in the face of substantial objection from tech companies and privacy activists — to approve a revision to Rule 41 giving greater authority to magistrates to issue warrants for “remote access” searches. But since many of these unknown computers will be not just in another district but abroad, the diplomatic issues Ghappour flags would remain relevant even under a revised Rule 41. So would Owsley’s and Mayer’s recommendations for careful oversight.

Reading these articles together highlights the ways in which the problems of online investigations are both very new and very old. The technologies at issue — spyware, cryptographic authentication, onion routing, cellular networks, and encryption — were not designed with much concern for the Federal Rules or law enforcement budgeting processes. Sometimes they bedevil police; sometimes they hand over private data on a silver platter. But the themes are familiar: abuse of trust and positions of authority, the exploitation of existing vulnerabilities and the creation of new ones. Oversight is a crucial part of the solution, but at the moment it is piecemeal and inconsistently applied. The future of policing has already happened. It’s just not evenly distributed.

Cite as: James Grimmelmann, Police Force, JOTWELL (July 4, 2016) (reviewing seven works), http://cyber.jotwell.com/police-force/.