The Journal of Things We Like (Lots)

Money For Your Life: Understanding Modern Privacy

Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017).

The commercial law of privacy has long occupied a relatively marginal place in modern legal scholarship, situated in gaps among doctrinal exposition, critical conceptual elaboration, and economically motivated modeling. Much of the explanation for the omission is surely technological. Until Internet technologies came along in the mid-1990s, it was difficult to turn private information into a “thing” that was both technically and economically worth buying and selling.

Technology and markets have passed the point of no return on that score. Claude Shannon, credited with the insight that all information can be converted into digits, has met Adam Smith. Yet relevant legal scholarship has not quite found its footing. Paying for Privacy and the Personal Data Economy, by Stacy-Ann Elvy, offers a novel way forward. Professor Elvy’s article supplies a nifty, highly concrete, and eminently useful framework for thinking about the commercial law of things that consist of assets derived from consumers’ private information. Commercial law is not merely one of the legally relevant attributes of privacy and privacy practices; privacy can be thought of as a mode of commercial law.

Paying for Privacy lays out its argument in a series of simple steps. It begins with a brief review of the emergence of the now-familiar Internet of Things, network-enabled everyday objects, industrial devices, and related technologies that increasingly permeate and collect data concerning numerous aspects of individuals’ daily lives. That review is pertinent not merely to common claims about the urgency of privacy regulation but also, and more importantly, to the premise that the supply of data-collecting technologies by industry (with accompanying privacy-implicating features) is likely to lead soon to increased demand by consumers for privacy-mediating, privacy-regulating, and privacy-protecting instruments.

The supply/demand metaphor is purposeful, if somewhat speculative, for it leads to a thorough and useful description and taxonomy of instruments currently on offer. Those include “traditional” privacy models involving personal data traded for “free” services (such as Facebook) and “freemium” services (such as LinkedIn) that offer both subscription-based and “free” versions of their services, harvesting money from subscribers (and advertisers and partners) and money and data from the free users. More recent “pay for privacy” (PFP) models come from newer firms offering services in multiple versions. Those include “privacy as a luxury,” in which providers offer added privacy controls for users in exchange for higher payments, and privacy discounts, by which users get cheaper versions of services if they agree to participate in data monitoring and collection. Switching perspectives from the service to the consumer yields a series of models collected as the PDE, or “Personal Data Economy.” Those include the “data insights model,” companies that enable individual consumers to monitor and aggregate private information about themselves, perhaps for their own use and perhaps to monetize by offering to third parties. In the related “data transfer model,” companies broker markets in which consumers voluntarily collect and contribute data about themselves, making it available for transfer (typically, purchase) by third parties.

The taxonomy is only a snapshot of current practices. The field is so dynamic that many of the article’s details will inevitably be superseded, no doubt sooner rather than later. But the taxonomy helpfully reveals the two-sided character of privacy commerce. Rounding out that basic insight, one might add that there are privacy sellers and privacy buyers, privacy borrowers and privacy lenders, privacy principals and privacy agents, privacy capital and privacy debt, privacy currency and privacy assets. There are secondary markets and tertiary markets. As Professor Elvy notes, the list of privacy intermediaries includes privacy ratings firms – firms that play much the same role as the bond ratings firms that participated so enthusiastically (and eventually, so devastatingly) in the subprime mortgage market of the early 2000s.

Having laid out this framework, in the rest of the article Professor Elvy thoughtfully parses the weaknesses of the commercial law of privacy and develops a counterpart set of prescriptions and recommendations for further evaluation and possible implementation. All of this is admirably immediate and concrete.

Her critique is linked model by model to the taxonomy; the review below condenses it in the interest of space. First, not all consumers have equal or fair opportunities to collect and market their private data. To some significant degree, and for reasons that may be beyond their control or influence, those consumers either cannot participate in the wealth-creating dimensions of privacy or, because of social, economic, or cultural vulnerabilities (Professor Elvy highlights children and tenants), are effectively coerced into participating. Second, the article repeats, with helpful added doses of commercial law context, the widespread contract law critique that consumers are presented with vague, illusory, and incomplete “choices” in respect of collection, aggregation, and use of private data. Third and fourth (to combine two categories of critique offered in the article), current market and legal understandings of privacy as commercial law treat privacy primarily as what one might call an “Article 2” asset, that is, in terms of sales of things. Overlooked in this developing commercial market is privacy as what one might call an “Article 9” asset, that is, as a source of security and securitization. The potentially predatory and discriminatory implications of that second character should be obvious to anyone with a passing familiarity with the history of consumer lending, and Professor Elvy hammers on those.

Paying for Privacy concludes with a review of the fragmented legal landscape for addressing these problems and a complementary summary of recommendations for improving the prospects of consumers while preserving valuable aspects of both PFP and PDE models. Professor Elvy nods in the direction of COPPA (the Children’s Online Privacy Protection Act) and the possibility of industry-specific or sector-specific regulation. Most of her energy is directed to clarifying the jurisdiction of the Federal Trade Commission with respect to PDE models to deal with unfair trade practices regarding privacy that do not fit into traditional or accepted models of harm addressable by the FTC. All of this has the air of the technical, but its broader substantive import should not be overlooked. Paying for Privacy serves as a helpful entrée to a newer, broader, and more difficult vision of privacy’s future.

Cite as: Michael Madison, Money For Your Life: Understanding Modern Privacy, JOTWELL (January 8, 2018) (reviewing Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017)).

The Section Formerly Known As Cyber

We’ve moved! The Cyberlaw section of Jotwell is now the Technology Law section. Two trends in legal scholarship since Jotwell’s launch drove the decision. First, the “cyber-” prefix is no longer strongly associated with the broader field of Internet law. Instead, it tends to refer to specific subfields, like cybercrime and cybersecurity. Those are part of our beat, but hardly all of it. Second, scholars and reviewers have expanded their own interests outwards, using similar intellectual tools to study drones, robotics, and other technological topics. Our new name recognizes these shifts. We’re keeping the same URLs, so all the archives and new reviews will still be found at the same addresses. And everything else about the section remains the same, including our hard-working contributors. We look forward to sharing with you many more things we like (lots).

James Grimmelmann
Margot Kaminski
Jotwell Technology Law Section co-editors
A. Michael Froomkin
Jotwell Editor-in-Chief

From Status Update to Social Media Contract

Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. (forthcoming 2017), available at SSRN.

Under current US First Amendment jurisprudence, the government can do very little to regulate speech online. It can penalize fraud and certain other kinds of false or potentially misleading speech; direct true threats; and infringement of intellectual property rights and related speech. But it cannot penalize most harassment, hate speech, falsity, and other speech that does immediate harm. Nor can the government generally bar particular speakers. Last Term, the Supreme Court struck down a provision of state law that tried to prevent convicted sex offenders from participating in “social media” where minors might also be participating.

There are good reasons for most of the limits the courts have imposed on the government’s speech-regulating powers—yet those limits have left a regulatory vacuum into which powerful private entities have stepped to regulate the speech of US social media users, suppressing a lot of speech that the government can’t, and protecting other speech despite their power to suppress it. The limits these intermediaries impose, with some important exceptions, look very similar whether the speech comes from the US or from a country that imposes heavier burdens on intermediaries to control the speech of their users. Klonick’s fascinating paper explores the evolution of speech regulation policies at major social media companies, particularly Twitter and Facebook, along with Alphabet’s (Google’s) YouTube.

Klonick finds “marked similarities to legal or governance systems with the creation of a detailed list of rules, trained human decision-making to apply those rules, and reliance on a system of external influence to update and amend those rules.” One lesson from her story may be the free speech version of ontogeny recapitulating phylogeny: regardless of what the underlying legal structure is, or whether an institution is essentially inventing a structure from scratch, speech regulations pose standard issues of definition (defamation and hate speech are endlessly flexible, not to mention intellectual property infringements), enforcement (who will catch the violators?), and equity/fairness (who will watch the watchmen?).

Klonick’s research also provides important insights on the relative roles of algorithms and human review in detecting and deterring unwanted content. While her article focuses on the guidelines followed by human decision-makers, those fit into a larger context of partially automated screening. Automated screening for child pornography seems to be a relative success story, as she explains. However, as many interested parties have pointed out in response to the Copyright Office’s inquiry on §512’s safe harbors and private content protection mechanisms, even with automated enforcement and “claiming” by putative copyright owners via Content ID, algorithms cannot avoid problems of judgment and equitable treatment, especially when some copyright owners have negotiated special rights to override the DMCA process and keep contested content down, regardless of its fair use status, once Content ID has identified it.

Klonick’s account can also usefully be read alongside Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest. Tufekci covers some aspects of speech policies that are particularly troubling, including the misuse of Facebook’s “real name” policy to suppress activists in countries where using a formal name could potentially be deadly; targeted, state-supported attacks on activists that involve reporting them for “abuse” and hate speech; and content moderation that can be politically ignorant, or worse: “in almost any country with deep internal conflict, the types of people who are most likely to be employed by Facebook are often from one side of the conflict—the side with more power and privileges.” Facebook’s team overseeing Turkish content, for example, is in Dublin, disadvantaging non-English speakers and women (whose families are less likely to be willing to relocate for their jobs). Similarly, Facebook’s response to the real-name problem is to allow use of another name when it’s in common use by the speaker, but that usually requires people to provide documents such as school IDs. As Tufekci points out, documents using an alternate identity are most likely to be available to people in relatively privileged positions in developed countries, thus muting their protest but leaving similar people without such forms of ID exposed.

These details of implementation are far more than trivial. And Tufekci’s warning that governments quickly learn how to use, and misuse, platform mechanisms for their own benefit is a vital one. The extent to which an abuse team can be manipulated will, I expect, soon become a separate challenge for the content policy teams Klonick documents—if they decide to resist that manipulation, which is not guaranteed. Some of these techniques, moreover, resist handling by an abuse team even when identified. When government-backed teams overwhelm social media with trivialities in order to distract from a potentially important political event, as is apparently common in China, what policies and algorithms could identify the pattern, much less sort the wheat from the chaff?

Along with this comparison, Klonick’s piece offers the opportunity to revisit some relatively recent techno-optimists—West Coast code has started to look in places more like outsourced Filipino or Indian area codes, so what does that mean for internet governance? Consider Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age, a witty book whose examples of user-generated activism now seem dated, only seven years later, with the rise of “fake news” disseminated by foreign content farms, GamerGate, and revenge porn. It’s still true that, as Joi Ito wrote, “you should never underestimate the power of peer-to-peer social communication and the bonding force of popular culture. Although so much of what kids are doing online may look trivial and frivolous, what they are doing is building the capacity to connect, to communicate, and ultimately, to mobilize.” Because of this power, a legal system that discourages you from commenting on and remixing the first things you love, in communities who love the same thing you do, also discourages you from commenting on and remixing everything else. But what Klonick’s account makes clear is that discouragement can come from platforms as well as directly from governments, whether because of over-active filters such as Content ID that suppress remixes or because of more directly politicized interventions such as those Tufekci discusses.

Shirky’s book, like many of its era, was relatively silent about the role of government in enacting (or suppressing) the changes promoted by people taking advantage of new technological affordances. Consider one of Shirky’s prominent examples of the power of (women) organizing online: a Facebook group organized to fight back against anti-woman violence perpetrated in the Indian city of Mangalore by the religious fundamentalist group Sri Ram Sene. As Shirky tells it, “[p]articipation in the Pink Chaddi [underwear] campaign demonstrated publicly that a constituency of women were willing to counter Sene and wanted politicians and the police to do the same…. [T]he state of Mangalore arrested Muthali and several key members of Sene … as a way of preventing a repeat of the January attacks.” (Emphasis mine.) The story has a happy ending because actual government, not “governance” structures, intervened. How would the content teams at Facebook react if today’s Indian government decided that similar protests were incitements to violence?

The fact that internet intermediaries have governance aspirations without formal government power (or participatory democracy) also directs our attention to the influences on the use of that power. Klonick states that “platforms moderate content because of a foundation in First Amendment norms, corporate responsibility, and at the core, the economic necessity of creating an environment that reflects the expectations of its users. Thus, platforms are motivated to moderate by both the Good Samaritan purpose of § 230, as well as its concerns for free speech.” But note what drops out of that second sentence—explicit acknowledgement of the profit motive, which becomes both a driver of some speech protections and a reason, or an excuse, for some speech suppression. Pressure from advertisers, for example, led YouTube to crack down on “pro-terrorism” speech on the platform. Klonick also argues that “platforms are economically responsive to the expectations and norms of their users,” which leads them “to both take down content their users don’t want to see and keep up as much content as possible,” including by pushing back against government takedown requests. But this seems to me to equivocate about who the relevant “users” are—after all, if you’re not paying for a service, you’re the product it’s selling, and content that advertisers or large copyright owners don’t want to see may be far more vulnerable than content that individual participants don’t want to see.

One question Klonick’s story raised for me, then, was what a different system might look like. What if platforms were run the way public libraries are? Libraries are the real “sharing” economies, and in the US have resisted government surveillance and content filtering as a matter of mission. Similarly, the Archive of Our Own, with which I am involved, has user-centric rules that don’t need to prioritize the preservation of ad revenue. Although these rules are hotly debated within fandom, because what is welcoming to some users can be exclusionary to others, they are distinctively mission-oriented. (I should also concede that size, too, makes a difference—eventually, a large enough community that includes political content will attract government attention; Twitter hasn’t made a profit, but it has received numerous subpoenas and national security letters.)

Klonick suggests that the key to optimal speech regulation for platforms is some sort of participatory reform, perhaps involving both procedural and substantive protections for individual users. In other words, we need to reinvent the democratic state, embedding the user/citizen in a context that she has some realistic chance to affect, at least if she knows her rights and acts in concert with other users. The obvious problem is the one of transition: how will we get from here to there? Klonick understandably doesn’t take up that question in any detail. Absent the coercive power of real law, backed by guns and taxes, it’s hard for me to imagine the transition to participatory platform governance. Moreover, the same dynamics that brought us Citizens United make it hard to imagine that corporate interests—both platform and advertiser—would accede to any such mandates, likely raising First Amendment objections of their own.

Klonick’s article helps to identify how individual speech online is embedded in structures that guide and constrain speakers; its descriptive account will be very useful to understanding these structures. I worry, however, that understanding won’t be enough to save us. We want to think well of our governors; we don’t want to be living in 1984, or Brave New World. But the development of intermediary speech policies tells us, among other things, that we might end up looking from man to pig, and pig to man, and finding it hard to tell the difference.

Disclosure: Kate Klonick is a former student of mine, though this paper comes from her work years later.

Cite as: Rebecca Tushnet, From Status Update to Social Media Contract, JOTWELL (November 29, 2017) (reviewing Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. (forthcoming 2017), available at SSRN).

Rules for Digital Radicals

In 1971, activist and community organizer Saul Alinsky summarized lessons from a lifetime of organizing in his book, Rules for Radicals: A Pragmatic Primer for Realistic Radicals. Published in what would be the twilight of his life, Rules for Radicals was in many ways a tactical field guide for those seeking to instigate widespread social change. It still influences social movements on both the left and right. And yet, today’s wired world is much different—and more dynamic—than Alinsky’s pre-internet society, which relied largely on centralized forms of mass communication.

Now, both activists and governments operate under a new set of diffuse structures and communication mediums. Twitter, Facebook, and the like alter the terms of engagement for public protest and participatory democracy. And Zeynep Tufekci’s new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, helps us understand precisely how networked communications can amplify social movements, at the same time that it provides important notes of caution. In this way, while written as an accessible scholarly account rather than an operation manual, Tufekci’s book provides rules—or at least guideposts—for digital radicals.

Through detailed analysis of contemporary movements such as Occupy, Black Lives Matter, and the Gezi Park protests, coupled with comparisons to historical movements, such as the Civil Rights movement of the 1950s and 1960s, Tufekci develops a framework for understanding how modern movements can exploit—and be exploited by—digital communication technologies.

What she highlights is that though social media permits movements to galvanize supporters quickly, helping them organize massive public protests in short order, something is lost in terms of internal, deliberative structure that a movement may need in order to survive down the stretch. Tufekci labels the collective bonds and capabilities developed through the constant maintenance of organizational communities “network internalities.” Internal organizational contestation has long-term value.

Tufekci analogizes the work of developing network internalities to the importance of building muscles for long term durability. For example, she compares the March on Washington, which took months to plan and helped create enduring movement capacity through both formal and informal institutions, with the 2013 Gezi Park protest in Turkey. The Gezi Park protests were spawned almost overnight and helped generate a strong protest culture but, unfortunately, did not translate into a sustained political movement (yet).

In other words, while the ability to organize rapidly is no doubt a real asset afforded by digital communication tools, it comes with attendant limitations—organizational structures only start to be developed after the movement’s first big moment, and often too late. Today’s movements may lack the organizational structure for making collective decisions, limiting their ability to make tactical shifts as the protests unfold.

Perhaps even more significantly, quickly organized protests may fail to signal any long-lasting organizational capacity or threat to those in power. For Tufekci, social movements are only as powerful as the capacities that they signal. She identifies three principal, but non-exclusive, capacities that are critical to movements’ success—narrative capacity (the ability to get the public’s attention and tell the movement’s story), disruptive capacity (the ability to interrupt the government’s business as usual), and electoral capacity (the ability to credibly endanger politicians’ electoral prospects).

As to each capacity, if a movement organizes massive numbers of people into a one-day protest (for example, the enormous Women’s March that followed Donald Trump’s inauguration) but that protest does not credibly signal a threat to politicians’ electoral chances, its impact is greatly diluted, permitting the government to ignore, rather than engage and potentially overreact to, the protest. Underscoring Tufekci’s point that participatory tactics are only as impactful as the capacities they signal, Ben Wikler, a leader at MoveOn, recently implored people activated by Republican efforts to unwind the Affordable Care Act NOT to call congresspeople who didn’t represent them. Otherwise, the strength of the signal provided by calls would be weakened and interpreted as demonstrating no real electoral capacity.

In the midst of developing her helpful capacity-signals taxonomy for analyzing movements’ strengths, Tufekci foregrounds that although social media holds great promise in that it enables movements to circumvent traditional forms of media and gain direct attention for their respective causes, new forms of censorship are also being deployed. That is, governments and those in power are not sitting idly by—they too have in many instances embraced social media and used it to discredit mediums used by activists through the spread of fake news and conspiracy theories. Those in power are actively engaged in diminishing the attention movements receive.

But here, though an academic book rather than a practical field guide, Tufekci’s thorough analysis nevertheless might have benefited a bit from the inclusion of—or gesture toward—some tactical solutions, akin to the approach utilized by Alinsky. Tufekci’s lament of misinformation’s role in hampering social movements might have been accompanied by reference to particular suggestions activists could employ to provide their social media posts with credibility. For instance, the Witness organization, which trains activists on how to use video to protect human rights, instructs activists to set the date and time on their cameras and to capture contextualizing details from the scene, both of which verify the authenticity of the images.

But aside from a handful of missed opportunities to make the lessons from her analysis more concrete (which may have been outside the scope of an academic project), Tufekci’s book is a critical contribution for those seeking to understand how to best leverage social media for social change. While lauding movement activists’ integrity and commitment to participatory forms of engagement that involve many, Tufekci also gently nudges today’s activists to consider whether digital technologies can be utilized more efficiently and with longer-lasting effect. The book lives up to its title—highlighting networked activism’s power and, equally if not more importantly, uncovering its weaknesses so that they may be overcome.

Cite as: Scott Skinner-Thompson, Rules for Digital Radicals, JOTWELL (November 14, 2017) (reviewing Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (2017)).

The Answer to the Machine is in the Rule of Law?

Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, U. Toronto L.J. (forthcoming 2017), available at SSRN.

Mireille Hildebrandt’s forthcoming article is a companion piece to her Chorley Lecture of 2015.1 In the earlier piece, she highlights the relationship between the ‘deep structure of modern law’ and the printing press and written text – building on this to make a case concerning constitutional democracy and transparency, both in the world of print and the world of electronic data. In this new paper, the emphasis is on law as computation – as compared with law as information in the earlier lecture.

Machine learning is often discussed as an opportunity for legal practice and adjudication, but what will that mean? Hildebrandt highlights how machine learning in the context of law is primarily a simulation of human reasoning found in written legal text; one needs to identify how law is associated with ‘meaningful information’ rather than information simpliciter. Key concerns with applying machine learning in law include the catch-22 of deskilled lawyers becoming unable to verify a machine’s output, and various ways in which such systems can be opaque.

Hildebrandt hopes that we can ‘speak law to the power of statistics’ and argues that machine learning and related practices and technologies ‘may contribute to better informed legal reasoning – if done well’. There is an interesting and healthy scepticism about the funding of current efforts and what this might mean for the consequences of what may be reported as innovation. Much of this relates, of course, to the driving factors around innovation in the legal profession and the changing ‘law firm’. The work therefore also sits within the body of literature now interrogating algorithmic governance (e.g. Cathy O’Neil’s Weapons of Math Destruction, Frank Pasquale’s The Black Box Society, and, more recently, the question of whether data protection law might provide a remedy for such concerns in Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For).

Provocatively, Hildebrandt wonders whether the result of a certain type of interdisciplinary engagement is that law is simply treated as one kind of regulation, e.g. in the mind of the law-and-economics scholar; this is contrasted with a (perhaps deliberately idealised) lawyer as the ‘dignified steward of individual justice and societal order’. Her response, which may resonate with many legal scholars, is to draw upon Neil MacCormick’s presentation of law as an ‘argumentative’ discipline (MacCormick did, as Hildebrandt does, engage with speech act theory as a means of understanding legal reasoning). The challenge, then, is to identify the way(s) to test and contest emerging forms of decision making, and to ensure that the relevant people are equipped with the skills and/or the nous to ask searching questions and to scrutinise the systems that we are rapidly putting in place.

This draft paper will appear in a much-anticipated issue of the University of Toronto Law Journal. Already, the Canadian journal has contributed to a debate around the legal singularity (of interest even if you think that the legal singularity is about as likely as The Singularity itself), in a special issue on artificial intelligence, big data and the law; the forthcoming issue, based around a March 2017 symposium, includes further contributions on democratic oversight and the future of legal education. Indeed, that question of how future lawyers will be trained is something that Hildebrandt ruminates upon in her article, and it struck a chord with this reviewer (currently working in a legal system where the training of solicitors is about to undergo significant change). If the next generation of lawyers and legal researchers is to be able to take on the socially important challenges outlined by Hildebrandt (especially in countering the arms race between those with the requisite resources and motivations), we may need to think a bit harder about the shape of the law school.

  1. Published as Mireille Hildebrandt, Law as Information in the Era of Data-Driven Agency, 79 The Modern L. Rev. 1 (2016).
Cite as: Daithí Mac Síthigh, The Answer to the Machine is in the Rule of Law?, JOTWELL (October 2, 2017) (reviewing Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, U. Toronto L.J. (forthcoming 2017), available at SSRN).

Democracy Unchained

K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming 2017), available at SSRN.

In the mid-2000s, digital activists spearheaded the net neutrality movement to ensure fair treatment of the customers of Internet Service Providers (ISPs), as well as to protect the companies trying to reach them. Net neutrality rules limit or ban preferential treatment; for example, they might prevent an ISP like Comcast from offering exclusive access to Facebook and its partner sites on a “Free Basics” plan. Such rules have a sad and tortuous history in the US: rebuffed under Bush, long delayed and finally adopted by Obama’s FCC, and now in mortal peril thanks to Donald Trump’s elevation of Ajit Pai to be chairman of the Commission. But net neutrality as a popular principle has had more success, animating mass protests and even comedy shows. It has also given long-suffering cable customers a way of politicizing their personal struggles with haughty monopolies.

But net neutrality activists missed two key opportunities. They often failed to explain how far the neutrality principle should extend, as digital behemoths like Google, Facebook, Apple, Microsoft, and Amazon wielded extraordinary power over key nodes of the net. Some commentators derided calls for “search neutrality” or “app store neutrality”; others saw such measures as logical next steps for a digital New Deal. Moreover, activists did not adequately address key economic arguments. Neoliberal commentators insisted that the US would only see rapid advances in speed and quality of service if ISPs could recoup investment by better monetizing traffic. Progressives argued that “something is better than nothing”; a program like “Free Basics” probably benefits the disadvantaged more than no access at all.

In his Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, K. Sabeel Rahman offers a theoretical framework to address these concerns. He provides a “definition of infrastructural goods and services” and a “toolkit of public utility-inspired regulatory strategies” that together offer a way to “diagnose and respond to new forms of private power in a changing economy,” including powerful internet platforms. He also gives a clear sense of why the public interest in regulating large internet firms should trump investors’ arguments for untrammeled rights to profits—and demands “public options” for those unable to afford access to privately controlled infrastructure.

Law’s treatment of infrastructure has been primarily economic in orientation. For example, Brett Frischmann’s magnum opus, Infrastructure: The Social Value of Shared Resources, offered a sophisticated theory of the spillover benefits of transportation, communication, environmental, and other forms of infrastructure, building on economists’ analyses of topics like externalities and congestion costs. Rahman complements this work by highlighting the political and moral dimensions of infrastructure. The Progressive movement of the early twentieth century did not seek to regulate utilities simply on efficiency grounds. Progressives also worried directly about the power exercised by such firms: their ability to influence politicians, take an outsized share of GDP, and sandbag both rival firms and political opponents. As Rahman explains, “Industries triggered public utility regulation when there was a combination of economies of scale limiting ordinary accountability through market competition, and a moral or social importance that made the industries too vital to be left to the whims of the market or the control of a handful of private actors.”

Identifying the list of “foundational goods and services” meriting direct utility regulation is inevitably a mix of politics, science, and law. Determining, for example, whether broadband internet should be treated in a manner similar to telephone service depends on scientific analysis (e.g., might it soon become easier to provide internet over electric lines to complement existing cable?), political mandates (e.g., voters electing Republicans may be assumed not to prioritize broadband regulation, since party lines on the issue are relatively clear), and legal judgments (e.g., is broadband so similar to wireline service that treating it far differently would defeat the purpose of the relevant statutes?). This delicate balance of the “three cultures” of science, democracy, and law means that the scope of utilities regulation will always be somewhat in flux. While the federal government is, today, chipping away at the category, future administrations may revive and expand it. If so, they will benefit from Rahman’s rigorous definition of infrastructure as “those goods and services which (i) have scale effects in their production or provision suggesting the need for some degree of market or firm concentration; (ii) unlock and enable a wide variety of downstream economic and social activities for those with access to the good or service; and (iii) place users in a position of potential subordination, exploitation, or vulnerability if their access to these goods or services is curtailed.”

Not just the scope but also the content of public utility regulation has evolved over time. As Rahman relates, three broad categories of regulation can provide a “21st century framework for public utility regulation:”

1) [F]irewalling core necessities away from behaviors and practices that might contaminate the basic provision of these goods and services—including through structural limits on the corporate organization and form of firms that provide infrastructural goods;

2) [I]mposing public obligations on infrastructural firms, whether negative obligations to prevent discrimination or unfair disparities in prices, or positive obligations to pro-actively provide equal, affordable, and accessible services to under-served constituencies; and

3) [C]reating public options, state-chartered, cheaper, basic versions of these services that would offer an alternative to exploitative private control in markets otherwise immune to competitive pressures.

These three approaches (“firewalls,” “public obligations,” and “public options”) have all helped increase the accountability of private power in the past (as the work of Robert Lee Hale, whom Rahman credits as an inspiration, has shown). Cable firms cannot charge you a higher rate because they dislike your politics. Nor can they squeeze businesses that they want to purchase, charging higher and higher rates to an acquisition target until it relents. Nor should regulators look kindly on holding companies that would more ruthlessly financialize essential services (or on the horizontal shareholding that functions similarly to such holding companies).

There are many legal scholars working in fields like communications law, banking law, and cyberlaw, who identify the limits of dominant regulatory approaches, but are researching in isolation. Rahman’s article provides a unifying framework for them to learn from one another, and should catalyze important interdisciplinary work. For example, it is well past time for those writing about search engines to explore how principles of net neutrality could translate into robust principles of search neutrality. The European Commission has documented Google’s abuse of its dominant position in shopping services. Subsequent remedial actions should provide many opportunities for the imposition of public obligations (such as commitments to display at least some non-Google-owned properties prominently in contested search engine results pages) and firewalling (which might involve stricter merger review when a megafirm makes yet another acquisition).

Rahman also shows a critical complementarity between competition law and public utility regulation. Antitrust concepts can help policymakers assess when a field has become concentrated enough to merit regulatory attention. Both judgments and settlements arising out of particular cases could inform the work of, say, a future “Federal Search Commission,” which could complement the Federal Communications Commission. The same problem of “bigness” that can allow a megafirm to abuse its platform by squeezing rivals also creates opportunities to abuse users. Just as the Consumer Financial Protection Bureau serves a vital function by concentrating regulatory expertise on consumer finance, a dedicated platform regulator could bring sustained attention to abuses of users that episodic antitrust enforcement leaves unaddressed.

Many large internet platforms are now leveraging a data advantage into profits, and profits into further domination of advertising markets. The dynamic is self-reinforcing: more data means providing better, more targeted services, which in turn attracts a larger customer base, which offers even more opportunities to collect data. Once a critical mass of users is locked in, the dominant platform can chisel away at both consumer and producer surplus. For example, under pressure from investors to decrease its operating losses, Uber has increased its cut from drivers’ earnings and has price discriminated against certain riders based on algorithmic assessments of their ability and willingness to pay. The same model is now undermining Google’s utility (as ads crowd out other information) and Facebook’s privacy policies (which get more egregiously one-sided the more the social network’s domination expands).

Rahman offers us a rigorous way of recognizing such platform power, offering a tour de force distillation of cutting edge social science and critical algorithm studies. Industries ranging from internet advertising to health care could benefit from a public utility-centered approach. This is work that could lead to fundamental reassessments of contemporary regulatory approaches. It is exactly the type of research that state, federal, and international authorities should consult as they try to rein in the power of many massive firms in our increasingly concentrated, winner-take-all economy.

Cite as: Frank Pasquale, Democracy Unchained, JOTWELL (August 17, 2017) (reviewing K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming, 2017), available at SSRN),

Disruptive Platforms

Orly Lobel, The Law of the Platform, 101 Minn. L. Rev. 87 (2016), available at SSRN.

Until recently, the law of the online platform involved intermediary liability for online content and safe harbors like CDA §230 or DMCA §512. The recent rise of online service platforms, a/k/a the “Uberization of everything,” has challenged this model. What Orly Lobel calls the “platform economy”—which includes the delivery of services (see Task Rabbit), the sharing of assets (see Airbnb), and more—has led to new laws, doctrinal adjustments, and big questions. What happens when the internet meets the localized, physical world? Are these platforms newly disruptive, or old issues in new wrapping? And how do we best design regulations for technological change? The Law of the Platform will appeal to those looking for thoughtful discussion of these questions. It will also appeal, more practically, to those searching for an encyclopedic overview of the fast-developing law in this area, from permitting requirements to employment law to zoning.

Lobel argues that the platform economy represents the “third generation of the Internet”: built on online platforms, but affecting offline service markets. Unlike the first generation of the Web, which connected us to information through search engines, or the second generation, which disrupted publishing, news, music, and retail, the third generation is characterized by “transforming the service economy, allowing greater access to offline exchanges for lower prices.” The platforms do not themselves own the physical assets or hire the labor to which they provide access. Instead, they sell access and information—and desperately try to avoid labels like “employer” or “bank” that might lead to regulation. Lobel maps a number of these digital platforms to their physical world counterparts: Airbnb and VRBO to hotels; Parking Panda to parking sites; Uber and Lyft to taxis; and EatWith to restaurants.

Lobel’s take on these platforms is largely positive. She sees the platform economy as lowering transaction costs and leading to “the market… perfecting.” To name just a few of the characteristics Lobel observes: the platform economy creates economies of scale, connecting individuals in huge marketplaces. It reduces waste, and allows the more efficient use of privately owned resources. It allows both supply and demand to be broken down into smaller parts, facilitating smaller exchanges. It allows hyper-customization—you can now rent a “non-smoking, pet-friendly, Kosher, and partially furnished apartment for three nights in a specific neighborhood.” The platform economy reduces intermediation, getting rid of the middleman and thereby lowering costs. And importantly to Lobel, the dynamic ratings that platforms provide can reduce search costs and monitoring costs by providing incentives for good behavior by participants. Coase explained that in practice high transaction costs prevent many transactions from occurring, but according to Lobel, the logic, technology, and networks of trust that new platforms bring to bear can and do enable these previously lost transactions.

Lobel thus appears in many ways to be a platform optimist. There are indications, however, that such optimism might not be warranted. Uber lost $2 billion in 2015 and $2.8 billion in 2016, subsidizing both sides of transactions to hook drivers with bonuses and riders with cheaper rides. A transportation industry analyst estimated in November 2016 that Uber was covering 60 percent of the cost of each ride. The picture painted by these numbers does not suggest a company that is “the concept of supply and demand embodied,” but rather a behemoth using significant venture capital resources to establish market dominance.

This brings us to the second half of Lobel’s article, on regulation. Lobel asks whether new platforms are successful “because they are introducing new business models… or because they seek regulatory avoidance and generate value from such avoidance.” Again, she seems to side with the platforms, characterizing them as both perfecting existing markets (through competition) and creating new ones (through differentiation). VRBO, Airbnb, and Homeaway are not just substitutes for a hotel, but create a differentiated experience of adventuring at private homes. An Airbnb study in California found that fourteen percent of customers would not have visited San Francisco at all but for an Airbnb stay. And because the rentals are cheaper than hotels, people stay longer and spend more in the local economy. Lobel seems largely convinced that these platforms don’t just lower costs in existing markets, but create new markets as well.

But the billion-dollar question (or in Uber’s case, the $68 billion question) is: are these platforms able to create these new markets because of innovation, or are they lowering costs by cleverly bypassing necessary regulatory regimes? What makes the platform economy legally disruptive is that these companies tend not to fit neatly into existing legal categories in regulated areas, like “employer” or “lender” or “bank.” Whether this is because of the law’s failure to keep pace with technological changes or these companies’ deliberate strategies to evade high-cost regulatory compliance through “sharewashing” is debatable. In March 2017, the New York Times disclosed that Uber deliberately tagged and evaded enforcement authorities in Portland, OR; Boston; Paris; Las Vegas; and elsewhere. The DOJ is now investigating. But as Lobel points out, some attempts at regulation, like New York City’s taxicab medallion system, seem clearly geared towards protecting incumbents and keeping new actors out.

The middle third of the article taxonomizes the differences between illegitimately protectionist regulation and legitimate regulatory goals and regimes. Lobel divides platform regulations into three categories: (1) permitting, licensing, and price controls; (2) taxation; and (3) broadly speaking, “regulations that are about fairness, externalities, and normative preferences.” Lobel breezes through the tax issues, explaining that questions of collection are “largely technical” and platform providers should be responsible for tax collection for efficiency reasons. In contrast, Lobel characterizes regulations in the first category—permitting, occupational licensing, and price controls—as largely the result of industry capture, where incumbents extract rent at the expense of consumers and competitors (presumably, she’s not a fan of the bar). She argues that we should more directly regulate towards the goals these systems are designed to get at—safety, professionalism, and other forms of consumer protection—rather than using ex ante systems that favor incumbents.

The hardest cases, Lobel argues, are those that revolve around issues of “public welfare in the platform,” such as governing the characteristics and safety levels of particular neighborhoods (zoning) or protecting workers’ rights (employment laws). Her nuanced analysis of zoning regulations calls for empirical evaluation of the safety impact of short-term housing on residential neighborhoods. Her discussion of employment law makes two important observations: one, that the rise of the contingent workforce is not a feature of platforms alone; and two, that the resulting employment law issues—whether a worker is a covered employee or an independent contractor—also arise in cases having nothing to do with the platform economy (e.g., FedEx in the Ninth Circuit).

In other words, the legal disruption in these areas may have as much to do with the law itself, with older categories that are now breaking down in a number of areas, as with particular disruptive features of the platform economy. Solving these problems requires balancing competing social values, such as fairness with freedom of contract. “The platform provides new opportunities to continue these debates, but it does not transform or transcend these hard choices in any meaningful way.”

The last third of the article ventures into more dangerous territory. Lobel has previously done important work on the relationship between public regulation and private (or public-private) governance. She closes The Law of the Platform by returning to this topic. Where traditional regulation fails, Lobel argues, platforms themselves can, through private “regulation,” ensure consumer trust and a certain degree of consumer protection. Platforms do this by obtaining insurance, by voluntarily running background checks, and through rating and recording systems that track all transactions on a platform. It is this last form of governance that most excites Lobel, and most worries me.

“The confidence generated by state permitting, occupational licensing, and other regulatory requirements is substitutable with crowd confidence,” Lobel claims. Consumer review systems, Lobel proposes, now serve as a type of governance, forcing transparency better than a command-and-control public regulatory regime. “[W]atchdogging is crowdsourced,” she states. Constant data-gathering means prices will stay updated, and bad actors will quickly be uncovered, protecting consumers and ensuring their trust.

Unfortunately, Lobel does not discuss the downsides of ubiquitous data collection, from creating or exacerbating power disparities, to chilling positive behaviors in addition to negative behaviors, to the economic consequences of hacking. She does not address significant governance concerns—over transparency, discrimination, and self-serving behavior—that come from having this data housed in private, not regulatory or public, hands. And she does not discuss the economic or normative costs of business models formed on selling that privately gathered data back to government for a range of purposes, from infrastructure improvement to government surveillance.

The article closes with a general paean to dynamic and experimental governance as a better approach than command-and-control rule-making and enforcement. Experimentation (for example, in different localities) and data-gathering in the name of anti-discrimination policies are all well and good, but again there are costs to a more universal shift to softer enforcement that Lobel does not address here. Companies are often inspired to self-regulate because of a background threat of harsher government enforcement. The risk in a larger move towards soft self-governance over government regulation in the area of technological development is that consumer concerns will take a decided backseat under such a regime.

The Law of the Platform is rich, complicated, and raises many questions. Lobel does romanticize the platform, even as she acknowledges public welfare issues. She also romanticizes a lighter regulatory touch in the area of technological development, even while recognizing the legitimacy of a number of consumer concerns. But her discussions throughout of legal disruption and regulatory design make this a piece well worth reading for anyone following changes to technology and the law.

Cite as: Margot Kaminski, Disruptive Platforms, JOTWELL (July 19, 2017) (reviewing Orly Lobel, The Law of the Platform, 101 Minn. L. Rev. 87 (2016), available at SSRN),

Inspecting Big Data’s Warheads

“Welcome to the dark side of Big Data,” growls the last line of the first chapter of Cathy O’Neil’s recent book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. As that sentence (and that subtitle) suggest, this is not a subtle book. O’Neil chronicles harms from the widespread use of machine learning and other big data systems in our society. O’Neil is convinced that something ominous and harmful is afoot, and she lays out a bill of particulars listing dozens of convincing examples.

This is a book that I like (lots) because we need outspoken and authoritative chroniclers of the downsides of big data decisionmaking. It advances a carefully articulated and well-supported argument, delivered with urgency and passion. Readers yearning for a balanced look at both the benefits and the costs of our increasingly automated society, however, should keep searching.

If we built a prototype for a qualified critic of big data, her background would look a lot like O’Neil’s: Harvard math PhD, MIT postdoc, Barnard professor, hedge fund quant during the financial crisis, start-up data scientist. Throw in blogger and Occupy organizer for good measure, and you cannot quibble with the credentials. O’Neil is an author who knows what she is talking about, who also happens to be a writer of compelling, clear prose, an evidently skilled interviewer, and a great speaker.

Perhaps most importantly, the book provides legal scholars with a concise and salient label—weapons of math destruction, or WMDs—to describe decisionmaking algorithms possessing three features: opacity, scale, and harm. This label and three-factor test can help us identify and call out particularly worrisome forms of automated decisionmaking.

For example, she seems to worry most—and have the most to say—about so-called “value added modeling” systems for assessing the effectiveness of teachers in public schools. Reformers such as Michelle Rhee, former Chancellor of the DC public schools, spurred by policies such as No Child Left Behind, embraced a data-centric model, which selected which teachers to fire based heavily on the test scores of their students. The affected teachers had little visibility into the magic formulae that decided their fate (opacity); these tests affected thousands of teachers around the country (scale); and good teachers were released from important jobs they loved, depriving their students of their talents (harm). When opacity, scale, and harm align in an algorithmic decisionmaking system, software can worsen inequality and ruin lives.

Building on these factors, O’Neil returns repeatedly to the important role of feedback in exacerbating (and sometimes blunting) the harm of WMDs. If we use the test results of students to identify topics they are not learning, to change what or how we are teaching, this is a positive and virtuous feedback loop, not a WMD. But when we decide to fire the bottom five percent of teachers based on those same scores, we are assuming the validity and accuracy of the test, making it impossible to use feedback to test the strength of those assumptions. This critical role of feedback is one of the book’s key insights.

The book brims with other examples of WMDs, devoting considerable attention to criminal recidivism scoring systems, employment screening programs, predictive policing algorithms, and even the U.S. News college ranking formula. O’Neil spends entire chapters covering big data systems that stand in our way of getting a job, succeeding at work, buying insurance, and securing credit.

Legal scholars who write about automated decisionmaking or artificial intelligence may be surprised to see this book reviewed in these pages. O’Neil’s book is long on description, with very little attention paid to policy solutions. A book of deep legal scholarship, this is not. As capably as she writes about math and algorithms, O’Neil falters—and I’m guessing she would cop to this—when it comes to law and regulation, mixing equal parts overly optimistic sentiments about laws like FCRA, vague descriptions of the prospect of Constitutional challenges to data practices, and unrealistic calls for new legislation.

Despite these extra-disciplinary shortcomings, this book should be read by legal scholars, who are not likely to already know all the stories in this book and who will find many compelling (if chilling) examples to cite. As one who does not focus on education policy, for example, I was struck by the detailed and personal stories of teachers fired because of the whims of value-added modeling. And even for the old stories I had heard before, I was struck by how well O’Neil tells them, distilling complicated mathematical concepts into easy-to-digest descriptions and using metaphor and analogy with great skill. I will never again think of a model without thinking of O’Neil’s lovely example of the model she uses to select what to cook for dinner for her children.

The book is in parts intemperate. But we live in intemperate times, and the problems with big data call for an intemperate call-to-arms. A more measured book, one which tried to mete out praise and criticism for big data in equal measure, would not have served the same purpose. This book is a counterpoint to the ceaseless big data triumphalism trumpeted by powerful partisans, from Google to the NSA to the U.S. Chamber, who view unfettered and unexamined algorithmic decisionmaking as their entitlement and who view criticism of big data’s promise as an existential threat. It responds as well to big data’s academic cheerleaders, who spread the word about the unalloyed wonderful potential for big data to drive innovation, grow the economy, and save the world. A milquetoast response would have been drowned out by these cheery tales, or worse, co-opted by them.

“See,” big data’s apologists would have exclaimed, “even Cathy O’Neil agrees about big data’s important benefits.” O’Neil is too smart to have written a book that could have been co-opted in this way. “Big Data has plenty of evangelists, but I’m not one of them,” O’Neil proudly proclaims. Neither am I, and I’m glad that we have a thinker and writer like O’Neil shining a light on some of the worst examples of the technological futures we are building.

Cite as: Paul Ohm, Inspecting Big Data’s Warheads, JOTWELL (June 20, 2017) (reviewing Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 2016),

Starting with Consent

James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN.

The Computer Fraud and Abuse Act (“CFAA”), enacted in 1986, has long been a source of consternation for jurists and legal scholars alike. A statute marred by long-standing circuit splits over basic terminology and definitions, the CFAA has strained under the weight of technological evolution. Despite thousands of pages of law review ink spilt on attempting to theoretically resuscitate this necessary but flawed statute, the CFAA increasingly appears to be broken. Something more than a minor Congressional correction is required.

In particular, the central term of the statute—authorization—is not statutorily defined. As the CFAA has morphed through amendments to encompass not only criminal but also civil conduct, the meaning of “authorized access” has become progressively more slippery and difficult to anticipate. Legal scholarship has long voiced concerns over the CFAA, including whether certain provisions are void for vagueness,1 create opportunities for abuse of prosecutorial discretion,2 and give rise to unintended negative impacts on employee mobility and innovation.3

Enter James Grimmelmann’s Consenting to Computer Use. In this work, Grimmelmann offers us a clean slate as an important and useful starting point for the next generation of the CFAA conversation. He returns us to a first-principles analysis with respect to computer intrusion, focusing on the fundamental question of consent.

Grimmelmann urges us to take a step back and hit reset on the scholarly CFAA conversation. In lieu of tortured attempts to find Congressional meaning for “authorization” in legislative history, or misguidedly trying to shoe-horn computer intrusion into last-generation (criminal or civil) trespass regimes, Grimmelmann leads us through an intuitively resonant inquiry around consent. As Grimmelmann succinctly puts it, “[q]uestions of the form, ‘Does the CFAA prohibit or allow X?’ are posed at the wrong level of abstraction. The issue is not whether X is allowed, but whether X is allowed by the computer’s owner.” (P. 1501.)

An inquiry into implicit or explicit consent by a computer’s owner is present in every computer intrusion inquiry, Grimmelmann explains. He reminds us of the importance of the context of the intrusion. Herein lies the primary insight of the paper: the CFAA’s key term requires construction rather than interpretation. In other words, Grimmelmann acknowledges and embraces the suboptimal statutory reality that most other scholars have danced around: the CFAA itself is of little assistance in crafting workable legal analysis for defining computer intrusion and unauthorized access. The starting point for understanding the legal concept of CFAA “authorization” (or lack thereof), Grimmelmann argues, will be found in engaging with the traditional legal concept of consent. He explains that when we begin to rely on consent as the baseline of future CFAA inquiry, courts can then engage with crafting rules in light of the overall goals of the CFAA and the facts of specific cases.

The CFAA context is challenging, and Grimmelmann acknowledges key differences between technological contexts and more traditional ones. Grimmelmann explains that software is automated and plastic—meaning that consent to access is necessarily prospective, and that software can function in unforeseeable ways. These features (bugs?) have added to the complexity of the computer intrusion inquiry. However, when a legal paradigm is constructed around consent, Grimmelmann argues, these elements of automation and plasticity become less dispositive. Providing the example of a compromised vending machine, he explains that it makes no difference whether an intruder tricked the machine by exploiting a hole in the machine’s logic or whether the intruder punched a hole in its side. The issue is the compromise and the lack of consent.

Grimmelmann distinguishes factual consent from legal consent, relying on theoretical work from Peter Westen. As Grimmelmann explains the distinction, “factual consent is a function of both code and words; of how a computer is programmed and of its owner’s expressions, such as oral instructions, terms of service, and employee handbooks.” (P. 1511.) Meanwhile, legal consent is based on factual consent, but can depart from it if a jurisdiction believes “that factual consent is not sufficient to constitute legal consent” or that it is not necessary based on the totality of the circumstances, including whether implicit consent may have been granted. (P. 1512.) Grimmelmann cautions that different types of CFAA cases will necessitate a distinction between factual and legal consent. In other words, “without authorization” for purposes of the CFAA can refer to multiple possible types of conduct because legally sufficient consent has always been constructed by courts across various areas of law and various fact patterns.

With this excellent article, Grimmelmann has set the stage for a new line of CFAA scholarship, one that is better-connected to traditional legal first principles. As technological evolution continues to strain the overall framework of the CFAA, this work opens the door to a more aggressive re-evaluation of the statute in technological context and offers us a possible way forward.

Editor’s Note: James Grimmelmann took no part in the selection or editing of this review.

  1. Orin S. Kerr, Vagueness Challenges to the Computer Fraud and Abuse Act, 94 Minn. L. Rev. 1561 (2010).
  2. Note, The Vagaries of Vagueness: Rethinking the CFAA as a Problem of Private Nondelegation, 127 Harv. L. Rev. 751, 772 (2013) (“To whatever extent prosecutorial discretion might provide some redeeming amount of government participation in the criminal context, such participation is absent in civil cases between private parties.”).
  3. Andrea M. Matwyshyn, The Law of the Zebra, 28 Berkeley Tech. L.J. 155 (2013).
Cite as: Andrea Matwyshyn, Starting with Consent, JOTWELL (May 19, 2017) (reviewing James Grimmelmann, Consenting to Computer Use, 84 Geo. Wash. L. Rev. 1500 (2016), available at SSRN).

Make America Troll Again

There is a theory that Donald Trump does not exist, and that the fictional character of “Donald Trump” was invented by Internet trolls in 2010 to make fun of American politics. At first “Trump” himself was the joke: a grotesque egomaniac with orange skin, a debilitating fear of stairs, and a tenuous grasp on reality. He was a rage face in human form. But then his creators realized that there was something even funnier than “Trump’s” vein-popping, bile-specked tirades against bad hombres and nasty women: the panicked and outraged denunciations he inspired from self-serious defenders of the status quo. “Trump’s” election was the greatest triumph of trolling in human history. It has reduced politics, news, and culture to a non-stop, deplorably epic reaction video.

There is no entry for “Donald Trump” in the index of Whitney Phillips’s 2015 book, This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. But this playful, perceptive, and unsettling monograph is an outstanding guidebook to the post-Trump hellscape online trolling has made for us. Or perhaps I should say to the hellscape we have made for ourselves, because Phillips’s thesis is that trolling is inherently bound up with the audiences and antagonists who can’t stop feeding the trolls. Much like Trump, trolls “are born of and fueled by the mainstream world.” (Pp. 168-69.)

This Is Why We Can’t Have Nice Things is first and foremost an act of ethnography. Phillips embedded herself in online trolling communities, interviewing participants and following them as their targets and methods evolved over the years. The book strikes an especially good balance: close enough to have real empathy for its subjects’ motivations and worldview, but not so close as to lose critical perspective. It also displays an exceptionally good sense of context: the reporting is grounded in specific trolling communities, but Phillips is careful about situating those communities within larger cultural trends, online and off.

There are many kinds of trolls: patent trolls who file suits without warning, commentator trolls who make provocative arguments with a straight face. Phillips focuses on what she calls “subcultural trolls,” who self-identify as part of a community of trolls, set apart from the mainstream, engaged in the anonymous (or pseudonymous) exploitation of others for the lulz. Think /b/ on 4chan, think Anonymous, think AutoAdmit, think alt-right.

Phillips defines “lulz” (a corruption of “LOL” with a sharper edge) as “amusement at other people’s distress.” (P. 27.) A classic example is “RIP trolling”: going to social media memorial pages and leaving messages to shock, confuse, and anger grieving families. Phillips argues that lulz are characterized by fetishism, generativity, and magnetism. “Fetishism” is used in a quasi-Marxist sense of dissociation: RIP trolling, for example, involves an act of emotional detachment that cuts away the actual human tragedy and focuses on extracting humor from arbitrary details, like a victim’s lost iPod. “Generativity” refers to the same kind of playful remixing, repurposing, and world-building that online fanfic communities engage in. And “magnetism” captures the memetic qualities of lulz: they draw attention in and allow a trolling community to cohere around iterated themes and phrases.

The heart of the book (Part II), with examples drawn roughly from 2008 to 2011, is a sustained argument against being too quick to treat trolls as the Other. Trolls take expert advantage of mainstream media attention. Their tactics are often straight out of the corporate PR playbook and its even more unsavory cousins, and their cultural postures are funhouse-mirror reflections of attitudes that are prevalent in mainstream culture. (Breitbart, in other words, is a professionalized political trolling operation—or perhaps it would be more accurate to say that it is a news organization genetically enhanced with troll DNA.) “[T]rolls and sensationalist corporate media outlets are in fact locked in a cybernetic feedback loop predicated on spectacle,” Phillips writes. (P. 52.)

Trolls thrive on mainstream media attention in two related ways. One is the classic hoax, updated for the Internet age. Some trolls are masters at feeding the mainstream media false stories (fake news!). Multiple local TV stations fell for troll-supplied stories about a supposed crisis sweeping the United States: teenagers huffing jenkem (a fermented mixture of feces and urine). The other is that trolls are skilled at turning attention into a game only they can win. Resistance is futile; one cannot argue with a sea lion or reason with the Joker. In this, Phillips argues, trolls channel Schopenhauer. The point is to win the argument by any means necessary, right or wrong. (If the technique sounds familiar, it may be because you’ve seen it coming from the talking heads on Fox News or from behind the podium at the White House Press Briefing Room.)

Aspects of trolling are rooted in widely shared mainstream attitudes. It draws heavily on a muscular strain of free speech libertarianism that shields even the most offensive speech. If you don’t like what I’m saying, it’s your own damn fault for listening, or for being bothered by it. If you don’t want your feelings to be hurt, don’t have feelings; if you don’t like death threats, just kill yourself. Phillips does a nice job tracing trolling’s complicated relationship with race, gender, and sexuality: the same trolls—the same trolling campaign—can enjoy lulz at the expense of vulnerable minorities, privileged white middle-class comfort, conservative intolerance, and liberal pieties. Making racist jokes is both something that many millions of Americans routinely indulge in and something that makes many millions of Americans (not usually the same ones) really angry.

Trolling eats everything, including especially itself, and reduces it all to a pulsing blob of incoherent imagery, held together only by the pleasure of a laugh at the expense of someone who can’t take the joke. Indeed, there is no other joke; trolling is bullying, or dominance politics from which everything but the lulz has been stripped away. Phillips calls it “pure privilege,” and explains that trolls “refuse to treat others as they wish to be treated. Instead, they do what they want, when they want, to whomever they want, with almost perfect impunity.” (P. 26.)

But, to repeat, trolls “aren’t pulling their materials, chosen targets, or impulses from the ether. They are born of and fueled by the mainstream world—its behavioral mores, its corporate institutions, its political structures and leaders—however much the mainstream might rankle at the suggestion.” (Pp. 168-69.)

We have met the troll and it me.

Cite as: James Grimmelmann, Make America Troll Again, JOTWELL (April 21, 2017) (reviewing Whitney Phillips, This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture (2016)).