The Journal of Things We Like (Lots)

The GDPR’s Version of Algorithmic Accountability

Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN.

Scholarship on whether and how to regulate algorithmic decision-making has been proliferating. It addresses how to prevent, or at least mitigate, error, bias, discrimination, and unfairness in algorithmic decisions with significant impacts on individuals. In the United States, this conversation largely takes place in a policy vacuum. There is no federal agency for algorithms. There is no algorithmic due process—no notice and opportunity to be heard—neither for government decisions nor for private companies’. There are—as yet—no required algorithmic impact assessments (though there are some transparency requirements for government use). All we have is a tentative piece of proposed legislation, the FUTURE of AI Act, that would—gasp!—establish a committee to write a report to the Secretary of Commerce.

Europe, however, is a different story. The General Data Protection Regulation (GDPR) went into direct effect on EU Member States on May 25, 2018. It contains a hotly debated provision, Article 22, that may impose a version of due process on algorithmic decisions that have significant effects on individuals. For those looking to understand how the GDPR impacts algorithms, I recommend Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For. Edwards and Veale have written the near-comprehensive guide to how EU data protection law might affect algorithmic quality and accountability, beyond individualized due process. For U.S. scholars writing in this area, this article is a must-read.

Discussions of algorithmic accountability in the GDPR have, apart from this piece, largely been limited to the debate over whether there is an individual “right to an explanation” of an algorithmic decision. Article 22 of the GDPR places restrictions on companies that employ algorithms without human intervention to make decisions with significant effects on individuals. Companies can deploy such algorithmic decision-making only under certain circumstances (when necessary for contract or subject to explicit consent), and even then only if they adopt “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.” These “suitable measures” include “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” They also arguably include a right to obtain an explanation of a particular individualized decision. The debate over this right to an explanation centers on the fact that it appears in a Recital (which, in brief, serves as interpretative guidance), and not in the GDPR’s actual text. The latest interpretative document on the GDPR appears to agree with scholars who argue that a right to an explanation does exist, because it is necessary for individuals to contest algorithmic decisions. This suggests that the right to explanation will be oriented towards individuals, and towards making algorithmic decisions understandable by (or legible to) an individual person.

Edwards and Veale move beyond all of this. They do engage with the debate about the right to an explanation, pointing out both potential loopholes and the limitations of individualized transparency. They helpfully add to the conversation about the kinds of explanations that could be provided: (A) model-centric explanations that disclose, for example, the family of model, input data, performance metrics, and how the model was tested; and (B) subject-centric explanations that disclose, for example, not just counterfactuals (what would I have to do differently to change the decision?) but the characteristics of others similarly classified, and the confidence the system has in a particular individual outcome. But they worry that an individualized right to an explanation would in practice prove to be a “transparency fallacy”—giving a false sense of individual control over complex and far-reaching systems. They then show that the GDPR contains a far broader toolkit for getting at many of the potential problems with algorithmic decision-making. Edwards and Veale observe that the tools of omnibus data protection law—which the U.S. lacks—are tools that can also work in practice to govern algorithms.

First, they point out that the GDPR consists of far more than Article 22 and related transparency rights. This is an important point to make to a U.S. audience, which might otherwise come away from the right to explanation debate believing that in the absence of a right to an explanation, algorithmic decision-making won’t be governed by the GDPR. That conclusion would be wrong. Edwards and Veale point out that the GDPR contains other individual rights—such as the right to erasure, and the right to data portability—that will affect data quality and allow individuals to contest their inclusion in profiling systems, including ones that give rise to algorithmic decision-making. (I was surprised, given concerns over algorithmic error, that they did not also discuss the GDPR’s related right to rectification—the right to correct data held on an individual—which has been included in calls for algorithmic due process by U.S. scholars such as Citron & Pasquale and Crawford & Schultz.) These individual rights potentially give individuals control over their data, and provide transparency into profiling systems beyond an overview of how a particular decision was reached. But there remains the question of whether individuals will invoke these rights.

Edwards and Veale identify that the GDPR goes beyond individual rights to “provide a societal framework for better privacy practices and design.” For example, the GDPR requires something like privacy by design (data protection by design and by default), requiring companies to build data protection principles, such as data minimization and purpose specification, into developing technologies. For high-risk processing, including algorithmic decision-making, the GDPR requires companies to perform (non-public) impact assessments. And the GDPR includes a system for formal co-regulation, nudging companies towards codes of conduct and certification mechanisms. All of these provisions will potentially influence design and best practices in algorithmic decision-making. Edwards and Veale argue that these provisions—aimed at building better systems at the outset, and providing ongoing oversight over systems once deployed—are better suited to governing algorithms than a system of individual rights.

Edwards and Veale are not GDPR apologists. They recognize significant limitations in the law, including the lack of a true class-action mechanism, even where the GDPR contemplates third-party actions by NGOs. They acknowledge that data-protection authorities are often woefully underfunded and understaffed. And, like others, they point out mismatches between the GDPR’s language and current technological and social practices—asking, for example, whether behavioral advertising constitutes an algorithmic “decision.” But they helpfully move the conversation about algorithmic accountability away from the “right to an explanation” and towards the broader regulatory toolkit of the GDPR.

Where the piece falters most is in its almost offhand dismissal of individualized transparency. Some form of transparency will be necessary for the regulatory system that they describe to work—a complex co-regulatory system involving impact assessments, codes of conduct, and self-certification. Without public oversight of some kind, that system may be subject to capture, or at least devoid of important feedback from both civil society and public experts. And, as the ongoing conversation about justifiability shows, both the legitimizing and the dignitary value of individualized decisional transparency cannot be dismissed so lightly.

I wish this piece had a different title. In dismissing the value of an individual right to explanation, the title obscures the valuable work Edwards and Veale do in charting other regulatory approaches in the GDPR. However the right to an explanation debate plays out, they show that unlike in the United States, algorithmic decision-making is in the regulatory crosshairs in the EU.

Cite as: Margot Kaminski, The GDPR’s Version of Algorithmic Accountability, JOTWELL (August 16, 2018) (reviewing Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN).

The Difference Engine: Perpetuating Poverty Through Algorithms

Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).

We have a problem with poverty, which we have converted into a problem with poor people. Policymakers tout technology as a way to make social programs more efficient, but they end up encoding the social problems they were designed to solve, thus entrenching poverty and over-policing of the poor. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks uses three core examples—welfare reform software in Indiana, homelessness service unification in Los Angeles, and child abuse prediction in Pennsylvania—and shows that while they vary in how screwed up they are (Indiana terribly, Los Angeles a bit, and Pennsylvania very hard to tell), they all rely on assumptions that leave poor people more exposed to coercive state control. That state control both results from and contributes to the assumption that poor people’s problems are their own fault. The book is a compelling read and a distressing work, mainly because I have little faith that the problems Eubanks so persuasively identifies can be corrected.

Eubanks writes:

Across the country, poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result. Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect their most personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.

As Eubanks points out, the poor are test subjects because they offer “‘low rights environments’ where there are few expectations of political accountability and transparency.” Even those who do not care about poverty should be paying attention, however, because “systems first designed for the poor will eventually be used on everyone.”

Eubanks’ recommendation, even as more punitive measures are being enacted, is for more resources and fewer requirements. Homelessness isn’t a data problem; it’s a carpentry problem, and a universal basic income or universal health insurance would allocate care far better than a gauntlet of automated forms. Eubanks points out that automation, despite its promised efficiencies, has coincided with kicking people off of assistance programs. In 1973, nearly half of people under the poverty line received AFDC (Aid to Families with Dependent Children), but a decade later that was 30 percent (coinciding with the introduction of the computerized Welfare Management System) and now it’s less than 10 percent. Automated management is a tool of plausible deniability, allowing elites to believe that the most worthy of the poor are being taken care of and that the unworthy don’t deserve care, as evidenced by their failure to comply with various requirements to submit information and be subjected to surveillance.

Eubanks begins with the most obvious disaster: Indiana’s expensive contract with IBM to get rid of most caseworkers and automate medical coverage. Thousands of people were wrongly denied coverage, creating trauma for medically vulnerable people even when the denials were ultimately reversed. Indiana’s failure to create a working centralized system led to some backlash. Eubanks quotes people who suggest that the result from the backlash was a hybrid human-computer system, which restored almost enough caseworkers to deal with the people who make noise, but not enough for those who can’t. Of course, human caseworkers have their own problems—accounts of implicit and even explicit racial bias abound—but discrimination is easily ported to statistical models, such that states with higher African-American populations have “tougher rules, more stringent work requirements, and higher sanction rates.” And Indiana’s automated experiment disproportionately drove African Americans off the TANF (Temporary Assistance for Needy Families) rolls, perhaps in part because the system treated any error (including those made by the system itself) as deliberate noncompliance, and many people simply gave up.

The Los Angeles homelessness story is different, but not different enough. It provides a useful contrast of a “progressive” use of data and computerization. The idea was to create “coordinated entry,” so that homeless people who contacted any service provider would be connected with the right resources, sorting between the short-term and long-term homeless, who need different services, some of which can be less than helpful if given to the wrong groups. There’s a lot of good there, including the idea of “housing first”: rather than limiting housing only to those who are sober, employed, etc., the aim is to get people housed because of how hard all those other things are without housing. Eubanks profiles a woman for whom coordinated entry was a godsend.

But Eubanks also identifies two core problems: (1) The system itself is under-resourced; all the coordination in the world won’t help when there are only 10 beds for every 100 people in need of them. (2) The information collected is invasive and contributes to the criminalization and pathologization of poor people. The data are kept with minimal security and no protection against police scrutiny, which is particularly significant because, as Eubanks rephrases Anatole France, “so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes.” Homeless people can rarely pay tickets, and so the unpaid fines turn into warrants (turning into days in jail when they can’t afford bail, even though these kinds of nuisance charges are usually dismissed once in front of a judge). People in the database turn into fugitives.

These two problems reinforce each other. Given the low chance of getting help, people are less willing to explain their circumstances, often stories of escalating misfortune and humiliation, to the representative of the state’s computer. The resource crunch also contributes to workers’ felt imperative to find the most deserving and thus to scrutinize every applicant for appropriate levels of dysfunctionality. Too little trauma, and services might be deemed unnecessary. But too much dysfunctionality can also be disqualifying—the housing authority might determine that a client is incapable of living independently. One group of caseworkers Eubanks discusses “counsel their clients to treat the interview at the housing authority like a court proceeding.” They also see vulnerable clients rejected by landlords; Section 8 vouchers to pay for housing are nice, but still require a willing landlord, and the vouchers expire after six months, meaning that a lot of clients just give up. Meanwhile, “[s]ince 1950, more than 13,000 units of low-income housing have been removed from Skid Row, enough for them all.” It’s also worth noting how much discretion remains with humans, despite the appearance of Olympian objectivity in a housing need score: clients are assessed based on self-reports, and they won’t always tell people they haven’t grown to trust about circumstances bearing on their needs, including trauma.

What really mattered to getting resources devoted to addressing homelessness in Los Angeles, Eubanks argues, was rights, not data. Court rulings found that routine police practices—barring sleeping in public and confiscating and destroying the property of homeless people found in areas where they were considered undesirable—were unconstitutional. Once that happened, tent cities sprang up in places visible to people with money and power. Better data helped in identifying what resources were needed where, but tent cities were the driver of reform.

Finally, the experience of child welfare prediction software in Allegheny County, Pennsylvania, has continuities with and divergences from the other two stories. The software is at the moment used just to back up individual caseworkers’ determinations of whether to further investigate child abuse based on a call to the child welfare hotline, though Eubanks already saw caseworkers tweaking their own estimates of risk to match the model’s, an instance of automation bias that ought to alarm us. Some of the problems were statistical: the number of child deaths and near-deaths in the county is thankfully very low, and you can’t do a good model with a handful of cases a year for a population of 1.23 million.

Setting the base-rate problem aside, you can’t actually measure levels of child abuse. You can measure proxies, such as how many calls to CPS (Child Protective Services) are made and how many children CPS removes from a home. As a result, the automated system ends up predicting “decisions made by the community (which families will be reported to the hotline) and by the agency and the family courts (which children will be removed from their families), not which children will be harmed.” Unfortunately, those proxies are precisely the ones we know are infected with persistent racial and class bias, so that bias is baked into the predictions. This is the same problem explained so well in Cathy O’Neil’s Weapons of Math Destruction, a good book to read along with this one.

In Allegheny County itself, “the great majority of [racial] disproportionality in the county’s child welfare services arises from referral bias, not screening bias.” Sometimes this arises from perceptions of neighborhoods being bad, so the threshold for reporting someone from those neighborhoods is lower—which in the US means minority neighborhoods. But the prediction system “focuses all its predictive power and computational might on call screening, the step it can experimentally control, rather than concentrating on referral, the step where racial disproportionality is actually entering the system.” And it gets worse: the model is evaluated for whether it predicts future referrals. “[T]he activity that introduces the most racial bias into the system is the very way the model defines maltreatment.”

In rural or suburban areas, where witnesses are rarer, no one may call the hotline. Families with enough resources use private services for mental health or addiction treatment and thus don’t create a record available to the state (if they don’t directly talk about child abuse in a way that triggers mandatory reporting). Either way, those disproportionately whiter and wealthier families stay out of the system for conduct that would, if they were visible to the system, increase their risk score. The system can provide very useful services, but those services then become part of the public record, helping define a family as at-risk. A child whose parents were investigated by CPS now has a record of interaction with the system that, when she becomes a mother, will increase her risk score if someone reports her. Likewise, use of public services is coded as a risk factor. A quarter of the predictive variables in the model are “direct measures of poverty”—TANF, SSI (Supplemental Security Income), SNAP (Supplemental Nutrition Assistance Program), and county medical assistance. Another quarter of the predictive variables measure “interaction with juvenile probation” and the child welfare agency itself, when “professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting” than poorer families. Nuisance calls by people with grudges are also a real problem.

Even if that didn’t bother you, consider this: of 15,000 abuse reports in 2016, at its current rate of (proxy-defined) accuracy, the system would produce 3,600 incorrect predictions. And the planned model is supposed to be “run on a daily or weekly basis on all babies born in Allegheny County.” This is a big step forward not just in extending the tech to everyone, but also in commitment to prediction. Prediction is about guessing how poor people might behave in the future based on data from their networks, not just about judging their past individual behavior, and thus it can infect entire communities and generations. At the same time, “digital poorhouses,” as Eubanks calls the networks into which data about poor people are fed, are hard to see and hard to understand, making them harder to organize against.

Eubanks also points out that parents can naturally resent outside scrutiny and often feel that once the child welfare system is involved the standards keep getting raised on them, no matter what they try to do. And caseworkers interpret resistance and resentment as danger signs. While these reactions aren’t directly dependent on the technology, they are human behaviors that change what the technology does in the world.

In theory, big data could increase transparency and decrease discrimination where that comes from the humans in the system. Unfortunately, that doesn’t seem to be what’s happening. Among other things, the purported “transparency” of algorithms, even putting trade secrets aside, is very much a transparency for the elite who can figure the code out, not for ordinary participants in democratic governance, who basically have to take experts’ explanations on faith.

In addition, Eubanks finds:

the philosophy that sees human beings as unknowable black boxes and machines as transparent…deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try to understand each other. Poor and working-class people in Allegheny County want and deserve more: a recognition of their humanity, an understanding of their context, and the potential for connection and community.

This sounds great, but I wonder if it is fully convincing, in the fallen world in which we live. On the other hand, given that there are other interventions that wouldn’t sort the “worthy” from the “unworthy” in the ways that current underfunded services are forced to do, it is certainly persuasive to argue that we shouldn’t try to move from biased caseworkers to biased algorithms.

Along with non-technical solutions, Eubanks offers some ethics for designers, focusing on whether the tools they make increase the self-determination and agency capabilities of the poor, and whether they’d be tolerated if targeted at the non-poor. I think she’s overly optimistic about the latter criterion, at least as applied to private corporate targeting, which we barely resist. The example of TSA airport screening is also depressing. Perhaps I’d suggest the modification that, if we expect wealthier people to buy their way out of the system, as they can with TSA Pre-check and CLEAR Global Entry (at least if they’re not Muslim), then there is a problem with the system. Informed consent and designing with histories of oppression in mind, rather than assuming that equity and good intentions are the default baselines, are central to her vision of good technological design.

Like the far more caustic Evgeny Morozov, Eubanks contends that we have turned to technology to solve human problems in ways that are both corrupting and self-defeating. And Eubanks doesn’t focus the blame on Silicon Valley. The call for automation is coming from inside the polity. In fact, while IBM comes in for substantial criticism for overpromising in the Indiana example, the real drivers in Eubanks’ story are the policy wonks who are either trying to shrink the system until it can be drowned in the bathtub (Indiana), or sincerely trying to build something helpful while resources are continually being drained from the system (Los Angeles and Pennsylvania).

Ultimately, Eubanks argues, the problem is that we’re in denial about poverty, an experience that will happen to the majority of Americans for at least a year between the ages of 20 and 65, while two-thirds of us will use a means-tested public benefit such as TANF, SNAP, Medicaid, or SSI. But we persist in pretending that poverty is “a puzzling aberration that happens only to a tiny minority of pathological people.” We pass a suffering man on the street and fail to ask him if he needs help. We don’t keep our tormented child in an isolated place, as they do in Omelas. Instead of walking away, we walk by—but we don’t meet each other’s eyes as we do so. This denial is expensive in so many ways—morally, monetarily, and even physically, as we build entire highways, suburbs, private schools, and prisons so that richer people don’t have to share in the lives of poorer people. It rots politics: “people who cannot meet each other’s eyes will find it very difficult to collectively govern.” Eubanks asks us to admit that, as Dan Kahan and his colleagues have repeatedly demonstrated in work on cultural cognition, our ideological problems won’t be solved with data, no matter how well formed the algorithm.

Cite as: Rebecca Tushnet, The Difference Engine: Perpetuating Poverty Through Algorithms, JOTWELL (July 18, 2018) (reviewing Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018)).

Power-Lest it Should Be Forgotten

Yik Chan Chin and Changfeng Chen, Internet Governance: Exploration of Power Relationship, 1 Chinese Law eJournal 34 (2018).

There is a relatively new SSRN source I have found to be very useful: the Chinese Law e-Journal, sponsored by the University of Hong Kong Faculty of Law (edited by Fu Hualing and Shitong Qiao, and thus referred to as Fu and Qiao, which appropriately might be translated as a “happy or blessed bridging”). This source is very broad with regard to the subjects it covers—many of them relating to Technology Law—and provides valuable insight into how mainly, but not exclusively, Chinese researchers view developments in China and in the world.

Internet Governance: Exploration of Power Relationship, by Yik Chan Chin and Changfeng Chen, is included in this e-Journal, and was presented at the 2017 Giganet Symposium in Geneva in December of that year. That symposium was held back to back (“Day Zero”) with the annual meeting of the Internet Governance Forum (IGF), a United Nations forum that sees itself as perhaps the exemplar of a multistakeholder platform for governance. The paper looks at the reality of Internet governance in China, in search of a mechanism that comes close to the IGF’s multistakeholder model. It provides both a valuable account of the realities of Internet governance in China, and a method for thinking about what constitutes power in blends of multistakeholder and directive governance.

The authors describe in detail the Beijing Internet Association (BIA), a body of more than 100 public and private member entities that acts as an intermediary between government agencies and those entities. The researchers analyzed this association by using social network analysis, questioning the actors in this setting about their interrelations. Their aim is to identify what they call “the significant force in shaping of Internet governance” power in China.

The authors identify power through three methods: (1) by identifying communications structures between the actors (for example: Which actors can communicate directly? Are there nodes that monopolize interactions?); (2) by assessing the capability of actors to act as a broker, i.e., the ability to bring other actors together to act and share information; and finally (3) by “capacity,” defined by the authors as a set of abilities to understand issues and influence interests. Using this methodology, the authors—not surprisingly in the Chinese context—identify the secretariat of the BIA as the decisive seat of power in that Internet governance regime.
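For readers unfamiliar with social network analysis, the authors’ first two indicators correspond to standard network measures: direct communication links (degree) and brokerage (the extent to which an actor sits between otherwise unconnected actors). The following toy sketch illustrates the idea; the actors and ties are entirely hypothetical and are not the authors’ data or code:

```python
from itertools import combinations

# Hypothetical communication network among five actors (illustrative only;
# an edge means two actors communicate directly).
edges = [
    ("Secretariat", "FirmA"),
    ("Secretariat", "FirmB"),
    ("Secretariat", "Agency"),
    ("FirmA", "FirmB"),
    ("Agency", "NGO"),
    ("Secretariat", "NGO"),
]

# Build an adjacency map from the edge list.
neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

# (1) Communication structure: how many actors each node reaches directly.
degree = {actor: len(nbrs) for actor, nbrs in neighbors.items()}

# (2) Brokerage: pairs of an actor's neighbors that are not themselves
# directly connected -- interactions the actor is positioned to mediate.
brokerage = {
    actor: sum(1 for x, y in combinations(nbrs, 2) if y not in neighbors[x])
    for actor, nbrs in neighbors.items()
}

print(degree["Secretariat"], brokerage["Secretariat"])  # prints: 4 4
```

In this made-up network the “Secretariat” node dominates on both measures, which mirrors, in miniature, the kind of centrality finding the authors report for the BIA secretariat. (Their third indicator, “capacity,” is a qualitative assessment and does not reduce to a graph statistic.)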

Nevertheless, they still regard the BIA as a structure that does incorporate multistakeholder interests, even if strongly directed by government and the Party via its secretariat. The BIA, according to their research, builds strongly on social rather than formal or legal binding forces, using coordination rather than directives, indicating a multistakeholder approach. The BIA is perceived as a pragmatic response to the complexities of the Internet, and a result of learning from failures of more directive interventions. The BIA oscillates between being a dissemination and feedback mechanism for government information and directives, and a self-regulatory body with the described elements of multistakeholderism. The authors also point out differences from other multistakeholder concepts, and refer to internal problems of the BIA model, in particular because it seeks to integrate commercially competing interests. They finally discuss the prospects of the BIA developing a more self-coordinating, rather than directive-oriented, governance structure.

The description of the BIA in the paper provides useful information for those not too familiar with the more detailed workings of Internet governance mechanisms in China. Some of the problems the BIA is dealing with sound familiar, even if they may have different political connotations, such as the establishment of an “anti-rumor network,” not unlike attempts in other Internet governance structures to address “Fake News” and political manipulation.

Beyond such detailed insights into the realities of Internet regulation in China, the article achieves three things:
(1) It shows that even in China, inclusive governance mechanisms are used to address the limitations of direct centralized government regulation of complex technical, economic, and social issues, even if these mechanisms leave no doubt where the final decision-making power is situated.
(2) While those mechanisms might be read by some as an indicator of a possible global convergence of Internet governance models, this article invites us to refocus on the role of power in current multistakeholder settings.
(3) The article provides a tool set that can help us in assessing what constitutes “power” in the context of mixed governance.

Cite as: Herbert Burkert, Power-Lest it Should Be Forgotten, JOTWELL (June 19, 2018) (reviewing Yik Chan Chin and Changfeng Chen, Internet Governance: Exploration of Power Relationship, 1 Chinese Law eJournal 34 (2018)).

An Argument for the Coherence of Privacy Law

William McGeveran, Privacy and Data Protection Law (2016).

William McGeveran’s new casebook on Privacy and Data Protection Law announces the death of the “death march” that anyone who has ever taught or taken a course in Information Privacy Law has encountered. The death march is the slog in the second half of the semester through a series of similar-but-not-identical federal sectoral statutory regimes, each given just one day of instruction, such as the Privacy Act, FCRA, HIPAA, Gramm-Leach-Bliley, and FERPA. Professors asked to cover so much substantive law beyond their area of scholarly focus (nobody can focus on all of these) usually resort to choosing only two or three. Even then, the coverage tends to be cursory and unsatisfying.

The death march points to a larger problem: information privacy law doesn’t really exist. At best, privacy law is an assemblage of barely related bits and pieces. The typical privacy course covers constitutional law, a little European Union data protection, a tiny bit of tort, some state law, and the death march of federal statutes. The styles of legal practice covered run the gamut from criminal prosecution and defense, to civil litigation, regulatory practice, corporate governance, and beyond. To justify placing so much in one course, we try futilely to bind together these bits and pieces through broad themes such as harm, social norms, expectations of privacy, and technological change.

My long-held doubt about the coherence of privacy law has led me to teach the course a bit apologetically, feeling like a fraud for pretending to find connections where there are almost none. I’m pleased to report that my belief isn’t universally held: McGeveran’s compelling new casebook is built on the idea that privacy law can be rationalized into a coherent area of practice and pedagogy, one it presents in an organized and tightly woven structure.

I don’t think I’m alone in the belief that privacy law lacks coherence. Daniel Solove, in his magisterial summary of privacy law, Understanding Privacy, argues that rather than give privacy a single, unified definition, the best we can do is identify a Wittgensteinian set of family resemblances of related concerns. Solove’s very good casebook on Information Privacy Law, co-authored with Paul Schwartz, reflects this pragmatic resignation. Their book starts with a long chapter quoting many scholars who cast privacy in different lights and philosophical orientations. Solove and Schwartz don’t do much to try to reconcile these inconsistent voices, suggesting that we ought not try to find any unified theory or consistent coherence in this casebook or this field. Having given up on coherence in chapter one, the rest of the book reads like a series of barely related silos. It’s no wonder that the authors also offer their book sliced into four smaller volumes, which to my mind work better standing on their own.

The other leading, also excellent, casebook, Privacy Law and Society, by Anita Allen and Marc Rotenberg, follows a similar organization, but without the introductory philosophical debate. It too presents privacy law as silos of substance and practice, dividing the field into five broad, but largely disconnected areas: tort, constitutional law, federal statutes, communications privacy, and international law.

McGeveran takes a very different approach. He divides his casebook into three parts, the first two advancing the coherence thesis, both representing refreshingly creative syntheses of privacy law. In Part One, McGeveran provides “Foundations,” devoting a relatively short chapter each to constitutional law, tort law, consumer protection law, and data protection. McGeveran wisely resists the urge to tell any of these four stories at this point in their full depth, delaying parts of each for later in the book. This survey method gives the student a better appreciation for the most important tools in the privacy lawyer’s toolkit; encourages more explicit comparisons between the four categories; and allows for learning through repetition and reinforcement when the topics are revisited later.

The other major innovation is McGeveran’s decision to single out consumer protection law as a distinct area of practice. This builds on work from Solove and Woodrow Hartzog, who have argued that we should treat the jurisprudence of the FTC as a form of common law, and from Danielle Citron, who has pointed to state attorneys general as unheralded great protectors of privacy. McGeveran’s book embraces both arguments, elevating the work of the FTC and state AGs to their due places as primary pillars of U.S. privacy law. This modernizes teaching of the subject, by reflecting what privacy practice has become in the 21st century, with many privacy lawyers advising clients about the FTC far more frequently than they think about tort or constitutional law.

Part Two is even more innovative. It consists of four chapters that follow stages in the “Life Cycle of Data”: “collection”, “processing and use”, “storage and security”, and “disclosures and transfers.” Solove’s influence is again felt here, as these stages echo the major parts of the privacy taxonomy he introduced in Understanding Privacy. Each chapter in Part Two introduces new substantive law, organized around the types of data flows it governs. This prepares students for the issue spotting they will encounter in practice, centering on the data rather than on the artificial boundaries between areas of law. The techie in me appreciates the way this focuses student attention on the broad theme of the impact of technology on privacy.

Because these two parts are so innovative and successful, they serve as the spoonfuls of sugar that help the death march of Part Three go down (although admittedly even this part was still a bit of a slog when I taught from the book this past fall). Students are primed by this point to place statutes like FERPA or HIPAA into the legal framework of Part One and the data lifecycle of Part Two, making them reinforcing examples of the coherent whole rather than disconnected silos. This also reduces the costs (and the guilt) for instructors of cutting sections of the death march. They understand that, thanks to the foundational structures of Part One and Two, their students will be better equipped to encounter, say, educational privacy for the first time on the job.

Finally, as a work of scholarship, not merely pedagogy, McGeveran’s argument for the coherence of privacy law might be an important marker in the evolution of our still relatively young field. Roscoe Pound said that Warren & Brandeis did “nothing less than add a chapter to our law,” a quote well-loved by privacy law scholars. William Prosser has been credited for taking the next step, turning Warren and Brandeis’s concerns into concrete legal doctrine, in the form of the four privacy torts.

This book is positively Prosserian in its aspirations. McGeveran attempts to organize, rationalize, and lend coherence to a messy, incoherent set of fields that we’ve adopted the habit of placing under one label, even if they do not deserve it. I’m not entirely convinced that he has succeeded, that there is something singular and coherent called privacy law, but this book is the best argument for the proposition I have seen. And as a teacher, it is refreshing to leaven my skepticism with this well-designed, compelling new classroom tool.

Cite as: Paul Ohm, An Argument for the Coherence of Privacy Law, JOTWELL (May 22, 2018) (reviewing William McGeveran, Privacy and Data Protection Law (2016)),

Black Box Stigmatic Harms (and how to Stop Them)

Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016).

There is a remarkable body of work on the US government’s burgeoning array of high-tech surveillance programs. As Dana Priest and Bill Arkin revealed in their Top Secret America series, there are hundreds of entities which enjoy access to troves of data on US citizens. Ever since the Snowden revelations, this extraordinary power to collate data points about individuals has caused unease among scholars, civil libertarians, and virtually any citizen with a sense of how badly wrong supposedly data-driven decision-making can go.

In Big Data Blacklisting, Margaret Hu comprehensively demonstrates just how well-founded that suspicion is. She shows the high stakes of governmental classifications: No Work, No Vote, No Fly, and No Citizenship lists are among her examples. Persons blackballed by such lists often have no real recourse—they end up trapped in useless intra-agency appeals under the exhaustion doctrine, or stonewalled from discovering the true foundations of the classification by state secrecy and trade secrecy laws. The result is a Kafkaesque affront to basic principles of transparency and due process.

I teach administrative law, and I plan to bring excerpts of Hu’s article into our due process classes on stigmatic harm (to update lessons from cases like Wisconsin v. Constantineau and Paul v. Davis). What is so evident from Hu’s painstaking work (including her diligent excavation of the origins, methods, and purposes of a mind-boggling alphabet soup of classification programs) is the quaint, even antique, nature of the Supreme Court’s decisionmaking on stigmatic harm. A durable majority on the Court has held that erroneous, government-generated stigma, by itself, is not the type of injury that violates the 5th or 14th Amendment. Only a concrete harm immediately tied to a reputational injury (stigma-plus) raises due process concerns. As Eric Mitnick has observed, “under the stigma-plus standard, the state is free to stigmatize its citizens as potential terrorists, gang members, sex offenders, child abusers, and prostitution patrons, to list just a few, all without triggering due process analysis.” Mitnick catalogs a litany of commentators who characterize this standard as “astonishing,” “puzzling,” “perplexing,” “cavalier,” “wholly startling,” “disturbing,” “odious,” “distressingly fast and loose,” “disingenuous,” “ill-conceived,” an “affront[] [to] common sense,” “muddled and misleading,” “peculiar,” “baroque,” “incoherent,” and my personal favorite, “Iago-like.” Hu shows how high the stakes have become thanks to the Court’s blockage of sensible reform of our procedural due process jurisprudence.

Presented with numerous opportunities to do so, the Court simply refuses to deeply consider the cumulative impact of a labyrinth of government classifications. We need legal change here, Hu persuasively argues, because there are so many problems with the analytical capacities of government agencies (and their contractors), as well as with the underlying data they rely on. Cascading, knock-on effects of mistaken classification can be enormous. In area after area, from domestic law enforcement to anti-terrorism to voting roll review, Hu collects studies from experts that indicate not merely one-off misclassifications, but a deeper problem of recurrent error and bias. The database bureaucracy she critiques could become an unchallengeable monolith of corporate and government power arbitrarily arrayed against innocents, which prevents them from challenging their stigmatization both judicially and politically. When the state can simply use software and half-baked algorithms to knock legitimate voters off the rolls, without notice or due process, the very foundations of its legitimacy are shaken. Similarly, a lack of programmatic transparency and evaluative protocols in many settings makes it difficult to see how the traditional touchstones of the legitimacy of the administrative state could possibly be operative in some of the databases Hu describes.

Many scholars in the field of algorithmic accountability have been focused on procedural due process, aimed at giving classified citizens an opportunity to monitor and correct the data stored about them, and the processes used to analyze that data. Hu is generous in her recognition of the scope and detail of that past work. But with the benefit of her comprehensive, trans-substantive critique of big data blacklisting programs, she comes to the conclusion that extant proposals for reform of such programs may not do nearly enough to restore citizens’ footing, vis-à-vis government, to the level of equality and dignity that ought to prevail in our democracy. Rather, Hu argues that, taken as a whole, the current panoply of big data blacklisting programs offend substantive due process: basic principles that impose duties on government not to treat persons like things.

This is a bold intellectual move that reframes the debate over the surveillance state in an unexpected and clarifying way. Isn’t there something deeply objectionable about the gradual abdication of so many governmental, humanly-judged functions to private sector, algorithmically-processed databases and software—especially when technical complexity is all too often a cloak for careless or reckless action? For someone unfamiliar with the reach, fallibility, and stakes of big data blacklisting, it might seem jarring to contemplate that a pervasive, largely computerized method of classifying citizens might be as objectionable as, say, a law forbidding the teaching of foreign languages, or denying the right to marry to prisoners (other laws found to violate substantive due process). However, Hu has done vital work to develop a comprehensive case against big data blacklisting that makes several of its instantiations seem at least as offensive to constitutional values as those restrictions.

Moreover, when blacklisting itself is so resistant to traditional procedural due process protections (for example, in cases of black box processing), substantive due process claims may be the only way to relieve citizens of burdens it imposes. Democratic processes cannot be expected to protect the discrete, insular minorities targeted unfairly by big data blacklisting. Even worse, these “invisible minorities” may never even be able to figure out exactly what troubling classifications they have been tarred with, impairing their ability to even make a political case for themselves.

Visionary when it was written, Big Data Blacklisting becomes more relevant with each data breach and government overreach in the news. It is agenda-setting work that articulates the problem of government data processing in a new and compelling way. I have rarely read work that so meticulously credits pathbreaking work in the field, while still developing a unique perspective on a cutting edge legal issue. I hope that legal advocacy groups will apply Hu’s ideas in lawsuits against arbitrary government action cloaked in the deceptive raiments of algorithmic precision and data-driven empiricism.

Cite as: Frank Pasquale, Black Box Stigmatic Harms (and how to Stop Them), JOTWELL (April 17, 2018) (reviewing Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016)),

New Kids on the Blockchain

Bitcoin was created in 2009 by a member of a cryptography mailing list who goes under the pseudonym of Satoshi Nakamoto, and whose identity is still a mystery. The project was designed to become a decentralized, open source, cryptographic method of payment that uses a tamper-free, open ledger to store all transactions, also known as the blockchain. In a field that is replete with hype and shady operators, David Gerard’s book Attack of the 50 Foot Blockchain has become one of the most prominent and needed sceptical voices studying the phenomenon. Do not let the amusing title deter you; this is a serious book filled with thorough research that goes through all of the most important aspects of cryptocurrencies, and it is one of the most cited take-downs of the technology.

The book covers a wide range of topics on cryptocurrencies and blockchain, and does so in self-contained chapters that can be read almost independently. The book does not follow a strict chronological order. This structure makes the book all the more readable, a delight from cover to cover, not only because of the interesting subject matter, but also because of Gerard’s wit and knowledge.

The work follows three main themes: explaining Bitcoin and unearthing its various problems; the prevalence of fraudulent practices and unsavoury characters in cryptocurrencies; and explaining blockchains and smart contracts, and their various criticisms.

In the introductory section Gerard does an excellent job of explaining the technology without the usual techno-jargon that surrounds the subject, and goes through the main reasons that proponents advocate the use of Bitcoin. Cryptocurrencies are often offered as a decentralised solution to the excesses of financial institutions and governments. “Be your own bank” is cited as one of the advantages of Bitcoin, but Gerard accurately describes the various problems that this presents. Being your own bank means requiring security fit for a bank, which most people do not have. Moreover, some of Bitcoin’s characteristics make it particularly unsuitable as a means of payment. Bitcoin is based on scarcity; only 21 million coins will ever be mined, so there is a strong incentive to hoard coins rather than spend them. Similarly, cryptocurrency transactions are irreversible; if you lose coins in a hack, or make a transaction mistake, the coins are gone forever.
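The 21 million cap, incidentally, is not an arbitrary marketing number: it follows from Bitcoin’s halving schedule, in which the block reward (initially 50 BTC) is cut in half every 210,000 blocks, with amounts tracked in integer satoshis. A short Python sketch summing that schedule shows issuance converging to just under 21 million coins:

```python
# Bitcoin's supply cap emerges from its halving schedule:
# the block subsidy starts at 50 BTC and halves every 210,000 blocks,
# tracked in integer satoshis (1 BTC = 100,000,000 satoshis).

SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

def total_supply_btc() -> float:
    """Sum the block subsidy over all halvings until it rounds to zero."""
    subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += subsidy * BLOCKS_PER_HALVING
        subsidy //= 2  # integer halving, as in the Bitcoin protocol
    return total / SATOSHIS_PER_BTC

print(total_supply_btc())  # just under 21,000,000
```

Because the halving is an integer operation, the series terminates and the total lands slightly below the 21 million figure that circulates in popular accounts.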

In the chapters dealing with fraud, Gerard does an excellent job of going through the dark side of cryptocurrencies. Cryptocurrencies rely on intermediaries, either exchanges that will accept your “fiat” currency and exchange it into digital currency, or “wallets”, where people can store their coins. The problem is that this unregulated space attracted fraudsters and amateurs in equal measure, and during its short history the space has been filled with Ponzi schemes, con-men, and manipulators. Gerard also describes the use of Bitcoin in the Dark Web, where it is the currency of choice of various illegal businesses.

But it is in its criticism of blockchain technology that the book really shines. Even vocal Bitcoin critics used to think that even if cryptocurrencies failed, the underlying blockchain technology would remain an important contribution to the way in which online transactions are made. Gerard was one of the first to take aim at the blockchain itself.

The blockchain is an immutable and decentralised record of all transactions that requires no trust in an intermediary. This is supposed to prove useful in any situation where a trustless system is required. But as Gerard points out, such situations are rarer than advocates suggest, and in most of the examples they present a blockchain is simply unnecessary. The book describes two main issues with using a blockchain in a business environment. Firstly, decentralisation is always expensive; there is a reason why many companies have been moving towards centralisation of network services by hiring cloud providers. Decentralisation means ensuring that everyone is using the same protocols and compatible systems, and it requires building in redundancy because the services you rely on are not always available. The result is slower and more cumbersome networks that spend more energy to produce a similar result. Secondly, if data management is a problem in your business, then adding a blockchain won’t make the problem go away. On the contrary, Gerard sets out a number of questions that must be asked whenever anyone is thinking of grafting a blockchain onto an existing business model, including whether the technology can scale, and whether a centralised system will work just as well.

Finally, the book analyses smart contracts: contracts conducted digitally through a combination of cryptocurrencies and tokens recorded on a blockchain. The idea is that the parties code the terms and conditions of a deal into an immutable token which defines the parameters of the contract (conditions, payment, operational parameters); those who want to transact then write another token that meets those parameters, at which point payment is made and the electronic contract concluded. This contract is immutable and irrevocable.

Gerard accurately points out that this combination of immutability and irrevocability is toxic in a legal environment, as any error in the code can have serious legal consequences. Traditional contracts rely on human intent, and if a mistake is made or a conflict arises, the parties can go to court. But in a smart contract, the code is the last word, and there is no recourse in case of an error or a conflict other than trying to re-write the blockchain, which is not possible unless a majority of participants in the scheme agree to change the code.
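The point can be made concrete with a deliberately simplified sketch in Python (hypothetical code, not any real smart-contract platform): once the contract’s terms are recorded they cannot be amended, and settlement turns entirely on whether a payment matches the coded terms, whether or not those terms were what the parties intended.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class SmartContract:
    """A toy stand-in for an on-chain contract: terms are fixed at creation."""
    seller: str
    price: int  # in some token's smallest unit

    def settle(self, payment: int) -> bool:
        # The code is the last word: settlement depends only on the coded
        # price, even if that price was entered by mistake.
        if payment != self.price:
            raise ValueError("payment does not satisfy the coded terms")
        return True

contract = SmartContract(seller="alice", price=1_000)  # meant 100? Too bad.
try:
    contract.price = 100  # no amendment: immutability raises an error
except FrozenInstanceError:
    pass  # the original terms stand
done = contract.settle(1_000)  # only the coded terms can conclude the deal
```

Unlike a paper contract, there is no court of appeal inside the system: correcting the “typo” would require rewriting the record itself, which, as Gerard notes, demands the agreement of a majority of participants.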

This book is a must-read for anyone interested in an easy-to-read and enjoyable criticism of cryptocurrencies and the blockchain. It is a testament to the strength of the ideas presented that we are only now starting to see a much-needed check on the blockchain hype from various quarters. Even if cryptocurrencies manage to get past this early stage unscathed, it will be books like this one that help to shift the focus away from the narrative of bubbles and easy gains.

Cite as: Andres Guadamuz, New Kids on the Blockchain, JOTWELL (April 3, 2018) (reviewing David Gerard, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts (2017)),

Governing The New Governors and Their Speech

Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN.

Jack Balkin is one of the leading thinkers and visionaries in the fields of information and cyber law. Every one of his scholarly contributions must be closely read. His recent article, Free Speech in the Algorithmic Society is no exception. It is highly recommended to those interested in fully understanding the current and future tensions between emerging technologies and human rights. The article also provides numerous gems – well-structured statements that eloquently articulate the central challenges of the day, some of which are quoted below.

The article starts off by introducing and defining the “Algorithmic Society” as one that “facilitates new forms of surveillance, control, discrimination and manipulation by both government and by private companies.” As before, society is driven by those seeking fame and fortune. However, much has changed. For instance, Balkin lists the four main sources of wealth the digital age brings about as “intellectual property, fame, information security and Big Data.” To achieve such wealth in this society, individuals are subjected to being governed by algorithms. At the same time, firms and governments achieve “practical omniscience”, while not only knowing what is happening but often accurately predicting what will happen next. These enhanced abilities, Balkin warns, lead to power asymmetries between groups of people (and not only between individuals and technologies) and generate several substantial challenges.

The article follows Balkin’s earlier scholarship which addressed the changing role of free speech doctrines and the First Amendment in the digital age, and the way they apply to the Internet titans. Indeed, Balkin explains that the central constitutional questions of this age will be those related to free speech and freedom of expression. The “Frightful Five” (and any future giants that might emerge) will cry for free speech protection to fend off intervention in their platforms and business models. Yet, at the same time, they will shrug off claims that they must comply with free speech norms themselves, while noting that they are merely private parties to whom these arguments do not pertain.

Continuing this line of scholarship, “Free Speech in the Algorithmic Society” introduces a rich discussion that spans several key topics, starting with the rise of “information fiduciaries”. Balkin defines these as digital entities that collect vast amounts of personal data about their users yet offer very limited insight into their internal operations. Naturally, this definition includes leading search engines and social media platforms. Balkin concludes that information fiduciaries should be subjected to some of the duties classic fiduciaries were subjected to. To summarize their central obligation, Balkin states that they must not “act like con artists – inducing trust in their end users to obtain personal information and then betraying end users…”. Clearly, articulating this powerful obligation in “legalese” will prove to be a challenge.

The article also introduces the notion of “algorithmic nuisance”. This concept is important when addressing entities that have not entered a contractual relationship with individuals, yet can potentially negatively impact them. Balkin explains that these entities rely on algorithmic processes to make judgments about individuals at important and even crucial junctures. Such reliance – when extensive – inflicts costs and side effects on those subjected to the judgment. This is especially true of individuals erroneously singled out as risky. Balkin explains such individuals may be subjected to discrimination and manipulation. Furthermore, some people will be pressured to “conform their lives to the requirements of the algorithm,” thus undermining their personal autonomy. To limit these problems, Balkin suggests that such “nuisance” be treated like other forms of nuisance in public and private law, while drawing an interesting comparison to pollution and environmental challenges. As with pollution, Balkin suggests that those causing algorithmic nuisance be forced to “internalize the costs they shift onto others”. Balkin moves on to apply the concepts of “information fiduciaries” and “algorithmic nuisance” to practical examples such as smart appliances and personal robots.

The article’s next central point pertains to “New School Speech Regulation.” By this, Balkin refers to the dominant measures for curtailing speech in the digital age. As opposed to previous forms of speech regulation which addressed the actual speaker, today’s measures focus on dominant digital intermediaries, which control the flow of information to and from users. Balkin explains that regulating such entities is now “attractive to nation states” and goes on to detail the various ways this could be done. It should be noted that the analysis is quite U.S.-specific. Outside the U.S., nations are often frustrated by their inability to regulate the powerful (often U.S.-based) online intermediaries, and therefore the analysis of this issue is substantially different.

Beyond the actions of the state, Balkin points out that these online intermediaries may, at their discretion, take down materials that they consider abusive or in violation of their policies. Balkin notes that users “resent” the fact that the criteria are at times hidden and the measures applied arbitrarily. Yet these steps are often welcomed by users. At times, they might even prove efficient (to borrow from the outcomes of some analyses examining the actions of the company towns of previous decades – see my discussion here). Furthermore, relying on broad language to take seemingly arbitrary action allows firms to punish “bad actors” whose conduct is clearly frowned upon by the crowd yet cannot easily be tied to an existing prohibition (as it could be if only a detailed list of forbidden actions were strictly relied upon) – an important power to retain in an ever-changing digital environment.

Balkin further explains that the noted forms of speech regulation are closely related, and together form three important forces shaping the individual’s ability to speak online: (1) state regulation of speech; (2) the intermediary’s governance attempts, and (3) the government’s attempts to regulate speech by influencing the intermediary. This important triangular taxonomy is probably the article’s most important contribution and must be considered when facing similar questions. Balkin later demonstrates how these forces unfold when examining the test cases of “The Right to Be Forgotten” and “Fake News.”

What can be done to limit the concerns here noted? Balkin does not believe these problems can solve themselves via market forces. He explains that individuals are limited to signaling their discontent with their “voice,” rather than by “exiting” (using the terminology introduced by Hirschman) – and the power of their voice is quite limited. It should be noted that some other forms of limited signaling might still unfold, such as reducing activity within a digital platform. Yet it is possible that such signaling will still prove insufficient. Rather than relying on markets or calling on regulators to resolve these matters, Balkin argues that change must come from within the companies themselves – by them understanding that they are now entities with obligations to promote free speech on a global level. One can only hope that this wish will be fulfilled. Reading this article and spreading its vision, with hope that it would make its way to the leaders of today’s technology giants, will certainly prove to be an important step forward.

Cite as: Tal Zarsky, Governing The New Governors and Their Speech, JOTWELL (February 13, 2018) (reviewing Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN),

Money For Your Life: Understanding Modern Privacy

Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017).

The commercial law of privacy has long occupied a relatively marginal place in modern legal scholarship, situated in gaps among doctrinal exposition, critical conceptual elaboration, and economically-motivated modeling. Much of the explanation for the omission is surely technological. Until Internet technologies came along in the mid-1990s, it was difficult to turn private information into a “thing” that was both technically and economically worth buying and selling.

Technology and markets have passed the point of no return on that score. Claude Shannon, credited as the author of the insight that all information can be converted into digits, has met Adam Smith. Yet relevant legal scholarship has not quite found its footing. Paying for Privacy and the Personal Data Economy, from Stacy-Ann Elvy, offers a novel way forward. Professor Elvy’s article offers a nifty, highly concrete, and eminently useful framework for thinking about the commercial law of things that consist of assets derived from consumers’ private information. It is not only the case that commercial law is one of the legally-relevant attributes of privacy and privacy practices. Privacy can be thought of as a mode of commercial law.

Paying for Privacy lays out its argument in a series of simple steps. It begins with a brief review of the emergence of the now-familiar Internet of Things, network-enabled everyday objects, industrial devices, and related technologies that increasingly permeate and collect data concerning numerous aspects of individuals’ daily lives. That review is pertinent not merely to common claims about the urgency of privacy regulation but also and more importantly to the premise that the supply of data-collecting technologies by industry (with accompanying privacy-implicating features) is likely to lead soon to increased demand by consumers for privacy-mediating, privacy-regulating, and privacy-protecting instruments.

The supply/demand metaphor is purposeful, if somewhat speculative, for it leads to a thorough and useful description and taxonomy of instruments currently on offer. Those include “traditional” privacy models involving personal data traded for “free” services (such as Facebook) and “freemium” services (such as LinkedIn) that offer both subscription-based and “free” versions of their services, harvesting money from subscribers (and advertisers and partners) and money and data from the free users. More recent PFP or “Pay For Privacy” models include newer firms offering multiple versions of “pay for privacy” services. Those include “privacy as a luxury,” in which providers offer added privacy controls for users in exchange for higher payments, and privacy discounts, by which users get cheaper versions of services if they agree to participate in data monitoring and collection. Switching perspectives from the service to the consumer yields a series of models collected as the PDE, or “Personal Data Economy.” Those include the “data insights model,” companies that enable individual consumers to monitor and aggregate private information about themselves, perhaps for their own use and perhaps to monetize by offering to third parties. In the related “data transfer model,” companies broker markets in which consumers voluntarily collect and contribute data about themselves, making it available for transfer (typically, purchase) by third parties.

The taxonomy is only a snapshot of current practices. This field seems to be so dynamic that inevitably many of the details in the article will be superseded, no doubt sooner rather than later. But the taxonomy helpfully reveals the two-sided character of privacy commerce. Rounding out that basic insight, one might add that there are privacy sellers and privacy buyers, privacy borrowers and privacy lenders, privacy principals and privacy agents, privacy capital and privacy debt, privacy currency and privacy assets. There are secondary markets and tertiary markets. As Professor Elvy notes, the list of privacy intermediaries includes privacy ratings firms – firms that play much the same role as the bond ratings firms that participated so enthusiastically (and eventually, so devastatingly) in the subprime mortgage market of the early 2000s.

Having laid out this framework, in the rest of the article Professor Elvy thoughtfully parses the weaknesses of the commercial law of privacy and develops a counterpart set of prescriptions and recommendations for further evaluation and possible implementation. All of this is admirably immediate and concrete.

Her critique is linked model by model to the taxonomy; the review below condenses it in the interest of space. First, not all consumers have equal or fair opportunities to collect and market their private data. To some significant degree, and for reasons that may be beyond their control or influence, those consumers either cannot participate in the wealth-creating dimensions of privacy or, because of social, economic, or cultural vulnerabilities (Professor Elvy highlights children and tenants), are effectively coerced into participating. Second, the article repeats, with helpful added doses of commercial law context, the widespread contract law critique that consumers are presented with vague, illusory, and incomplete “choices” in respect of collection, aggregation, and use of private data. Third and fourth (to combine two categories of critique offered in the article), current market and legal understandings of privacy as commercial law treat privacy primarily as what one might call an “Article 2” asset, that is, in terms of sales of things. Overlooked in this developing commercial market is privacy as what one might call an “Article 9” asset, that is, as a source of security and securitization. The potentially predatory and discriminatory implications of that second character should be obvious to anyone with a passing familiarity with the history of consumer lending, and Professor Elvy hammers on those.

Paying for Privacy concludes with a review of the fragmented legal landscape for addressing these problems and a complementary summary of recommendations for improving the prospects of consumers while preserving valuable aspects of both PFP and PDE models. Professor Elvy nods in the direction of COPPA (the Children’s Online Privacy Protection Act) and the possibility of industry-specific or sector-specific regulation. Most of her energy is directed to clarifying the jurisdiction of the Federal Trade Commission with respect to PDE models to deal with unfair trade practices regarding privacy that do not fit into traditional or accepted models of harm addressable by the FTC. All of this has the air of the technical, but its broader substantive import should not be overlooked. Paying for Privacy serves as a helpful entrée to a newer, broader – and difficult – vision of privacy’s future.

Cite as: Michael Madison, Money For Your Life: Understanding Modern Privacy, JOTWELL (January 8, 2018) (reviewing Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017)).

The Section Formerly Known As Cyber

We’ve moved! The Cyberlaw section of Jotwell is now the Technology Law section. Two trends in legal scholarship since Jotwell’s launch drove the decision. First, the “cyber-” prefix is no longer strongly associated with the broader field of Internet law. Instead, it tends to refer to specific subfields, like cybercrime and cybersecurity. Those are part of our beat, but hardly all of it. Second, scholars and reviewers have expanded their own interests outwards, using similar intellectual tools to study drones, robotics, and other technological topics. Our new name recognizes these shifts. We’re keeping the same URLs, so all the archives and new reviews will still be at the same addresses. And everything else about the section remains the same, including our hard-working contributors. We look forward to sharing with you many more things we like (lots).

James Grimmelmann
Margot Kaminski
Jotwell Technology Law Section co-editors
A. Michael Froomkin
Jotwell Editor-in-Chief

From Status Update to Social Media Contract

Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harvard L. Rev. (forthcoming 2017), available at SSRN.

Under current US First Amendment jurisprudence, the government can do very little to regulate speech online. It can penalize fraud and certain other kinds of false or potentially misleading speech; direct true threats; and infringement of intellectual property rights and related speech. But it cannot penalize most harassment, hate speech, falsity, and other speech that does immediate harm. Nor can the government generally bar particular speakers. Last Term, the Supreme Court struck down a provision of state law that tried to prevent convicted sex offenders from participating in “social media” where minors might also be participating.

There are good reasons for most of the limits the courts have imposed on the government’s speech-regulating powers—yet those limits have left a regulatory vacuum into which powerful private entities have stepped to regulate the speech of US social media users, suppressing a lot of speech that the government can’t, and protecting other speech despite their power to suppress it. The limits these intermediaries impose, with some important exceptions, look very similar whether the speech comes from the US or from a country that imposes heavier burdens on intermediaries to control the speech of their users. Klonick’s fascinating paper explores the evolution of speech regulation policies at major social media companies, particularly Twitter and Facebook, along with Alphabet’s (Google’s) YouTube.

Klonick finds “marked similarities to legal or governance systems with the creation of a detailed list of rules, trained human decision-making to apply those rules, and reliance on a system of external influence to update and amend those rules.” One lesson from her story may be the free speech version of ontogeny recapitulating phylogeny: regardless of what the underlying legal structure is, or whether an institution is essentially inventing a structure from scratch, speech regulations pose standard issues of definition (defamation and hate speech are endlessly flexible, not to mention intellectual property infringements), enforcement (who will catch the violators?), and equity/fairness (who will watch the watchmen?).

Klonick’s research also provides important insights on the relative roles of algorithms and human review in detecting and deterring unwanted content. While her article focuses on the guidelines followed by human decision-makers, those fit into a larger context of partially automated screening. Automated screening for child pornography seems to be a relative success story, as she explains. However, as many interested parties have pointed out in response to the Copyright Office’s inquiry on §512’s safe harbors and private content protection mechanisms, even with automated enforcement and “claiming” by putative copyright owners via Content ID, algorithms cannot avoid problems of judgment and equitable treatment, especially when some copyright owners have negotiated special rights to override the DMCA process and keep contested content down, regardless of its fair use status, once Content ID has identified it.

Klonick’s account can also usefully be read alongside Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest. Tufekci covers some aspects of speech policies that are particularly troubling, including the misuse of Facebook’s “real name” policy to suppress activists in countries where using a formal name could potentially be deadly; targeted, state-supported attacks on activists that involve reporting them for “abuse” and hate speech; and content moderation that can be politically ignorant, or worse: “in almost any country with deep internal conflict, the types of people who are most likely to be employed by Facebook are often from one side of the conflict—the side with more power and privileges.” Facebook’s team overseeing Turkish content, for example, is in Dublin, disadvantaging non-English speakers and women (whose families are less likely to be willing to relocate for their jobs). Similarly, Facebook’s response to the real-name problem is to allow use of another name when it’s in common use by the speaker, but that usually requires people to provide documents such as school IDs. As Tufekci points out, documents using an alternate identity are most likely to be available to people in relatively privileged positions in developed countries, thus muting the protests of less privileged people who lack such forms of ID, or leaving them exposed.

These details of implementation are far more than trivial. And Tufekci’s warning that governments quickly learn how to use, and misuse, platform mechanisms for their own benefit is a vital one. The extent to which an abuse team can be manipulated will, I expect, soon become a separate challenge for the content policy teams Klonick documents—if they decide to resist that manipulation, which is not guaranteed. Some of these techniques, moreover, resist handling by an abuse team even when identified. When government-backed teams overwhelm social media with trivialities in order to distract from a potentially important political event, as is apparently common in China, what policies and algorithms could identify the pattern, much less sort the wheat from the chaff?

Along with this comparison, Klonick’s piece offers the opportunity to revisit some relatively recent techno-optimists—West Coast code has started to look in places more like outsourced Filipino or Indian area codes, so what does that mean for internet governance? Consider Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age, a witty book whose examples of user-generated activism now seem dated, only seven years later, with the rise of “fake news” disseminated by foreign content farms, GamerGate, and revenge porn. It’s still true that, as Joi Ito wrote, “you should never underestimate the power of peer-to-peer social communication and the bonding force of popular culture. Although so much of what kids are doing online may look trivial and frivolous, what they are doing is building the capacity to connect, to communicate, and ultimately, to mobilize.” Because of this power, a legal system that discourages you from commenting on and remixing the first things you love, in communities who love the same thing you do, also discourages you from commenting on and remixing everything else. But what Klonick’s account makes clear is that discouragement can come from platforms as well as directly from governments, whether because of over-active filters such as Content ID that suppress remixes or because of more directly politicized interventions such as those Tufekci discusses.

Shirky’s book, like many of its era, was relatively silent about the role of government in enacting (or suppressing) the changes promoted by people taking advantage of new technological affordances. Consider one of Shirky’s prominent examples of the power of (women) organizing online: a Facebook group organized to fight back against anti-woman violence perpetrated in the Indian city of Mangalore by the religious fundamentalist group Sri Ram Sene. As Shirky tells it, “[p]articipation in the Pink Chaddi [underwear] campaign demonstrated publicly that a constituency of women were willing to counter Sene and wanted politicians and the police to do the same…. [T]he state of Mangalore arrested Muthali and several key members of Sene … as a way of preventing a repeat of the January attacks.” (Emphasis mine.) The story has a happy ending because actual government, not “governance” structures, intervened. How would the content teams at Facebook react if today’s Indian government decided that similar protests were incitements to violence?

The fact that internet intermediaries have governance aspirations without formal government power (or participatory democracy) also directs our attention to the influences on the use of that power. Klonick states that “platforms moderate content because of a foundation in First Amendment norms, corporate responsibility, and at the core, the economic necessity of creating an environment that reflects the expectations of its users. Thus, platforms are motivated to moderate by both the Good Samaritan purpose of § 230, as well as its concerns for free speech.” But note what drops out of that second sentence—explicit acknowledgement of the profit motive, which becomes both a driver of some speech protections and a reason, or an excuse, for some speech suppression. Pressure from advertisers, for example, led YouTube to crack down on “pro-terrorism” speech on the platform. Klonick also argues that “platforms are economically responsive to the expectations and norms of their users,” which leads them “to both take down content their users don’t want to see and keep up as much content as possible,” including by pushing back against government takedown requests. But this seems to me to equivocate about who the relevant “users” are—after all, if you’re not paying for a service, you’re the product it’s selling, and content that advertisers or large copyright owners don’t want to see may be far more vulnerable than content that individual participants don’t want to see.

One question Klonick’s story raised for me, then, was what a different system might look like. What if platforms were run the way public libraries are? Libraries are the real “sharing” economies, and in the US have resisted government surveillance and content filtering as a matter of mission. Similarly, the Archive of Our Own, with which I am involved, has user-centric rules that don’t need to prioritize the preservation of ad revenue. Although these rules are hotly debated within fandom, because what is welcoming to some users can be exclusionary to others, they are distinctively mission-oriented. (I should also concede that size, too, makes a difference—eventually, a large enough community that includes political content will attract government attention; Twitter hasn’t made a profit, but it has received numerous subpoenas and national security letters.)

Klonick suggests that the key to optimal speech regulation for platforms is some sort of participatory reform, perhaps involving both procedural and substantive protections for individual users. In other words, we need to reinvent the democratic state, embedding the user/citizen in a context that she has some realistic chance to affect, at least if she knows her rights and acts in concert with other users. The obvious problem is the one of transition: how will we get from here to there? Klonick understandably doesn’t take up that question in any detail. Absent the coercive power of real law, backed by guns and taxes, it’s hard for me to imagine the transition to participatory platform governance. Moreover, the same dynamics that brought us Citizens United make it hard to imagine that corporate interests—both platform and advertiser—would accede to any such mandates, likely raising First Amendment objections of their own.

Klonick’s article helps to identify how individual speech online is embedded in structures that guide and constrain speakers; its descriptive account will be very useful to understanding these structures. I worry, however, that understanding won’t be enough to save us. We want to think well of our governors; we don’t want to be living in 1984, or Brave New World. But the development of intermediary speech policies tells us, among other things, that we might end up looking from man to pig, and pig to man, and finding it hard to tell the difference.

Disclosure: Kate Klonick is a former student of mine, though this paper comes from her work years later.

Cite as: Rebecca Tushnet, From Status Update to Social Media Contract, JOTWELL (November 29, 2017) (reviewing Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harvard L. Rev. (forthcoming 2017), available at SSRN).