The Journal of Things We Like (Lots)

An Argument for the Coherence of Privacy Law

William McGeveran, Privacy and Data Protection Law (2016).

William McGeveran’s new casebook on Privacy and Data Protection Law announces the death of the “death march” that anyone who has ever taught or taken a course in Information Privacy Law has encountered. The death march is the slog in the second half of the semester through a series of similar-but-not-identical federal sectoral statutory regimes, each given just one day of instruction, such as the Privacy Act, FCRA, HIPAA, Gramm-Leach-Bliley, and FERPA. Professors asked to cover so much substantive law beyond their area of scholarly focus (nobody can focus on all of these) usually resort to choosing only two or three. Even then, the coverage tends to be cursory and unsatisfying.

The death march points to a larger problem: information privacy law doesn’t really exist. At best, privacy law is an assemblage of barely related bits and pieces. The typical privacy course covers constitutional law, a little European Union data protection, a tiny bit of tort, some state law, and the death march of federal statutes. The styles of legal practice covered run the gamut from criminal prosecution and defense, to civil litigation, regulatory practice, corporate governance, and beyond. To justify placing so much in one course, we try futilely to bind together these bits and pieces through broad themes such as harm, social norms, expectations of privacy, and technological change.

My long-held doubt about the coherence of privacy law has led me to teach the course a bit apologetically, feeling like a fraud for pretending to find connections where there are almost none. I’m pleased to report that my belief isn’t universally held: McGeveran’s compelling new casebook is built on the idea that privacy law can be rationalized into a coherent area of practice and pedagogy, one it presents in an organized and tightly woven structure.

I don’t think I’m alone in the belief that privacy law lacks coherence. Daniel Solove, in his magisterial summary of privacy law, Understanding Privacy, argues that rather than give privacy a single, unified definition, the best we can do is identify a Wittgensteinian set of family resemblances of related concerns. Solove’s very good casebook on Information Privacy Law, co-authored with Paul Schwartz, reflects this pragmatic resignation. Their book starts with a long chapter quoting many scholars who cast privacy in different lights and philosophical orientations. Solove and Schwartz don’t do much to try to reconcile these inconsistent voices, suggesting that we ought not try to find any unified theory or consistent coherence in this casebook or this field. Having given up on coherence in chapter one, the rest of the book reads like a series of barely related silos. It’s no wonder that the authors also offer their book sliced into four smaller volumes, which to my mind work better standing on their own.

The other leading, also excellent, casebook, Privacy Law and Society, by Anita Allen and Marc Rotenberg, follows a similar organization, but without the introductory philosophical debate. It too presents privacy law as silos of substance and practice, dividing the field into five broad, but largely disconnected areas: tort, constitutional law, federal statutes, communications privacy, and international law.

McGeveran takes a very different approach. He divides his casebook into three parts, the first two advancing the coherence thesis, both representing refreshingly creative syntheses of privacy law. In Part One, McGeveran provides “Foundations,” devoting a relatively short chapter each to constitutional law, tort law, consumer protection law, and data protection. McGeveran wisely resists the urge to tell any of these four stories in full depth at this point, delaying parts of each for later in the book. This survey method gives the student a better appreciation for the most important tools in the privacy lawyer’s toolkit; encourages more explicit comparisons among the four categories; and allows for learning through repetition and reinforcement when the topics are revisited later.

The other major innovation is McGeveran’s decision to single out consumer protection law as a distinct area of practice. This builds on work from Solove and Woodrow Hartzog, who have argued that we should treat the jurisprudence of the FTC as a form of common law, and from Danielle Citron, who has pointed to state attorneys general as unheralded great protectors of privacy. McGeveran’s book embraces both arguments, elevating the work of the FTC and state AGs to their due places as primary pillars of U.S. privacy law. This modernizes teaching of the subject, by reflecting what privacy practice has become in the 21st century, with many privacy lawyers advising clients about the FTC far more frequently than they think about tort or constitutional law.

Part Two is even more innovative. It consists of four chapters that follow stages in the “Life Cycle of Data”: “collection”, “processing and use”, “storage and security”, and “disclosures and transfers.” Solove’s influence is again felt here, as these stages echo the major parts of the privacy taxonomy he introduced in Understanding Privacy. Each chapter of Part Two introduces new substantive law, organized around the type of data flow it governs. This prepares students for the issue spotting they will encounter in practice, centering on the data rather than on the artificial boundaries between areas of law. The techie in me appreciates the way this focuses student attention on the broad theme of the impact of technology on privacy.

Because these two parts are so innovative and successful, they serve as the spoonfuls of sugar that help the death march of Part Three go down (although admittedly even this part was still a bit of a slog when I taught from the book this past fall). Students are primed by this point to place statutes like FERPA or HIPAA into the legal framework of Part One and the data lifecycle of Part Two, making them reinforcing examples of the coherent whole rather than disconnected silos. This also reduces the costs (and the guilt) for instructors of cutting sections of the death march. They understand that, thanks to the foundational structures of Parts One and Two, their students will be better equipped to encounter, say, educational privacy for the first time on the job.

Finally, as a work of scholarship, not merely pedagogy, McGeveran’s argument for the coherence of privacy law might be an important marker in the evolution of our still relatively young field. Roscoe Pound said that Warren and Brandeis did “nothing less than add a chapter to our law,” a quote well-loved by privacy law scholars. William Prosser has been credited with taking the next step, turning Warren and Brandeis’s concerns into concrete legal doctrine, in the form of the four privacy torts.

This book is positively Prosserian in its aspirations. McGeveran attempts to organize, rationalize, and lend coherence to a messy, incoherent set of fields that we’ve adopted the habit of placing under one label, even if they do not deserve it. I’m not entirely convinced that he has succeeded, that there is something singular and coherent called privacy law, but this book is the best argument for the proposition I have seen. And as a teacher, it is refreshing to leaven my skepticism with this well-designed, compelling new classroom tool.

Cite as: Paul Ohm, An Argument for the Coherence of Privacy Law, JOTWELL (May 22, 2018) (reviewing William McGeveran, Privacy and Data Protection Law (2016)), https://cyber.jotwell.com/an-argument-for-the-coherence-of-privacy-law/.

Black Box Stigmatic Harms (and how to Stop Them)

Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016).

There is a remarkable body of work on the US government’s burgeoning array of high-tech surveillance programs. As Dana Priest and Bill Arkin revealed in their Top Secret America series, there are hundreds of entities that enjoy access to troves of data on US citizens. Ever since the Snowden revelations, this extraordinary power to collate data points about individuals has caused unease among scholars, civil libertarians, and virtually any citizen with a sense of how badly wrong supposedly data-driven decision-making can go.

In Big Data Blacklisting, Margaret Hu comprehensively demonstrates just how well-founded that suspicion is. She shows the high stakes of governmental classifications: No Work, No Vote, No Fly, and No Citizenship lists are among her examples. Persons blackballed by such lists often have no real recourse—they end up trapped in useless intra-agency appeals under the exhaustion doctrine, or stonewalled from discovering the true foundations of the classification by state secrecy and trade secrecy laws. The result is a Kafkaesque affront to basic principles of transparency and due process.

I teach administrative law, and I plan to bring excerpts of Hu’s article into our due process classes on stigmatic harm (to update lessons from cases like Wisconsin v. Constantineau and Paul v. Davis). What is so evident from Hu’s painstaking work (including her diligent excavation of the origins, methods, and purposes of a mind-boggling alphabet soup of classification programs) is the quaint, even antique, nature of the Supreme Court’s decisionmaking on stigmatic harm. A durable majority on the Court has held that erroneous, government-generated stigma, by itself, is not the type of injury that violates the Fifth or Fourteenth Amendment. Only a concrete harm immediately tied to a reputational injury (stigma-plus) raises due process concerns. As Eric Mitnick has observed, “under the stigma-plus standard, the state is free to stigmatize its citizens as potential terrorists, gang members, sex offenders, child abusers, and prostitution patrons, to list just a few, all without triggering due process analysis.” Mitnick catalogs a litany of commentators who characterize this standard as “astonishing,” “puzzling,” “perplexing,” “cavalier,” “wholly startling,” “disturbing,” “odious,” “distressingly fast and loose,” “disingenuous,” “ill-conceived,” an “affront[] [to] common sense,” “muddled and misleading,” “peculiar,” “baroque,” “incoherent,” and my personal favorite, “Iago-like.” Hu shows how high the stakes have become thanks to the Court’s blockage of sensible reform of our procedural due process jurisprudence.

Presented with numerous opportunities to do so, the Court simply refuses to deeply consider the cumulative impact of a labyrinth of government classifications. We need legal change here, Hu persuasively argues, because there are so many problems with the analytical capacities of government agencies (and their contractors), as well as with the underlying data on which they rely. The cascading, knock-on effects of mistaken classification can be enormous. In area after area, from domestic law enforcement to anti-terrorism to voting roll review, Hu collects studies from experts that indicate not merely one-off misclassifications, but a deeper problem of recurrent error and bias. The database bureaucracy she critiques could become an unchallengeable monolith of corporate and government power arbitrarily arrayed against innocents, preventing them from challenging their stigmatization both judicially and politically. When the state can simply use software and half-baked algorithms to knock legitimate voters off the rolls, without notice or due process, the very foundations of its legitimacy are shaken. Similarly, a lack of programmatic transparency and evaluative protocols in many settings makes it difficult to see how the traditional touchstones of the legitimacy of the administrative state could possibly be operative in some of the databases Hu describes.

Many scholars in the field of algorithmic accountability have focused on procedural due process, aimed at giving classified citizens an opportunity to monitor and correct the data stored about them, and the processes used to analyze that data. Hu is generous in her recognition of the scope and detail of that past work. But with the benefit of her comprehensive, trans-substantive critique of big data blacklisting programs, she comes to the conclusion that extant proposals for reform of such programs may not do nearly enough to restore citizens’ footing, vis-à-vis government, to the level of equality and dignity that ought to prevail in our democracy. Rather, Hu argues that, taken as a whole, the current panoply of big data blacklisting programs offends substantive due process: basic principles that impose duties on government not to treat persons like things.

This is a bold intellectual move that reframes the debate over the surveillance state in an unexpected and clarifying way. Isn’t there something deeply objectionable about the gradual abdication of so many governmental, humanly-judged functions to private sector, algorithmically-processed databases and software—especially when technical complexity is all too often a cloak for careless or reckless action? For someone unfamiliar with the reach, fallibility, and stakes of big data blacklisting, it might seem jarring to contemplate that a pervasive, largely computerized method of classifying citizens might be as objectionable as, say, a law forbidding the teaching of foreign languages, or one denying prisoners the right to marry (both laws found to violate substantive due process). However, Hu has done vital work to develop a comprehensive case against big data blacklisting that makes several of its instantiations seem at least as offensive to constitutional values as those restrictions.

Moreover, when blacklisting itself is so resistant to traditional procedural due process protections (for example, in cases of black box processing), substantive due process claims may be the only way to relieve citizens of burdens it imposes. Democratic processes cannot be expected to protect the discrete, insular minorities targeted unfairly by big data blacklisting. Even worse, these “invisible minorities” may never even be able to figure out exactly what troubling classifications they have been tarred with, impairing their ability to even make a political case for themselves.

Visionary when it was written, Big Data Blacklisting becomes more relevant with each data breach and government overreach in the news. It is agenda-setting work that articulates the problem of government data processing in a new and compelling way. I have rarely read work that so meticulously credits pathbreaking work in the field, while still developing a unique perspective on a cutting edge legal issue. I hope that legal advocacy groups will apply Hu’s ideas in lawsuits against arbitrary government action cloaked in the deceptive raiments of algorithmic precision and data-driven empiricism.

Cite as: Frank Pasquale, Black Box Stigmatic Harms (and how to Stop Them), JOTWELL (April 17, 2018) (reviewing Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016)), https://cyber.jotwell.com/black-box-stigmatic-harms-and-how-to-stop-them/.

New Kids on the Blockchain

Bitcoin was created in 2009 by a member of a cryptography mailing list who goes under the pseudonym of Satoshi Nakamoto, and whose identity is still a mystery. The project was designed to become a decentralised, open source, cryptographic method of payment that uses a tamper-proof, open ledger to store all transactions, also known as the blockchain. In a field that is replete with hype and shady operators, David Gerard’s book Attack of the 50 Foot Blockchain has become one of the most prominent and needed sceptical voices studying the phenomenon. Do not let the amusing title deter you; this is a serious book filled with solid and thorough research that goes through all of the most important aspects of cryptocurrencies, and it is one of the most cited take-downs of the technology.

The book covers a wide range of topics on cryptocurrencies and blockchain, and does so in self-contained chapters that can be read almost independently. The book does not follow a strict chronological order, yet this structure makes it all the more readable and a delight from cover to cover, not only because of the interesting subject matter, but also because of Gerard’s wit and knowledge.

The work follows three main themes: explaining Bitcoin and unearthing its various problems; documenting the prevalence of fraudulent practices and unsavoury characters in cryptocurrencies; and explaining blockchains and smart contracts, together with the various criticisms levelled at them.

In the introductory section Gerard does an excellent job of explaining the technology without the usual techno-jargon that surrounds the subject, and goes through the main reasons that proponents advocate the use of Bitcoin. Cryptocurrencies are often offered as a decentralised solution to the excesses of financial institutions and governments. “Be your own bank” is cited as one of the advantages of Bitcoin, but Gerard accurately describes the various problems that this presents. Being your own bank means requiring security fit for a bank, which most people do not have. Moreover, some of the characteristics present in Bitcoin make it particularly unsuitable as a means of payment. Bitcoin is based on scarcity; only 21 million coins will ever be mined, so there is a strong incentive to hoard coins rather than spend them. Similarly, cryptocurrency transactions are irreversible; if you lose coins in a hack, or make a transaction mistake, the coins are gone forever.

In the chapters dealing with fraud, Gerard does an excellent job of going through the dark side of cryptocurrencies. Cryptocurrencies rely on intermediaries, either exchanges that will accept your “fiat” currency and exchange it into digital currency, or “wallets”, where people can store their coins. The problem is that this unregulated space attracted fraudsters and amateurs in equal measure, and during its short history the space has been filled with Ponzi schemes, con-men, and manipulators. Gerard also describes the use of Bitcoin in the Dark Web, where it is the currency of choice of various illegal businesses.

But it is in his criticism of the blockchain technology where the book really shines. Even vocal Bitcoin critics used to think that even if cryptocurrencies failed, the underlying blockchain technology would remain and become an important contribution to the way in which online transactions are made. Gerard was one of the first to criticise the blockchain itself.

The blockchain is an immutable and decentralised record of all transactions that requires no trust in an intermediary. This is supposed to prove useful in any situation where a trustless system is required. But as Gerard points out, there are few situations in which that is actually the case, and in most of the instances presented by blockchain advocates a blockchain is unnecessary. The book describes two main issues with using blockchain in a business environment. Firstly, decentralisation is always expensive; there is a reason why many companies have been moving towards centralisation of network services through the hiring of cloud providers. Decentralisation means that you have to make sure that everyone is using the same protocols and compatible systems, and you must also account for redundancy, since you are relying on services that are not always available. The result is slower and more cumbersome networks that spend more energy to produce a similar result. Secondly, if data management is a problem in your business, then adding a blockchain won’t make the problem go away. Gerard accordingly sets out a number of questions that should be asked whenever anyone is thinking of adding a blockchain to an existing business model, including whether the technology can scale, and whether a centralised system would work just as well.
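
For readers who want to see the mechanics rather than take them on faith, here is a minimal sketch (mine, not Gerard’s; the block contents and function names are invented for illustration) of why a hash-chained ledger is tamper-evident: each block commits to the hash of its predecessor, so altering any past entry breaks every subsequent link.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    # Each block commits to its predecessor's hash.
    return {"prev_hash": prev_hash, "transactions": transactions}

# Build a three-block toy chain (the transactions are invented).
genesis = make_block("0" * 64, ["alice pays bob 5"])
block2 = make_block(block_hash(genesis), ["bob pays carol 2"])
block3 = make_block(block_hash(block2), ["carol pays dan 1"])
chain = [genesis, block2, block3]

def chain_is_valid(chain):
    # Recompute each link: every block must point at its predecessor's true hash.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

print(chain_is_valid(chain))   # True

# Tampering with history invalidates every later link.
genesis["transactions"] = ["alice pays bob 5000"]
print(chain_is_valid(chain))   # False
```

Real blockchains layer distributed consensus and proof-of-work on top of this basic structure, which is precisely where Gerard locates much of the expense and energy use he criticises.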

Finally, the book analyses smart contracts: agreements conducted digitally through a combination of cryptocurrencies and tokens recorded on a blockchain. The idea is that the parties code the terms and conditions into an immutable token, written in computer code, that defines the parameters of the contract (conditions, payment, operational parameters). Those who want to transact then write another token meeting those parameters, at which point the payment is made and the electronic contract concluded. This contract is immutable and irrevocable.

Gerard accurately points out that this combination of immutability and irrevocability is toxic in a legal environment, as any error in the code can lead to serious legal consequences. Traditional contracts rely on human intent, and if a mistake is made or a conflict arises, the parties can go to court. But in a smart contract, the code is the last word, and there is no recourse in case of an error or a conflict other than trying to re-write the blockchain, which is not possible unless a majority of participants in the scheme agree to change the code.
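
To see what “the code is the last word” means in practice, consider a deliberately toy sketch (hypothetical Python, not a real smart-contract language such as Solidity; all names and terms are invented) in which a contract’s terms are frozen at creation and its outcome follows mechanically from the code, bugs included.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen mimics on-chain immutability: fields cannot be changed
class SmartContract:
    seller: str
    buyer: str
    price: int
    deadline: int  # block height by which payment must arrive

    def execute(self, payment: int, current_block: int) -> str:
        # The code is the last word: whatever this returns is final.
        if payment >= self.price and current_block <= self.deadline:
            return f"release {payment} to {self.seller}"
        # A drafting error above (say, '<' written where '<=' was meant)
        # would misdirect funds with no court to appeal to.
        return f"refund {payment} to {self.buyer}"

contract = SmartContract(seller="alice", buyer="bob", price=100, deadline=5000)
print(contract.execute(payment=100, current_block=4999))  # release 100 to alice

# Attempting to amend the deployed contract fails, as it would on-chain:
try:
    contract.price = 50
except Exception as e:
    print("immutable:", e)
```

The frozen dataclass only mimics on-chain immutability, of course; the point is that once the payout condition is wrong, no amendment mechanism exists short of the majority rewrite Gerard describes.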

This book is a must-read for anyone interested in an easy-to-read and enjoyable criticism of cryptocurrencies and the blockchain. It is a testament to the strength of the ideas presented that we are only now starting to see a much-needed check on the blockchain hype from various quarters. Even if cryptocurrencies manage to get past this early stage unscathed, it will be books like this one that help to shift the focus away from the narrative of bubbles and easy gains.

Cite as: Andres Guadamuz, New Kids on the Blockchain, JOTWELL (April 3, 2018) (reviewing David Gerard, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts (2017)), https://cyber.jotwell.com/new-kids-on-the-blockchain/.

Governing The New Governors and Their Speech

Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN.

Jack Balkin is one of the leading thinkers and visionaries in the fields of information and cyber law. Every one of his scholarly contributions must be closely read. His recent article, Free Speech in the Algorithmic Society is no exception. It is highly recommended to those interested in fully understanding the current and future tensions between emerging technologies and human rights. The article also provides numerous gems – well-structured statements that eloquently articulate the central challenges of the day, some of which are quoted below.

The article starts off by introducing and defining the “Algorithmic Society” as one that “facilitates new forms of surveillance, control, discrimination and manipulation by both government and by private companies.” As before, society is driven by those seeking fame and fortune. However, much has changed. For instance, Balkin lists the four main sources of wealth the digital age brings about as “intellectual property, fame, information security and Big Data.” To achieve such wealth in this society, individuals are subjected to being governed by algorithms. At the same time, firms and governments achieve “practical omniscience”, not only knowing what is happening but often accurately predicting what will happen next. These enhanced abilities, Balkin warns, lead to power asymmetries between groups of people (and not only between individuals and technologies) and generate several substantial challenges.

The article follows Balkin’s earlier scholarship which addressed the changing role of free speech doctrines and the First Amendment in the digital age, and the way they apply to the Internet titans. Indeed, Balkin explains that the central constitutional questions of this age will be those related to free speech and freedom of expression. The “Frightful Five” (and any future giants that might emerge) will cry for free speech protection to fend off intervention in their platforms and business models. Yet, at the same time, they will shrug off claims that they must comply with free speech norms themselves, while noting that they are merely private parties to whom these arguments do not pertain.

Continuing this line of scholarship, “Free Speech in the Algorithmic Society” introduces a rich discussion, which spans several key topics, starting with the rise of “information fiduciaries”. These, Balkin argues, should include digital entities that collect vast amounts of personal data about their users yet offer very limited insight into their internal operations. Naturally, this definition includes leading search engines and social media platforms. Balkin concludes that information fiduciaries should be subject to some of the duties imposed on classic fiduciaries. To summarize their central obligation, Balkin states that they must not “act like con artists – inducing trust in their end users to obtain personal information and then betraying end users…”. Clearly, articulating this powerful obligation in “legalese” will prove to be a challenge.

The article also introduces the notion of “algorithmic nuisance”. This concept is important when addressing entities that have not entered into a contractual relationship with individuals, yet can potentially impact them negatively. Balkin explains that these entities rely on algorithmic processes to make judgments about individuals at important and even crucial junctures. Such reliance – when extensive – inflicts costs and side effects on those subjected to the judgment. This is especially true of individuals erroneously singled out as risky. Balkin explains that such individuals may be subjected to discrimination and manipulation. Furthermore, some people will be pressured to “conform their lives to the requirements of the algorithm,” thus undermining their personal autonomy. To limit these problems, Balkin suggests that such “nuisance” be treated like other forms of nuisance in public and private law, drawing an interesting comparison to pollution and environmental challenges. As with pollution, Balkin suggests that those causing algorithmic nuisance be forced to “internalize the costs they shift onto others”. Balkin moves on to apply the concepts of “information fiduciaries” and “algorithmic nuisance” to practical examples such as smart appliances and personal robots.

The article’s next central point pertains to “New School Speech Regulation.” By this, Balkin refers to the dominant measures for curtailing speech in the digital age. As opposed to previous forms of speech regulation which addressed the actual speaker, today’s measures focus on dominant digital intermediaries, which control the flow of information to and from users. Balkin explains that regulating such entities is now “attractive to nation states” and goes on to detail the various ways this could be done. It should be noted that the analysis is quite U.S.-specific. Outside the U.S., nations are often frustrated by their inability to regulate the powerful (often U.S.-based) online intermediaries, and therefore the analysis of this issue is substantially different.

Beyond the actions of the state, Balkin points out that these online intermediaries may, at their discretion, take down materials that they consider abusive or in violation of their policies. Balkin notes that users “resent” the fact that the criteria are at times hidden and the measures applied arbitrarily. Yet these steps are often welcomed by users. At times, these steps might even prove efficient (to borrow from the outcomes of some analyses examining the actions of the company towns of previous decades – see my discussion here). Furthermore, relying on broad language rather than a strictly enumerated list of forbidden actions allows firms to punish “bad actors” whose conduct is clearly frowned upon by the crowd yet cannot easily be tied to an existing prohibition – an important power to retain in an ever-changing digital environment.

Balkin further explains that the noted forms of speech regulation are closely related, and together form three important forces shaping the individual’s ability to speak online: (1) state regulation of speech; (2) the intermediary’s governance attempts; and (3) the government’s attempts to regulate speech by influencing the intermediary. This triangular taxonomy is probably the article’s most important contribution and must be considered when facing similar questions. Balkin later demonstrates how these forces unfold when examining the test cases of “The Right to Be Forgotten” and “Fake News.”

What can be done to limit the concerns noted here? Balkin does not believe these problems can solve themselves via market forces. He explains that individuals are limited to signaling their discontent with their “voice,” rather than by “exiting” (using the terminology introduced by Hirschman) – and the power of their voice is quite limited. It should be noted that some other forms of limited signaling might still unfold, such as reducing activity within a digital platform. Yet it is possible that such signaling will still prove insufficient. Rather than relying on markets or calling on regulators to resolve these matters, Balkin argues that change must come from within the companies themselves – from their understanding that they are now entities with obligations to promote free speech on a global level. One can only hope that this wish will be fulfilled. Reading this article and spreading its vision, in the hope that it makes its way to the leaders of today’s technology giants, will certainly prove to be an important step forward.

Cite as: Tal Zarsky, Governing The New Governors and Their Speech, JOTWELL (February 13, 2018) (reviewing Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN), https://cyber.jotwell.com/governing-new-governors-speech/.

Money For Your Life: Understanding Modern Privacy

Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017).

The commercial law of privacy has long occupied a relatively marginal place in modern legal scholarship, situated in gaps among doctrinal exposition, critical conceptual elaboration, and economically-motivated modeling. Much of the explanation for the omission is surely technological. Until Internet technologies came along in the mid-1990s, it was difficult to turn private information into a “thing” that was both technically and economically worth buying and selling.

Technology and markets have passed the point of no return on that score. Claude Shannon, credited as the author of the insight that all information can be converted into digits, has met Adam Smith. Yet relevant legal scholarship has not quite found its footing. Paying for Privacy and the Personal Data Economy, from Stacy-Ann Elvy, offers a novel way forward. Professor Elvy’s article offers a nifty, highly concrete, and eminently useful framework for thinking about the commercial law of things that consist of assets derived from consumers’ private information. It is not only the case that commercial law is one of the legally-relevant attributes of privacy and privacy practices. Privacy can be thought of as a mode of commercial law.

Paying for Privacy lays out its argument in a series of simple steps. It begins with a brief review of the emergence of the now-familiar Internet of Things, network-enabled everyday objects, industrial devices, and related technologies that increasingly permeate and collect data concerning numerous aspects of individuals’ daily lives. That review is pertinent not merely to common claims about the urgency of privacy regulation but also and more importantly to the premise that the supply of data-collecting technologies by industry (with accompanying privacy-implicating features) is likely to lead soon to increased demand by consumers for privacy-mediating, privacy-regulating, and privacy-protecting instruments.

The supply/demand metaphor is purposeful, if somewhat speculative, for it leads to a thorough and useful description and taxonomy of instruments currently on offer. Those include “traditional” privacy models involving personal data traded for “free” services (such as Facebook) and “freemium” services (such as LinkedIn) that offer both subscription-based and “free” versions of their services, harvesting money from subscribers (and advertisers and partners) and money and data from the free users. More recent “pay for privacy” (PFP) models come from newer firms offering multiple versions of such services. Those include “privacy as a luxury,” in which providers offer added privacy controls for users in exchange for higher payments, and privacy discounts, by which users get cheaper versions of services if they agree to participate in data monitoring and collection. Switching perspectives from the service to the consumer yields a series of models collected as the PDE, or “Personal Data Economy.” Those include the “data insights model”: companies that enable individual consumers to monitor and aggregate private information about themselves, perhaps for their own use and perhaps to monetize by offering it to third parties. In the related “data transfer model,” companies broker markets in which consumers voluntarily collect and contribute data about themselves, making it available for transfer (typically, purchase) by third parties.

The taxonomy is only a snapshot of current practices. This field seems to be so dynamic that inevitably many of the details in the article will be superseded, no doubt sooner rather than later. But the taxonomy helpfully reveals the two-sided character of privacy commerce. Rounding out that basic insight, one might add that there are privacy sellers and privacy buyers, privacy borrowers and privacy lenders, privacy principals and privacy agents, privacy capital and privacy debt, privacy currency and privacy assets. There are secondary markets and tertiary markets. As Professor Elvy notes, the list of privacy intermediaries includes privacy ratings firms – firms that play much the same role as the bond ratings firms that participated so enthusiastically (and eventually, so devastatingly) in the subprime mortgage market of the early 2000s.

Having laid out this framework, in the rest of the article Professor Elvy thoughtfully parses the weaknesses of the commercial law of privacy and develops a counterpart set of prescriptions and recommendations for further evaluation and possible implementation. All of this is admirably immediate and concrete.

Her critique is linked model by model to the taxonomy; the review below condenses it in the interest of space. First, not all consumers have equal or fair opportunities to collect and market their private data. To some significant degree, and for reasons that may be beyond their control or influence, those consumers either cannot participate in the wealth-creating dimensions of privacy or, because of social, economic, or cultural vulnerabilities (Professor Elvy highlights children and tenants), are effectively coerced into participating. Second, the article repeats, with helpful added doses of commercial law context, the widespread contract law critique that consumers are presented with vague, illusory, and incomplete “choices” in respect of collection, aggregation, and use of private data. Third and fourth (to combine two categories of critique offered in the article), current market and legal understandings of privacy as commercial law treat privacy primarily as what one might call an “Article 2” asset, that is, in terms of sales of things. Overlooked in this developing commercial market is privacy as what one might call an “Article 9” asset, that is, as a source of security and securitization. The potentially predatory and discriminatory implications of that second character should be obvious to anyone with a passing familiarity with the history of consumer lending, and Professor Elvy hammers on those.

Paying for Privacy concludes with a review of the fragmented legal landscape for addressing these problems and a complementary summary of recommendations for improving the prospects of consumers while preserving valuable aspects of both PFP and PDE models. Professor Elvy nods in the direction of COPPA (the Children’s Online Privacy Protection Act) and the possibility of industry-specific or sector-specific regulation. Most of her energy is directed to clarifying the jurisdiction of the Federal Trade Commission with respect to PDE models, so that it can address unfair trade practices regarding privacy that do not fit into traditional or accepted models of harm addressable by the FTC. All of this has the air of the technical, but its broader substantive import should not be overlooked. Paying for Privacy serves as a helpful entrée to a newer, broader – and difficult – vision of privacy’s future.

Cite as: Michael Madison, Money For Your Life: Understanding Modern Privacy, JOTWELL (January 8, 2018) (reviewing Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017)), https://cyber.jotwell.com/money-life-understanding-modern-privacy/.

The Section Formerly Known As Cyber

We’ve moved! The Cyberlaw section of Jotwell is now the Technology Law section. Two trends in legal scholarship since Jotwell’s launch drove the decision. First, the “cyber-” prefix is no longer strongly associated with the broader field of Internet law. Instead, it tends to refer to specific subfields, like cybercrime and cybersecurity. Those are part of our beat, but hardly all of it. Second, scholars and reviewers have expanded their own interests outwards, using similar intellectual tools to study drones, robotics, and other technological topics. Our new name recognizes these shifts. We’re keeping the same URLs, so all the archives and new reviews will still be at cyber.jotwell.com. And everything else about the section remains the same, including our hard-working contributors. We look forward to sharing with you many more things we like (lots).

James Grimmelmann
Margot Kaminski
Jotwell Technology Law Section co-editors
A. Michael Froomkin
Jotwell Editor-in-Chief

From Status Update to Social Media Contract

Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harvard L. Rev. (forthcoming 2017), available at SSRN.

Under current US First Amendment jurisprudence, the government can do very little to regulate speech online. It can penalize fraud and certain other kinds of false or potentially misleading speech; direct true threats; and infringement of intellectual property rights and related speech. But it cannot penalize most harassment, hate speech, falsity, and other speech that does immediate harm. Nor can the government generally bar particular speakers. Last Term, the Supreme Court struck down a provision of state law that tried to prevent convicted sex offenders from participating in “social media” where minors might also be participating.

There are good reasons for most of the limits the courts have imposed on the government’s speech-regulating powers—yet those limits have left a regulatory vacuum into which powerful private entities have stepped to regulate the speech of US social media users, suppressing a lot of speech that the government can’t, and protecting other speech despite their power to suppress it. The limits these intermediaries impose, with some important exceptions, look very similar whether the speech comes from the US or from a country that imposes heavier burdens on intermediaries to control the speech of their users. Klonick’s fascinating paper explores the evolution of speech regulation policies at major social media companies, particularly Twitter and Facebook, along with Alphabet’s (Google’s) YouTube.

Klonick finds “marked similarities to legal or governance systems with the creation of a detailed list of rules, trained human decision-making to apply those rules, and reliance on a system of external influence to update and amend those rules.” One lesson from her story may be the free speech version of ontogeny recapitulating phylogeny: regardless of what the underlying legal structure is, or whether an institution is essentially inventing a structure from scratch, speech regulations pose standard issues of definition (defamation and hate speech are endlessly flexible, not to mention intellectual property infringements), enforcement (who will catch the violators?), and equity/fairness (who will watch the watchmen?).

Klonick’s research also provides important insights on the relative roles of algorithms and human review in detecting and deterring unwanted content. While her article focuses on the guidelines followed by human decision-makers, those fit into a larger context of partially automated screening. Automated screening for child pornography seems to be a relative success story, as she explains. However, as many interested parties have pointed out in response to the Copyright Office’s inquiry on §512’s safe harbors and private content protection mechanisms, even with automated enforcement and “claiming” by putative copyright owners via Content ID, algorithms cannot avoid problems of judgment and equitable treatment, especially when some copyright owners have negotiated special rights to override the DMCA process, and keep contested content down regardless of its fair use status, once it’s been identified by Content ID.

Klonick’s account can also usefully be read alongside Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest. Tufekci covers some aspects of speech policies that are particularly troubling, including the misuse of Facebook’s “real name” policy to suppress activists in countries where using a formal name could potentially be deadly; targeted, state-supported attacks on activists that involve reporting them for “abuse” and hate speech; and content moderation that can be politically ignorant, or worse: “in almost any country with deep internal conflict, the types of people who are most likely to be employed by Facebook are often from one side of the conflict—the side with more power and privileges.” Facebook’s team overseeing Turkish content, for example, is in Dublin, disadvantaging non-English speakers and women (whose families are less likely to be willing to relocate for their jobs). Similarly, Facebook’s response to the real-name problem is to allow use of another name when it’s in common use by the speaker, but that usually requires people to provide documents such as school IDs. As Tufekci points out, documents using an alternate identity are most likely to be available to people in relatively privileged positions in developed countries, thus muting their protest but leaving similar people without such forms of ID exposed.

These details of implementation are far more than trivial. And Tufekci’s warning that governments quickly learn how to use, and misuse, platform mechanisms for their own benefit is a vital one. The extent to which an abuse team can be manipulated will, I expect, soon become a separate challenge for the content policy teams Klonick documents—if they decide to resist that manipulation, which is not guaranteed. Some of these techniques, moreover, resist handling by an abuse team even when identified. When government-backed teams overwhelm social media with trivialities in order to distract from a potentially important political event, as is apparently common in China, what policies and algorithms could identify the pattern, much less sort the wheat from the chaff?

Along with this comparison, Klonick’s piece offers the opportunity to revisit some relatively recent techno-optimists—West Coast code has started to look in places more like outsourced Filipino or Indian area codes, so what does that mean for internet governance? Consider Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age, a witty book whose examples of user-generated activism now seem dated, only seven years later, with the rise of “fake news” disseminated by foreign content farms, GamerGate, and revenge porn. It’s still true that, as Joi Ito wrote, “you should never underestimate the power of peer-to-peer social communication and the bonding force of popular culture. Although so much of what kids are doing online may look trivial and frivolous, what they are doing is building the capacity to connect, to communicate, and ultimately, to mobilize.” Because of this power, a legal system that discourages you from commenting on and remixing the first things you love, in communities who love the same thing you do, also discourages you from commenting on and remixing everything else. But what Klonick’s account makes clear is that discouragement can come from platforms as well as directly from governments, whether because of over-active filters such as Content ID that suppress remixes or because of more directly politicized interventions such as those Tufekci discusses.

Shirky’s book, like many of its era, was relatively silent about the role of government in enacting (or suppressing) the changes promoted by people taking advantage of new technological affordances. Consider one of Shirky’s prominent examples of the power of (women) organizing online: a Facebook group organized to fight back against anti-woman violence perpetrated in the Indian city of Mangalore by the religious fundamentalist group Sri Ram Sene. As Shirky tells it, “[p]articipation in the Pink Chaddi [underwear] campaign demonstrated publicly that a constituency of women were willing to counter Sene and wanted politicians and the police to do the same…. [T]he state of Mangalore arrested Muthali and several key members of Sene … as a way of preventing a repeat of the January attacks.” (Emphasis mine.) The story has a happy ending because actual government, not “governance” structures, intervened. How would the content teams at Facebook react if today’s Indian government decided that similar protests were incitements to violence?

The fact that internet intermediaries have governance aspirations without formal government power (or participatory democracy) also directs our attention to the influences on the use of that power. Klonick states that “platforms moderate content because of a foundation in First Amendment norms, corporate responsibility, and at the core, the economic necessity of creating an environment that reflects the expectations of its users. Thus, platforms are motivated to moderate by both the Good Samaritan purpose of § 230, as well as its concerns for free speech.” But note what drops out of that second sentence—explicit acknowledgement of the profit motive, which becomes both a driver of some speech protections and a reason, or an excuse, for some speech suppression. Pressure from advertisers, for example, led YouTube to crack down on “pro-terrorism” speech on the platform. Klonick also argues that “platforms are economically responsive to the expectations and norms of their users,” which leads them “to both take down content their users don’t want to see and keep up as much content as possible,” including by pushing back against government takedown requests. But this seems to me to equivocate about who the relevant “users” are—after all, if you’re not paying for a service, you’re the product it’s selling, and content that advertisers or large copyright owners don’t want to see may be far more vulnerable than content that individual participants don’t want to see.

One question Klonick’s story raised for me, then, was what a different system might look like. What if platforms were run the way public libraries are? Libraries are the real “sharing” economies, and in the US have resisted government surveillance and content filtering as a matter of mission. Similarly, the Archive of Our Own, with which I am involved, has user-centric rules that don’t need to prioritize the preservation of ad revenue. Although these rules are hotly debated within fandom, because what is welcoming to some users can be exclusionary to others, they are distinctively mission-oriented. (I should also concede that size, too, makes a difference—eventually, a large enough community that includes political content will attract government attention; Twitter hasn’t made a profit, but it has received numerous subpoenas and national security letters.)

Klonick suggests that the key to optimal speech regulation for platforms is some sort of participatory reform, perhaps involving both procedural and substantive protections for individual users. In other words, we need to reinvent the democratic state, embedding the user/citizen in a context that she has some realistic chance to affect, at least if she knows her rights and acts in concert with other users. The obvious problem is the one of transition: how will we get from here to there? Klonick understandably doesn’t take up that question in any detail. Absent the coercive power of real law, backed by guns and taxes, it’s hard for me to imagine the transition to participatory platform governance. Moreover, the same dynamics that brought us Citizens United make it hard to imagine that corporate interests—both platform and advertiser—would accede to any such mandates, likely raising First Amendment objections of their own.

Klonick’s article helps to identify how individual speech online is embedded in structures that guide and constrain speakers; its descriptive account will be very useful to understanding these structures. I worry, however, that understanding won’t be enough to save us. We want to think well of our governors; we don’t want to be living in 1984, or Brave New World. But the development of intermediary speech policies tells us, among other things, that we might end up looking from man to pig, and pig to man, and finding it hard to tell the difference.


Disclosure: Kate Klonick is a former student of mine, though this paper comes from her work years later.

Cite as: Rebecca Tushnet, From Status Update to Social Media Contract, JOTWELL (November 29, 2017) (reviewing Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harvard L. Rev. (forthcoming 2017), available at SSRN), https://cyber.jotwell.com/from-status-update-to-social-media-contract/.

Rules for Digital Radicals

In 1971, activist and community organizer Saul Alinsky summarized lessons from a lifetime of organizing in his book, Rules for Radicals: A Pragmatic Primer for Realistic Radicals. Published in what would be the twilight of his life, Rules for Radicals was in many ways a tactical field guide for those seeking to instigate widespread social change. It still influences social movements on both the left and right. And yet, today’s wired world is much different—and more dynamic—than Alinsky’s pre-internet society, which relied largely on centralized forms of mass communication.

Now, both activists and governments operate under a new set of diffuse structures and communication mediums. Twitter, Facebook, and the like alter the terms of engagement for public protest and participatory democracy. And Zeynep Tufekci’s new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, helps us understand precisely how networked communication can amplify social movements, while also providing important notes of caution. In this way, while written as an accessible scholarly account rather than an operation manual, Tufekci’s book provides rules—or at least guideposts—for digital radicals.

Through detailed analysis of contemporary movements such as Occupy, Black Lives Matter, and the Gezi Park protests, coupled with comparisons to historical movements, such as the Civil Rights movement of the 1950s and 1960s, Tufekci develops a framework for understanding how modern movements can exploit—and be exploited by—digital communication technologies.

What she highlights is that though social media permits movements to galvanize supporters quickly, helping them organize massive public protests in short order, something is lost in terms of internal, deliberative structure that a movement may need in order to survive down the stretch. Tufekci labels the collective bonds and capabilities developed through the constant maintenance of organizational communities “network internalities.” Internal organizational contestation has long-term value.

Tufekci analogizes the work of developing network internalities to the importance of building muscles for long-term durability. For example, she compares the March on Washington, which took months to plan and helped create enduring movement capacity through both formal and informal institutions, with the 2013 Gezi Park protests in Turkey. The Gezi Park protests were spawned almost overnight and helped generate a strong protest culture but, unfortunately, have not (yet) translated into a sustained political movement.

In other words, while the ability to organize rapidly is no doubt a real asset afforded by digital communication tools, it comes with attendant limitations—organizational structures only start to be developed after the movement’s first big moment, and often too late. Today’s movements may lack the organizational structure for making collective decisions, limiting their ability to make tactical shifts as the protests unfold.

Perhaps even more significantly, quickly organized protests may fail to signal any long-lasting organizational capacity or threat to those in power. For Tufekci, social movements are only as powerful as the capacities that they signal. She identifies three principal, but non-exclusive, capacities that are critical to movements’ success—narrative capacity (the ability to get the public’s attention and tell the movement’s story), disruptive capacity (the ability to interrupt the government’s business as usual), and electoral capacity (the ability to credibly endanger politicians’ electoral prospects).

As to each one, if a movement is able to organize massive numbers of people into a one-day protest (for example, the humongous Women’s March that followed Donald Trump’s inauguration), but that protest does not credibly signal a threat to politicians’ electoral prospects, its impact is greatly diluted, and the government is free to ignore, rather than engage and potentially overreact to, the protest. Underscoring Tufekci’s point that participatory tactics are only as impactful as the capabilities they signal, Ben Wikler, a leader at MoveOn, recently implored people activated by Republican efforts to unwind the Affordable Care Act NOT to call congresspeople who didn’t represent them. Otherwise, the strength of the signal provided by calls could be weakened and interpreted as not reflecting electoral capacity.

In the midst of developing her helpful capacity-signals taxonomy for analyzing movements’ strengths, Tufekci foregrounds that although social media holds great promise in that it enables movements to circumvent traditional forms of media and gain direct attention for their respective causes, new forms of censorship are also being deployed. That is, governments and those in power are not sitting idly by—they too have in many instances embraced social media and used it to discredit mediums used by activists through the spread of fake news and conspiracy theories. Those in power are actively engaged in diminishing the attention movements receive.

But here, though an academic book rather than a practical field guide, Tufekci’s thorough analysis nevertheless might have benefited a bit from the inclusion of—or gesture toward—some tactical solutions, akin to the approach utilized by Alinsky. Tufekci’s lament of misinformation’s role in hampering social movements might have been accompanied by reference to particular suggestions activists could employ to lend their social media posts credibility. For instance, the Witness organization, which trains activists on how to use video to protect human rights, instructs activists to set the date and time on their cameras and to capture contextualizing details from the scene, both of which help verify the authenticity of the images.

But aside from a handful of missed opportunities to make the lessons from her analysis more concrete (which may have been outside the scope of an academic project), Tufekci’s book is a critical contribution for those seeking to understand how to best leverage social media for social change. While lauding movement activists’ integrity and commitment to participatory forms of engagement that involve many, Tufekci also gently nudges today’s activists to consider whether digital technologies can be utilized more efficiently and with longer-lasting effect. The book lives up to its title—highlighting networked activism’s power and, equally if not more importantly, uncovering its weaknesses so that they may be overcome.

Cite as: Scott Skinner-Thompson, Rules for Digital Radicals, JOTWELL (November 14, 2017) (reviewing Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (2017)), https://cyber.jotwell.com/rules-for-digital-radicals/.

The Answer to the Machine is in the Rule of Law?

Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, U. Toronto L. J. (2017), available at SSRN.

Mireille Hildebrandt’s forthcoming article is a companion piece to her Chorley Lecture of 2015.1 In the earlier piece, she highlights the relationship between the ‘deep structure of modern law’ and the printing press and written text, using this to build a case concerning constitutional democracy and transparency, both in the world of print and the world of electronic data. In this new paper, the emphasis is on law as computation – as compared with law as information in the earlier lecture.

Machine learning is often discussed as an opportunity for legal practice and adjudication, but what would that mean in practice? Hildebrandt highlights that machine learning in the legal context is primarily a simulation of the human reasoning found in written legal text; one needs to ask how law is associated with ‘meaningful information’ rather than information simpliciter. Key concerns with applying machine learning to law include the catch-22 of deskilled lawyers becoming unable to verify a machine’s output, and the various ways in which such systems can be opaque.

Hildebrandt hopes that we can ‘speak law to the power of statistics’ and argues that machine learning and related practices and technologies ‘may contribute to better informed legal reasoning – if done well’. There is an interesting and healthy scepticism about the funding of current efforts and about the consequences of what may be reported as innovation. Much of this relates, of course, to the driving factors around innovation in the legal profession and the changing ‘law firm’. The work therefore also sits within the body of literature now interrogating algorithmic governance (e.g. Cathy O’Neil’s Weapons of Math Destruction, Frank Pasquale’s The Black Box Society, and, more recently, the question of whether data protection law might provide a remedy for such concerns in Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For).

Provocatively, Hildebrandt wonders whether the result of a certain type of interdisciplinary engagement is that law is simply treated as one kind of regulation among many, e.g. in the mind of the law-and-economics scholar; this is contrasted with a (perhaps deliberately idealised) lawyer as the ‘dignified steward of individual justice and societal order’. Her response, which may resonate with many legal scholars, is to draw upon Neil MacCormick’s presentation of law as an ‘argumentative’ discipline (MacCormick, like Hildebrandt, engaged with speech act theory as a means of understanding legal reasoning). The challenge, then, is to identify ways to test and contest emerging forms of decision making, and to ensure that the relevant people are equipped with the skills and the nous to ask searching questions and to scrutinise the systems we are rapidly putting in place.

This draft paper will appear in a much-anticipated issue of the University of Toronto Law Journal. The Canadian journal has already contributed to the debate around the legal singularity (of interest even if you think the legal singularity is about as likely as The Singularity itself) in a special issue on artificial intelligence, big data and the law; the forthcoming issue, based around a March 2017 symposium, includes further contributions on democratic oversight and the future of legal education. Indeed, the question of how future lawyers will be trained is something Hildebrandt ruminates upon in her article, and it struck a chord with this reviewer (currently working in a legal system where the training of solicitors is about to undergo significant change). If the next generation of lawyers and legal researchers is to be able to take on the socially important challenges outlined by Hildebrandt (especially in countering the arms race between those with the requisite resources and motivations), we may need to think a bit harder about the shape of the law school.

  1. Published as Mireille Hildebrandt, Law as Information in the Era of Data-Driven Agency, 79 The Modern L. Rev. 1 (2016).
Cite as: Daithí Mac Síthigh, The Answer to the Machine is in the Rule of Law?, JOTWELL (October 2, 2017) (reviewing Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, U. Toronto L. J. (2017), available at SSRN), https://cyber.jotwell.com/the-answer-to-the-machine-is-in-the-rule-of-law/.

Democracy Unchained

K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming, 2017), available at SSRN.

In the mid-2000s, digital activists spearheaded the net neutrality movement to ensure fair treatment of the customers of Internet Service Providers (ISPs), as well as to protect the companies trying to reach them. Net neutrality rules limit or ban preferential treatment; for example, they might prevent an ISP like Comcast from offering exclusive access to Facebook and its partner sites on a “Free Basics” plan. Such rules have a sad and tortuous history in the US: rebuffed under Bush, long delayed and finally adopted by Obama’s FCC, and now in mortal peril thanks to Donald Trump’s elevation of Ajit Pai to be chairman of the Commission. But net neutrality as a popular principle has had more success, animating mass protests and even comedy shows. It has also given long-suffering cable customers a way of politicizing their personal struggles with haughty monopolies.

But net neutrality activists missed two key opportunities. They often failed to explain how far the neutrality principle should extend, as digital behemoths like Google, Facebook, Apple, Microsoft, and Amazon wielded extraordinary power over key nodes of the net. Some commentators derided calls for “search neutrality” or “app store neutrality”; others saw such measures as logical next steps for a digital New Deal. Moreover, activists did not adequately address key economic arguments. Neoliberal commentators insisted that the US would only see rapid advances in the speed and quality of service if ISPs could recoup investment by better monetizing traffic. Progressives argued that “something is better than nothing”; a program like “Free Basics” probably benefits the disadvantaged more than no access at all.

In his Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, K. Sabeel Rahman offers a theoretical framework to address these concerns. He offers a “definition of infrastructural goods and services” and a “toolkit of public utility-inspired regulatory strategies” that together provide a way to “diagnose and respond to new forms of private power in a changing economy,” including powerful internet platforms. He also gives a clear sense of why the public interest in regulating large internet firms should trump investors’ arguments for untrammeled rights to profits, and he demands “public options” for those unable to afford access to privately controlled infrastructure.

Law’s treatment of infrastructure has been primarily economic in orientation. For example, Brett Frischmann’s magnum opus, Infrastructure: The Social Value of Shared Resources, offered a sophisticated theory of the spillover benefits of transportation, communication, environmental, and other forms of infrastructure, building on economists’ analyses of topics like externalities and congestion costs. Rahman complements this work by highlighting the political and moral dimensions of infrastructure. The early 20th-century Progressive movement did not seek to regulate utilities simply because a large firm might be inefficient. Progressives also worried directly about the power exercised by such firms: their ability to influence politicians, take an outsized share of GDP, and sandbag both rival firms and political opponents. As Rahman explains, “Industries triggered public utility regulation when there was a combination of economies of scale limiting ordinary accountability through market competition, and a moral or social importance that made the industries too vital to be left to the whims of the market or the control of a handful of private actors.”

Identifying the list of “foundational goods and services” meriting direct utility regulation is inevitably a mix of politics, science, and law. Determining, for example, whether broadband internet should be treated in a manner similar to telephone service depends on scientific analysis (e.g., might it soon become easier to provide internet over electric lines to complement existing cable?), political mandates (e.g., voters electing Republicans at this point may be assumed not to prioritize broadband regulation, as party lines on the issue are relatively clear), and legal judgments (e.g., is broadband so similar to wireline service that it would defeat the purpose of the relevant statutes to treat the two far differently?). This delicate balance of the “three cultures” of science, democracy, and law means that the scope of utilities regulation will always be somewhat in flux. While the federal government is, today, chipping away at the category, future administrations may revive and expand it. If so, they will benefit from Rahman’s rigorous definition of infrastructure as “those goods and services which (i) have scale effects in their production or provision suggesting the need for some degree of market or firm concentration; (ii) unlock and enable a wide variety of downstream economic and social activities for those with access to the good or service; and (iii) place users in a position of potential subordination, exploitation, or vulnerability if their access to these goods or services is curtailed.”

Not just the scope but also the content of public utility regulation has evolved over time. As Rahman relates, three broad categories of regulation can provide a “21st century framework for public utility regulation”:

1) [F]irewalling core necessities away from behaviors and practices that might contaminate the basic provision of these goods and services—including through structural limits on the corporate organization and form of firms that provide infrastructural goods;

2) [I]mposing public obligations on infrastructural firms, whether negative obligations to prevent discrimination or unfair disparities in prices, or positive obligations to pro-actively provide equal, affordable, and accessible services to under-served constituencies; and

3) [C]reating public options, state-chartered, cheaper, basic versions of these services that would offer an alternative to exploitative private control in markets otherwise immune to competitive pressures.

These three approaches (“firewalls,” “public obligations,” and “public options”) have all helped increase the accountability of private powers in the past (as the work of Robert Lee Hale, praised as an inspiration in Rahman’s article, has shown). Cable firms cannot charge you a higher rate because they dislike your politics. Nor can they squeeze businesses they want to purchase, charging higher and higher rates to an acquisition target until it relents. Nor should regulators look kindly on holding companies that would more ruthlessly financialize essential services (or on the horizontal shareholding that functions similarly to such holding companies).

Many legal scholars working in fields like communications law, banking law, and cyberlaw identify the limits of dominant regulatory approaches, but they are researching in isolation. Rahman’s article provides a unifying framework for them to learn from one another, and it should catalyze important interdisciplinary work. For example, it is well past time for those writing about search engines to explore how the principles of net neutrality could translate into robust principles of search neutrality. The European Commission has documented Google’s abuse of its dominant position in shopping services. Subsequent remedial actions should provide many opportunities for the imposition of public obligations (such as commitments to display at least some non-Google-owned properties prominently in contested search engine results pages) and firewalling (which might involve stricter merger review when a megafirm makes yet another acquisition).

Rahman also shows a critical complementarity between competition law and public utility regulation. Antitrust concepts can help policymakers assess when a field has become concentrated enough to merit regulatory attention. Both judgments and settlements arising out of particular cases could inform the work of, say, a future “Federal Search Commission,” which could complement the Federal Communications Commission. The same problem of “bigness” that can allow a megafirm to abuse its platform by squeezing rivals also creates opportunities to abuse users. Just as the Consumer Financial Protection Bureau serves a vital function in guarding consumers against overreaching financial firms, a dedicated platform regulator could guard users against the abuses such concentration invites.

Many large internet platforms are now leveraging their data advantage into profits, and profits into further domination of advertising markets. The dynamic is self-reinforcing: more data means better, more targeted services, which attract a larger customer base, which in turn offers even more opportunities to collect data. Once a critical mass of users is locked in, the dominant platform can chisel away at both consumer and producer surplus. For example, under pressure from investors to decrease its operating losses, Uber has increased its cut of drivers’ earnings and has price discriminated against certain riders based on algorithmic assessments of their ability and willingness to pay. The same model is now undermining Google’s utility (as ads crowd out other information) and Facebook’s privacy policies (which grow more egregiously one-sided as the social network’s domination expands).

Rahman offers us a rigorous way of recognizing such platform power, providing a tour de force distillation of cutting-edge social science and critical algorithm studies. Industries ranging from internet advertising to health care could benefit from a public utility-centered approach. This is work that could prompt fundamental reassessments of contemporary regulatory approaches. It is exactly the type of research that state, federal, and international authorities should consult as they try to rein in the power of the many massive firms in our increasingly concentrated, winner-take-all economy.

Cite as: Frank Pasquale, Democracy Unchained, JOTWELL (August 17, 2017) (reviewing K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming, 2017), available at SSRN), https://cyber.jotwell.com/democracy-unchained/.