Bitcoin was created in 2009 by a member of a cryptography mailing list writing under the pseudonym Satoshi Nakamoto, whose identity remains a mystery. The project was designed to be a decentralized, open source, cryptographic method of payment that stores all transactions in a tamper-resistant, open ledger known as the blockchain. In a field replete with hype and shady operators, David Gerard’s book Attack of the 50 Foot Blockchain has become one of the most prominent and needed sceptical voices studying the phenomenon. Do not let the amusing title deter you; this is a serious book filled with thorough research that covers all of the most important aspects of cryptocurrencies, and it is one of the most cited take-downs of the technology.
The book covers a wide range of topics on cryptocurrencies and blockchain in self-contained chapters that can be read almost independently, without following a strict chronological order. This structure makes the book eminently readable and a delight from cover to cover, not only because of the interesting subject matter, but also because of Gerard’s wit and knowledge.
The work follows three main themes: explaining Bitcoin and unearthing its various problems; documenting the prevalence of fraudulent practices and unsavoury characters in cryptocurrencies; and explaining blockchains and smart contracts, together with their various criticisms.
In the introductory section Gerard does an excellent job of explaining the technology without the usual techno-jargon that surrounds the subject, and goes through the main reasons that proponents advocate the use of Bitcoin. Cryptocurrencies are often offered as a decentralised solution to the excesses of financial institutions and governments. “Be your own bank” is cited as one of the advantages of Bitcoin, but Gerard accurately describes the various problems that this presents. Being your own bank means requiring security fit for a bank, which most people do not have. Moreover, some of the characteristics of Bitcoin make it particularly unsuitable as a means of payment. Bitcoin is based on scarcity; only 21 million coins will ever be mined, so there is a strong incentive to hoard coins and hold. Similarly, cryptocurrency transactions are irreversible; if you lose coins in a hack, or make a transaction mistake, the coins are gone forever.
In the chapters dealing with fraud, Gerard does an excellent job of going through the dark side of cryptocurrencies. Cryptocurrencies rely on intermediaries, either exchanges that will accept your “fiat” currency and exchange it into digital currency, or “wallets”, where people can store their coins. The problem is that this unregulated space attracted fraudsters and amateurs in equal measure, and during its short history the space has been filled with Ponzi schemes, con-men, and manipulators. Gerard also describes the use of Bitcoin in the Dark Web, where it is the currency of choice of various illegal businesses.
But it is in his criticism of blockchain technology that the book really shines. Even vocal Bitcoin critics used to think that even if cryptocurrencies failed, the underlying blockchain technology would remain and become an important contribution to the way online transactions are made. Gerard became one of the first critics of the blockchain itself.
The blockchain is an immutable and decentralised record of all transactions that requires no trust in an intermediary. This is supposed to prove useful in any situation where a trustless system is required. But as Gerard points out, such situations are rare, and most use cases presented by blockchain advocates do not actually need one. The book describes two main issues with using a blockchain in a business environment. First, decentralisation is always expensive; there is a reason why many companies have been centralising network services by hiring cloud providers. Decentralisation means ensuring that everyone uses the same protocols and compatible systems, and it requires building in redundancy, because you must rely on services that are not always available; the result is slower and more cumbersome networks that spend more energy to produce a similar result. Second, if data management is a problem in your business, adding a blockchain will not make the problem go away. On the contrary, Gerard sets out a number of questions that must be asked whenever anyone is thinking of adding a blockchain to an existing business model, including whether the technology can scale and whether a centralised system would work just as well.
Finally, the book analyses smart contracts, which are contracts conducted digitally through a combination of cryptocurrencies and tokens recorded on a blockchain. The idea is that the parties to a contract code terms and conditions into an immutable token written in computer code which defines the parameters of the contract (conditions, payment, operational parameters), and those who want to transact with each other will write another token that will meet those parameters, at which point the payment is made and the electronic contract concluded. This contract is immutable and irrevocable.
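The mechanics described above can be sketched in ordinary code. The following is a minimal, hypothetical illustration (the class and method names are invented for this review, and no real smart-contract platform works exactly this way) of the core idea: a contract’s parameters are frozen at creation, a counterparty’s offer either matches them or nothing happens, and once payment is made the contract is concluded with no way to undo it.

```python
class SmartContract:
    """Toy model of a smart contract: terms are fixed at creation,
    and settlement is automatic and irreversible."""

    def __init__(self, seller, price, item):
        # The "token": contract parameters recorded once, like code on a blockchain.
        self._terms = {"seller": seller, "price": price, "item": item}
        self.settled = False

    def accept(self, buyer, payment):
        # A counterparty's offer must match the recorded parameters exactly.
        if self.settled:
            raise RuntimeError("contract already concluded; no recourse")
        if payment != self._terms["price"]:
            return False  # offer does not meet the terms; nothing happens
        # Payment made, contract concluded: immutable and irrevocable.
        self.settled = True
        self.buyer = buyer
        return True


contract = SmartContract(seller="alice", price=10, item="widget")
assert contract.accept("bob", payment=5) is False   # terms not met, no effect
assert contract.accept("bob", payment=10) is True   # settles automatically
```

The point of the sketch is the last method guard: once `settled` is true, any further attempt raises an error rather than renegotiating, which is precisely the property Gerard goes on to criticise.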
Gerard accurately points out that this combination of immutability and irrevocability is toxic in a legal environment, as any error in the code can lead to serious legal consequences. Traditional contracts rely on human intent, and if a mistake is made or a conflict arises, the parties can go to court. But in a smart contract the code is the last word, and there is no recourse in case of an error or a conflict other than trying to re-write the blockchain, which is not possible unless a majority of participants in the scheme agree to change the code.
This book is a must-read for anyone interested in an easy-to-read and enjoyable criticism of cryptocurrencies and the blockchain. It is a testament to the strength of the ideas presented that a much-needed check on the blockchain hype is only now starting to arrive from various quarters. Even if cryptocurrencies manage to get past this early stage unscathed, it will be books like this one that help to shift the focus away from the narrative of bubbles and easy gains.
Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN.
Jack Balkin is one of the leading thinkers and visionaries in the fields of information and cyber law. Every one of his scholarly contributions must be closely read. His recent article, Free Speech in the Algorithmic Society is no exception. It is highly recommended to those interested in fully understanding the current and future tensions between emerging technologies and human rights. The article also provides numerous gems – well-structured statements that eloquently articulate the central challenges of the day, some of which are quoted below.
The article starts off by introducing and defining the “Algorithmic Society” as one that “facilitates new forms of surveillance, control, discrimination and manipulation by both government and by private companies.” As before, society is driven by those seeking fame and fortune. However, much has changed. For instance, Balkin lists the four main sources of wealth the digital age brings about as “intellectual property, fame, information security and Big Data.” To achieve such wealth in this society, individuals are subjected to being governed by algorithms. At the same time, firms and governments achieve “practical omniscience”, while not only knowing what is happening but often accurately predicting what will happen next. These enhanced abilities, Balkin warns, lead to power asymmetries between groups of people (and not only between individuals and technologies) and generate several substantial challenges.
The article follows Balkin’s earlier scholarship which addressed the changing role of free speech doctrines and the First Amendment in the digital age, and the way they apply to the Internet titans. Indeed, Balkin explains that the central constitutional questions of this age will be those related to free speech and freedom of expression. The “Frightful Five” (and any future giants that might emerge) will cry for free speech protection to fend off intervention in their platforms and business models. Yet, at the same time, they will shrug off claims that they must comply with free speech norms themselves, while noting that they are merely private parties to whom these arguments do not pertain.
Continuing this line of scholarship, “Free Speech in the Algorithmic Society” introduces a rich discussion spanning several key topics, starting with the rise of “information fiduciaries”. Balkin defines these as digital entities that collect vast amounts of personal data about their users yet offer very limited insight into their own internal operations. Naturally, this definition includes the leading search engines and social media platforms. Balkin concludes that information fiduciaries should be subjected to some of the duties of classic fiduciaries. To summarize their central obligation, Balkin states that they must not “act like con artists – inducing trust in their end users to obtain personal information and then betraying end users…”. Clearly, articulating this powerful obligation in “legalese” will prove to be a challenge.
The article also introduces the notion of “algorithmic nuisance”. This concept is important when addressing entities that have not entered a contractual relationship with individuals, yet can potentially negatively impact them. Balkin explains that these entities rely on algorithmic processes to make judgments about individuals at important and even crucial junctures. Such reliance – when extensive – inflicts costs and side effects on those subjected to the judgment. This is especially true of individuals singled out as risky, due to error. Balkin explains such individuals may be subjected to discrimination and manipulation. Furthermore, some people will be pressured to “conform their lives to the requirements of the algorithm,” thus undermining their personal autonomy. To limit these problems, Balkin suggests that such “nuisance” be treated as other forms of nuisances in public and private law, while drawing an interesting comparison to pollution and environmental challenges. As with pollution, Balkin suggests that those causing algorithmic nuisance be forced to “internalize the costs they shift onto others”. Balkin moves on to apply the concepts of “information fiduciaries” and “algorithmic nuisance” to practical examples such as smart appliances and personal robots.
The article’s next central point pertains to “New School Speech Regulation.” By this, Balkin refers to the dominant measures for curtailing speech in the digital age. As opposed to previous forms of speech regulation which addressed the actual speaker, today’s measures focus on dominant digital intermediaries, which control the flow of information to and from users. Balkin explains that regulating such entities is now “attractive to nation states” and goes on to detail the various ways this could be done. It should be noted that the analysis is quite U.S.-specific. Outside the U.S., nations are often frustrated by their inability to regulate the powerful (often U.S.-based) online intermediaries, and therefore the analysis of this issue is substantially different.
Beyond the actions of the state, Balkin points out that these online intermediaries may, at their discretion, take down materials which they consider abusive or in violation of their policies. Balkin notes that users “resent” the fact that the criteria are at times hidden and the measures applied arbitrarily. Yet these steps are often welcomed by users. At times, these steps might even prove efficient (to borrow from the outcomes of some analyses examining the actions of the company towns of previous decades – see my discussion here). Furthermore, relying on broad language to take assumedly arbitrary actions allows firms to punish “bad actors” whose actions are clearly frowned upon by the crowd yet cannot easily be tied to an existing prohibition (if merely a detailed list of forbidden actions were strictly relied upon) – an important right to retain in an ever-changing digital environment.
Balkin further explains that the noted forms of speech regulation are closely related, and together form three important forces shaping the individual’s ability to speak online: (1) state regulation of speech; (2) the intermediary’s governance attempts, and (3) the government’s attempts to regulate speech by influencing the intermediary. This important triangular taxonomy is probably the article’s most important contribution and must be considered when facing similar questions. Balkin later demonstrates how these forces unfold when examining the test cases of “The Right to Be Forgotten” and “Fake News.”
What can be done to limit the concerns noted here? Balkin does not believe these problems will solve themselves via market forces. He explains that individuals are limited to signaling their discontent with their “voice,” rather than by “exiting” (using the terminology introduced by Hirschman) – and the power of their voice is quite limited. It should be noted that some other forms of limited signaling might still unfold, such as reducing activity within a digital platform; yet it is possible that such signaling will still prove insufficient. Rather than relying on markets or calling on regulators to resolve these matters, Balkin argues that change must come from within the companies themselves – from their understanding that they are now entities with obligations to promote free speech on a global level. One can only hope that this wish will be fulfilled. Reading this article and spreading its vision, in the hope that it makes its way to the leaders of today’s technology giants, will certainly prove to be an important step forward.
Cite as: Tal Zarsky, Governing The New Governors and Their Speech (February 13, 2018) (reviewing Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN), https://cyber.jotwell.com/governing-new-governors-speech/.
The commercial law of privacy has long occupied a relatively marginal place in modern legal scholarship, situated in gaps among doctrinal exposition, critical conceptual elaboration, and economically-motivated modeling. Much of the explanation for the omission is surely technological. Until Internet technologies came along in the mid-1990s, it was difficult to turn private information into a “thing” that was both technically and economically worth buying and selling.
Technology and markets have passed the point of no return on that score. Claude Shannon, credited as the author of the insight that all information can be converted into digits, has met Adam Smith. Yet relevant legal scholarship has not quite found its footing. Paying for Privacy and the Personal Data Economy, from Stacy-Ann Elvy, offers a novel way forward. Professor Elvy’s article offers a nifty, highly concrete, and eminently useful framework for thinking about the commercial law of things that consist of assets derived from consumers’ private information. It is not only the case that commercial law is one of the legally-relevant attributes of privacy and privacy practices. Privacy can be thought of as a mode of commercial law.
Paying for Privacy lays out its argument in a series of simple steps. It begins with a brief review of the emergence of the now-familiar Internet of Things, network-enabled everyday objects, industrial devices, and related technologies that increasingly permeate and collect data concerning numerous aspects of individuals’ daily lives. That review is pertinent not merely to common claims about the urgency of privacy regulation but also and more importantly to the premise that the supply of data-collecting technologies by industry (with accompanying privacy-implicating features) is likely to lead soon to increased demand by consumers for privacy-mediating, privacy-regulating, and privacy-protecting instruments.
The supply/demand metaphor is purposeful, if somewhat speculative, for it leads to a thorough and useful description and taxonomy of instruments currently on offer. Those include “traditional” privacy models involving personal data traded for “free” services (such as Facebook) and “freemium” services (such as LinkedIn) that offer both subscription-based and “free” versions of their services, harvesting money from subscribers (and advertisers and partners) and money and data from the free users. More recent PFP or “Pay For Privacy” models include newer firms offering multiple versions of “pay for privacy” services. Those include “privacy as a luxury,” in which providers offer added privacy controls for users in exchange for higher payments, and privacy discounts, by which users get cheaper versions of services if they agree to participate in data monitoring and collection. Switching perspectives from the service to the consumer yields a series of models collected as the PDE, or “Personal Data Economy.” Those include the “data insights model,” companies that enable individual consumers to monitor and aggregate private information about themselves, perhaps for their own use and perhaps to monetize by offering to third parties. In the related “data transfer model,” companies broker markets in which consumers voluntarily collect and contribute data about themselves, making it available for transfer (typically, purchase) by third parties.
The taxonomy is only a snapshot of current practices. This field seems to be so dynamic that inevitably many of the details in the article will be superseded, no doubt sooner rather than later. But the taxonomy helpfully reveals the two-sided character of privacy commerce. Rounding out that basic insight, one might add that there are privacy sellers and privacy buyers, privacy borrowers and privacy lenders, privacy principals and privacy agents, privacy capital and privacy debt, privacy currency and privacy assets. There are secondary markets and tertiary markets. As Professor Elvy notes, the list of privacy intermediaries includes privacy ratings firms – firms that play much the same role as the bond ratings firms that participated so enthusiastically (and eventually, so devastatingly) in the subprime mortgage market of the early 2000s.
Having laid out this framework, in the rest of the article Professor Elvy thoughtfully parses the weaknesses of the commercial law of privacy and develops a counterpart set of prescriptions and recommendations for further evaluation and possible implementation. All of this is admirably immediate and concrete.
Her critique is linked model by model to the taxonomy; the review below condenses it in the interest of space. First, not all consumers have equal or fair opportunities to collect and market their private data. To some significant degree, and for reasons that may be beyond their control or influence, those consumers either cannot participate in the wealth-creating dimensions of privacy or, because of social, economic, or cultural vulnerabilities (Professor Elvy highlights children and tenants), are effectively coerced into participating. Second, the article repeats, with helpful added doses of commercial law context, the widespread contract law critique that consumers are presented with vague, illusory, and incomplete “choices” in respect of collection, aggregation, and use of private data. Third and fourth (to combine two categories of critique offered in the article), current market and legal understandings of privacy as commercial law treat privacy primarily as what one might call an “Article 2” asset, that is, in terms of sales of things. Overlooked in this developing commercial market is privacy as what one might call an “Article 9” asset, that is, as a source of security and securitization. The potentially predatory and discriminatory implications of that second character should be obvious to anyone with a passing familiarity with the history of consumer lending, and Professor Elvy hammers on those.
Paying for Privacy concludes with a review of the fragmented legal landscape for addressing these problems and a complementary summary of recommendations for improving the prospects of consumers while preserving valuable aspects of both PFP and PDE models. Professor Elvy nods in the direction of COPPA (the Children’s Online Privacy Protection Act) and the possibility of industry-specific or sector-specific regulation. Most of her energy is directed to clarifying the jurisdiction of the Federal Trade Commission with respect to PDE models to deal with unfair trade practices regarding privacy that do not fit into traditional or accepted models of harm addressable by the FTC. All of this has the air of the technical, but its broader substantive import should not be overlooked. Paying for Privacy serves as a helpful entrée to a newer, broader – and difficult — vision of privacy’s future.
We’ve moved! The Cyberlaw section of Jotwell is now the Technology Law section. Two trends in legal scholarship since Jotwell’s launch drove the decision. First, the “cyber-” prefix is no longer strongly associated with the broader field of Internet law. Instead, it tends to refer to specific subfields, like cybercrime and cybersecurity. Those are part of our beat, but hardly all of it. Second, scholars and reviewers have expanded their own interests outwards, using similar intellectual tools to study drones, robotics, and other technological topics. Our new name recognizes these shifts. We’re keeping the same URLs, so all the archives and new reviews will still be at cyber.jotwell.com. And everything else about the section remains the same, including our hard-working contributors. We look forward to sharing with you many more things we like (lots).
Jotwell Technology Law Section co-editors
A. Michael Froomkin
Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harvard L. Rev. (forthcoming 2017), available at SSRN.
Under current US First Amendment jurisprudence, the government can do very little to regulate speech online. It can penalize fraud and certain other kinds of false or potentially misleading speech; direct true threats; and infringement of intellectual property rights and related speech. But it cannot penalize most harassment, hate speech, falsity, and other speech that does immediate harm. Nor can the government generally bar particular speakers. Last Term, the Supreme Court struck down a provision of state law that tried to prevent convicted sex offenders from participating in “social media” where minors might also be participating.
There are good reasons for most of the limits the courts have imposed on the government’s speech-regulating powers—yet those limits have left a regulatory vacuum into which powerful private entities have stepped to regulate the speech of US social media users, suppressing a lot of speech that the government can’t, and protecting other speech despite their power to suppress it. The limits these intermediaries impose, with some important exceptions, look very similar whether the speech comes from the US or from a country that imposes heavier burdens on intermediaries to control the speech of their users. Klonick’s fascinating paper explores the evolution of speech regulation policies at major social media companies, particularly Twitter and Facebook, along with Alphabet’s (Google’s) YouTube.
Klonick finds “marked similarities to legal or governance systems with the creation of a detailed list of rules, trained human decision-making to apply those rules, and reliance on a system of external influence to update and amend those rules.” One lesson from her story may be the free speech version of ontogeny recapitulating phylogeny: regardless of what the underlying legal structure is, or whether an institution is essentially inventing a structure from scratch, speech regulations pose standard issues of definition (defamation and hate speech are endlessly flexible, not to mention intellectual property infringements), enforcement (who will catch the violators?), and equity/fairness (who will watch the watchmen?).
Klonick’s research also provides important insights on the relative roles of algorithms and human review in detecting and deterring unwanted content. While her article focuses on the guidelines followed by human decision-makers, those fit into a larger context of partially automated screening. Automated screening for child pornography seems to be a relative success story, as she explains. However, as many interested parties have pointed out in response to the Copyright Office’s inquiry on §512’s safe harbors and private content protection mechanisms, even with automated enforcement and “claiming” by putative copyright owners via Content ID, algorithms cannot avoid problems of judgment and equitable treatment, especially when some copyright owners have negotiated special rights to override the DMCA process, and keep contested content down regardless of its fair use status, once it’s been identified by Content ID.
Klonick’s account can also usefully be read alongside Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest. Tufekci covers some aspects of speech policies that are particularly troubling, including the misuse of Facebook’s “real name” policy to suppress activists in countries where using a formal name could potentially be deadly; targeted, state-supported attacks on activists that involve reporting them for “abuse” and hate speech; and content moderation that can be politically ignorant, or worse: “in almost any country with deep internal conflict, the types of people who are most likely to be employed by Facebook are often from one side of the conflict—the side with more power and privileges.” Facebook’s team overseeing Turkish content, for example, is in Dublin, disadvantaging non-English speakers and women (whose families are less likely to be willing to relocate for their jobs). Similarly, Facebook’s response to the real-name problem is to allow use of another name when it’s in common use by the speaker, but that usually requires people to provide documents such as school IDs. As Tufekci points out, documents using an alternate identity are most likely to be available to people in relatively privileged positions in developed countries, thus muting their protest but leaving similar people without such forms of ID exposed.
These details of implementation are far more than trivial. And Tufekci’s warning that governments quickly learn how to use, and misuse, platform mechanisms for their own benefit is a vital one. The extent to which an abuse team can be manipulated will, I expect, soon become a separate challenge for the content policy teams Klonick documents—if they decide to resist that manipulation, which is not guaranteed. Some of these techniques, moreover, resist handling by an abuse team even when identified. When government-backed teams overwhelm social media with trivialities in order to distract from a potentially important political event, as is apparently common in China, what policies and algorithms could identify the pattern, much less sort the wheat from the chaff?
Along with this comparison, Klonick’s piece offers the opportunity to revisit some relatively recent techno-optimists—West Coast code has started to look in places more like outsourced Filipino or Indian area codes, so what does that mean for internet governance? Consider Clay Shirky’s Cognitive Surplus: Creativity and Generosity in a Connected Age, a witty book whose examples of user-generated activism now seem dated, only seven years later, with the rise of “fake news” disseminated by foreign content farms, GamerGate, and revenge porn. It’s still true that, as Joi Ito wrote, “you should never underestimate the power of peer-to-peer social communication and the bonding force of popular culture. Although so much of what kids are doing online may look trivial and frivolous, what they are doing is building the capacity to connect, to communicate, and ultimately, to mobilize.” Because of this power, a legal system that discourages you from commenting on and remixing the first things you love, in communities who love the same thing you do, also discourages you from commenting on and remixing everything else. But what Klonick’s account makes clear is that discouragement can come from platforms as well as directly from governments, whether because of over-active filters such as Content ID that suppress remixes or because of more directly politicized interventions such as those Tufekci discusses.
Shirky’s book, like many of its era, was relatively silent about the role of government in enacting (or suppressing) the changes promoted by people taking advantage of new technological affordances. Consider one of Shirky’s prominent examples of the power of (women) organizing online: a Facebook group organized to fight back against anti-woman violence perpetrated in the Indian city of Mangalore by the religious fundamentalist group Sri Ram Sene. As Shirky tells it, “[p]articipation in the Pink Chaddi [underwear] campaign demonstrated publicly that a constituency of women were willing to counter Sene and wanted politicians and the police to do the same…. [T]he state of Mangalore arrested Muthali and several key members of Sene … as a way of preventing a repeat of the January attacks.” (Emphasis mine.) The story has a happy ending because actual government, not “governance” structures, intervened. How would the content teams at Facebook react if today’s Indian government decided that similar protests were incitements to violence?
The fact that internet intermediaries have governance aspirations without formal government power (or participatory democracy) also directs our attention to the influences on the use of that power. Klonick states that “platforms moderate content because of a foundation in First Amendment norms, corporate responsibility, and at the core, the economic necessity of creating an environment that reflects the expectations of its users. Thus, platforms are motivated to moderate by both the Good Samaritan purpose of § 230, as well as its concerns for free speech.” But note what drops out of that second sentence—explicit acknowledgement of the profit motive, which becomes both a driver of some speech protections and a reason, or an excuse, for some speech suppression. Pressure from advertisers, for example, led YouTube to crack down on “pro-terrorism” speech on the platform. Klonick also argues that “platforms are economically responsive to the expectations and norms of their users,” which leads them “to both take down content their users don’t want to see and keep up as much content as possible,” including by pushing back against government takedown requests. But this seems to me to equivocate about who the relevant “users” are—after all, if you’re not paying for a service, you’re the product it’s selling, and content that advertisers or large copyright owners don’t want to see may be far more vulnerable than content that individual participants don’t want to see.
One question Klonick’s story raised for me, then, was what a different system might look like. What if platforms were run the way public libraries are? Libraries are the real “sharing” economies, and in the US have resisted government surveillance and content filtering as a matter of mission. Similarly, the Archive of Our Own, with which I am involved, has user-centric rules that don’t need to prioritize the preservation of ad revenue. Although these rules are hotly debated within fandom, because what is welcoming to some users can be exclusionary to others, they are distinctively mission-oriented. (I should also concede that size, too, makes a difference—eventually, a large enough community that includes political content will attract government attention; Twitter hasn’t made a profit, but it has received numerous subpoenas and national security letters.)
Klonick suggests that the key to optimal speech regulation for platforms is some sort of participatory reform, perhaps involving both procedural and substantive protections for individual users. In other words, we need to reinvent the democratic state, embedding the user/citizen in a context that she has some realistic chance to affect, at least if she knows her rights and acts in concert with other users. The obvious problem is the one of transition: how will we get from here to there? Klonick understandably doesn’t take up that question in any detail. Absent the coercive power of real law, backed by guns and taxes, it’s hard for me to imagine the transition to participatory platform governance. Moreover, the same dynamics that brought us Citizens United make it hard to imagine that corporate interests—both platform and advertiser—would accede to any such mandates, likely raising First Amendment objections of their own.
Klonick’s article helps to identify how individual speech online is embedded in structures that guide and constrain speakers; its descriptive account will be very useful to understanding these structures. I worry, however, that understanding won’t be enough to save us. We want to think well of our governors; we don’t want to be living in 1984, or Brave New World. But the development of intermediary speech policies tells us, among other things, that we might end up looking from man to pig, and pig to man, and finding it hard to tell the difference.
Disclosure: Kate Klonick is a former student of mine, though this paper comes from her work years later.
In 1971, activist and community organizer Saul Alinsky summarized lessons from a lifetime of organizing in his book, Rules for Radicals: A Pragmatic Primer for Realistic Radicals. Published in what would be the twilight of his life, Rules for Radicals was in many ways a tactical field guide for those seeking to instigate widespread social change. It still influences social movements on both the left and right. And yet, today’s wired world is much different—and more dynamic—than Alinsky’s pre-internet society, which relied largely on centralized forms of mass communication.
Now, both activists and governments operate under a new set of diffuse structures and communication mediums. Twitter, Facebook, and the like alter the terms of engagement for public protest and participatory democracy. And Zeynep Tufekci’s new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, helps us understand precisely how networked communication can amplify social movements, while also sounding important notes of caution. In this way, while written as an accessible scholarly account rather than an operations manual, Tufekci’s book provides rules—or at least guideposts—for digital radicals.
Through detailed analysis of contemporary movements such as Occupy, Black Lives Matter, and the Gezi Park protests, coupled with comparisons to historical movements such as the Civil Rights movement of the 1950s and 1960s, Tufekci develops a framework for understanding how modern movements can exploit—and be exploited by—digital communication technologies.
What she highlights is that though social media permits movements to galvanize supporters quickly, helping them organize massive public protests in short order, something is lost in terms of internal, deliberative structure that a movement may need in order to survive down the stretch. Tufekci labels the collective bonds and capabilities developed through the constant maintenance of organizational communities “network internalities.” Internal organizational contestation has long-term value.
Tufekci analogizes the work of developing network internalities to the importance of building muscles for long-term durability. For example, she compares the March on Washington, which took months to plan and helped create enduring movement capacity through both formal and informal institutions, with the 2013 Gezi Park protests in Turkey. The Gezi Park protests were spawned almost overnight and helped generate a strong protest culture but, unfortunately, have not (yet) translated into a sustained political movement.
In other words, while the ability to organize rapidly is no doubt a real asset afforded by digital communication tools, it comes with attendant limitations—organizational structures only start to be developed after the movement’s first big moment, and often too late. Today’s movements may lack the organizational structure for making collective decisions, limiting their ability to make tactical shifts as the protests unfold.
Perhaps even more significantly, quickly organized protests may fail to signal any long-lasting organizational capacity or threat to those in power. For Tufekci, social movements are only as powerful as the capacities that they signal. She identifies three principal, but non-exclusive, capacities that are critical to movements’ success—narrative capacity (the ability to get the public’s attention and tell the movement’s story), disruptive capacity (the ability to interrupt the government’s business as usual), and electoral capacity (the ability to credibly endanger politicians’ electoral prospects).
On each of these measures, a movement may be able to organize massive numbers of people into a one-day protest (for example, the enormous Women’s March that followed Donald Trump’s inauguration), but if that protest does not credibly signal a threat to the government’s electoral prospects, its impact is greatly diluted, permitting the government to ignore, rather than engage with and potentially overreact to, the protest. Underscoring Tufekci’s point that participatory tactics are only as impactful as the capabilities they signal, Ben Wikler, a leader at MoveOn, recently implored people activated by Republican efforts to unwind the Affordable Care Act NOT to call congresspeople who didn’t represent them. Otherwise, the strength of the signal provided by such calls could be weakened and interpreted as posing no electoral threat.
In the midst of developing her helpful capacity-signals taxonomy for analyzing movements’ strengths, Tufekci foregrounds that although social media holds great promise in that it enables movements to circumvent traditional forms of media and gain direct attention for their respective causes, new forms of censorship are also being deployed. That is, governments and those in power are not sitting idly by—they too have in many instances embraced social media and used it to discredit mediums used by activists through the spread of fake news and conspiracy theories. Those in power are actively engaged in diminishing the attention movements receive.
But here, though an academic book rather than a practical field guide, Tufekci’s thorough analysis nevertheless might have benefited a bit from the inclusion of—or gesture toward—some tactical solutions, akin to the approach utilized by Alinsky. Tufekci’s lament of misinformation’s role in hampering social movements might have been accompanied by reference to particular suggestions activists could employ to provide their social media posts with credibility. For instance, the Witness organization, which trains activists on how to use video to protect human rights, instructs activists to set the date and time on their cameras and to capture contextualizing details from the scene, both of which verify the authenticity of the images.
But aside from a handful of missed opportunities to make the lessons from her analysis more concrete (which may have been outside the scope of an academic project), Tufekci’s book is a critical contribution for those seeking to understand how to best leverage social media for social change. While lauding movement activists’ integrity and commitment to participatory forms of engagement that involve many, Tufekci also gently nudges today’s activists to consider whether digital technologies can be utilized more efficiently and with longer-lasting effect. The book lives up to its title—highlighting networked activism’s power and, equally if not more importantly, uncovering its weaknesses so that they may be overcome.
Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, U. Toronto L. J. (2017), available at SSRN.
Mireille Hildebrandt’s forthcoming article is a companion piece to her Chorley Lecture of 2015. In the earlier piece, she highlights the relationship between the ‘deep structure of modern law’ and the printing press and written text, building on this foundation a case concerning constitutional democracy and transparency, both in the world of print and the world of electronic data. In this new paper, the emphasis is on law as computation, as compared with law as information in the earlier lecture.
Machine learning is often discussed as an opportunity for legal practice and adjudication, but what will that mean? Hildebrandt highlights how machine learning in the context of law is primarily a simulation of human reasoning found in written legal text; one needs to identify how law is associated with ‘meaningful information’ rather than information simpliciter. Key concerns with applying machine learning in law include the catch-22 of deskilled lawyers becoming unable to verify a machine’s output, and various ways in which such systems can be opaque.
Hildebrandt hopes that we can ‘speak law to the power of statistics’ and argues that machine learning and related practices and technologies ‘may contribute to better informed legal reasoning – if done well’. There is an interesting and healthy scepticism about the funding of current efforts and what this might mean for the consequences of what may be reported as innovation. Much of this relates, of course, to the driving factors around innovation in the legal profession and the changing ‘law firm’. The work therefore also sits within the body of literature now interrogating algorithmic governance (e.g. Cathy O’Neil’s Weapons of Math Destruction, Frank Pasquale’s The Black Box Society, and, more recently, the question of whether data protection law might provide a remedy for such concerns in Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not The Remedy You are Looking For).
Provocatively, Hildebrandt wonders whether the result of a certain type of interdisciplinary engagement is that law is simply treated as one kind of regulation e.g. in the mind of the law-and-economics scholar; this is contrasted with a (perhaps deliberately idealised) lawyer as the ‘dignified steward of individual justice and societal order’. Her response, which may resonate with many legal scholars, is to draw upon Neil MacCormick’s presentation of law as an ‘argumentative’ discipline (MacCormick did, as Hildebrandt does, engage with speech act theory as a means of understanding legal reasoning). The challenge, then, is to identify the way(s) to test and contest emerging forms of decision making, and to ensure that the relevant people are equipped with the skills and/or the nous to ask searching questions and to scrutinise the systems that we are rapidly putting in place.
This draft paper will appear in a much-anticipated issue of the University of Toronto Law Journal. Already, the Canadian journal has contributed to a debate around the legal singularity (of interest even if you think that the legal singularity is about as likely as The Singularity itself), in a special issue on artificial intelligence, big data and the law; the forthcoming issue, based around a March 2017 symposium, includes further contributions on democratic oversight and the future of legal education. Indeed, that question of how future lawyers will be trained is something that Hildebrandt ruminates upon in her article, and it struck a chord with this reviewer (currently working in a legal system where the training of solicitors is about to undergo significant change). If the next generation of lawyers and legal researchers is to be able to take on the socially important challenges outlined by Hildebrandt (especially in countering the arms race between those with the requisite resources and motivations), we may need to think a bit harder about the shape of the law school.
K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming, 2017), available at SSRN.
In the mid-2000s, digital activists spearheaded the net neutrality movement to ensure fair treatment of the customers of Internet Service Providers (ISPs), as well as to protect the companies trying to reach them. Net neutrality rules limit or ban preferential treatment; for example, they might prevent an ISP like Comcast from offering exclusive access to Facebook and its partner sites on a “Free Basics” plan. Such rules have a sad and tortuous history in the US: rebuffed under Bush, long delayed and finally adopted by Obama’s FCC, and now in mortal peril thanks to Donald Trump’s elevation of Ajit Pai to be chairman of the Commission. But net neutrality as a popular principle has had more success, animating mass protests and even comedy shows. It has also given long-suffering cable customers a way of politicizing their personal struggles with haughty monopolies.
But net neutrality activists missed two key opportunities. They often failed to explain how far the neutrality principle should extend, as digital behemoths like Google, Facebook, Apple, Microsoft, and Amazon wielded extraordinary power over key nodes of the net. Some commentators derided calls for “search neutrality” or “app store neutrality;” others saw such measures as logical next steps for a digital New Deal. Moreover, they did not adequately address key economic arguments. Neoliberal commentators insisted that the US would only see rapid advances in speed and quality of service if ISPs could recoup investment by better monetizing traffic. Progressives argued that “something is better than nothing;” a program like “Free Basics” probably benefits the disadvantaged more than no access at all.
In Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, K. Sabeel Rahman offers a theoretical framework to address these concerns. He offers a “definition of infrastructural goods and services” and a “toolkit of public utility-inspired regulatory strategies” as a way to “diagnose and respond to new forms of private power in a changing economy,” including powerful internet platforms. He also gives a clear sense of why the public interest in regulating large internet firms should trump investors’ arguments for untrammeled rights to profits—and demands “public options” for those unable to afford access to privately controlled infrastructure.
Law’s treatment of infrastructure has been primarily economic in orientation. For example, Brett Frischmann’s magnum opus, Infrastructure: The Social Value of Shared Resources, offered a sophisticated theory of the spillover benefits of transportation, communication, environmental, and other forms of infrastructure, building on economists’ analyses of topics like externalities and congestion costs. Rahman complements this work by highlighting the political and moral dimensions of infrastructure. The early 20th-century Progressive movement did not seek to regulate utilities simply because large firms might be inefficient. Progressives also worried directly about the power exercised by such firms: their ability to influence politicians, take an outsized share of GDP, and sandbag both rival firms and political opponents. As Rahman explains, “Industries triggered public utility regulation when there was a combination of economies of scale limiting ordinary accountability through market competition, and a moral or social importance that made the industries too vital to be left to the whims of the market or the control of a handful of private actors.”
Identifying the list of “foundational goods and services” meriting direct utility regulation is inevitably a mix of politics, science, and law. Determining, for example, whether broadband internet should be treated in a manner similar to telephone service, depends on scientific analysis (e.g., might it soon become easier to provide internet over electric lines to complement existing cable), political mandates (e.g., voters electing Republicans at this point may be assumed not to prioritize broadband regulation, as party lines on the issue are relatively clear), and legal judgments (e.g., is broadband so similar to wireline service that it would defeat the purpose of the relevant statutes to treat it far differently). This delicate balance of the “three cultures” of science, democracy, and law, means that the scope of utilities regulation will always be somewhat in flux. While the federal government is, today, chipping away at the category, future administrations may revive and expand it. If so, they will benefit from Rahman’s rigorous definition of infrastructure as “those goods and services which (i) have scale effects in their production or provision suggesting the need for some degree of market or firm concentration; (ii) unlock and enable a wide variety of downstream economic and social activities for those with access to the good or service; and (iii) place users in a position of potential subordination, exploitation, or vulnerability if their access to these goods or services is curtailed.”
Not just the scope but also the content of public utility regulation has evolved over time. As Rahman relates, three broad categories of regulation can provide a “21st century framework for public utility regulation:”
1) [F]irewalling core necessities away from behaviors and practices that might contaminate the basic provision of these goods and services—including through structural limits on the corporate organization and form of firms that provide infrastructural goods;
2) [I]mposing public obligations on infrastructural firms, whether negative obligations to prevent discrimination or unfair disparities in prices, or positive obligations to pro-actively provide equal, affordable, and accessible services to under-served constituencies; and
3) [C]reating public options, state-chartered, cheaper, basic versions of these services that would offer an alternative to exploitative private control in markets otherwise immune to competitive pressures.
These three approaches (“firewalls”, “public obligations”, and “public options”) have all helped increase the accountability of private power in the past (as the work of Robert Lee Hale, praised as an inspiration in Rahman’s article, has shown). Cable firms cannot charge you a higher rate because they dislike your politics. Nor can they squeeze businesses that they want to purchase, charging higher and higher rates to an acquisition target until it relents. Nor should regulators look kindly on holding companies that would more ruthlessly financialize essential services (or on the horizontal shareholding that functions similarly to such holding companies).
There are many legal scholars working in fields like communications law, banking law, and cyberlaw, who identify the limits of dominant regulatory approaches, but are researching in isolation. Rahman’s article provides a unifying framework for them to learn from one another, and should catalyze important interdisciplinary work. For example, it is well past time for those writing about search engines to explore how principles of net neutrality could translate into robust principles of search neutrality. The European Commission has documented Google’s abuse of its dominant position in shopping services. Subsequent remedial actions should provide many opportunities for the imposition of public obligations (such as commitments to display at least some non-Google-owned properties prominently in contested search engine results pages) and firewalling (which might involve stricter merger review when a megafirm makes yet another acquisition).
Rahman also shows a critical complementarity between competition law and public utility regulation. Antitrust concepts can help policymakers assess when a field has become concentrated enough to merit regulatory attention. Both judgments and settlements arising out of particular cases could inform the work of, say, a future “Federal Search Commission,” which could complement the Federal Communications Commission. The same problem of “bigness” that can allow a megafirm to abuse its platform by squeezing rivals also creates opportunities to abuse users. Just as the Consumer Financial Protection Bureau serves a vital function in consumer finance, a comparable body could protect the users of dominant platforms.
Many large internet platforms are now leveraging data advantage into profits, and profits into further domination of advertising markets. The dynamic is self-reinforcing: more data means providing better, more targeted services, which in turn attracts a larger customer base, which offers even more opportunities to collect data. Once a critical mass of users is locked in, the dominant platform can chisel away at both consumer and producer surplus. For example, under pressure from investors to decrease its operating losses, Uber has increased its cut from drivers’ earnings and has price discriminated against certain riders based on algorithmic assessments of their ability and willingness to pay. The same model is now undermining Google’s utility (as ads crowd out other information), and Facebook’s privacy policies (which get more egregiously one-sided the more the social network’s domination expands).
Rahman offers us a rigorous way of recognizing such platform power, offering a tour de force distillation of cutting edge social science and critical algorithm studies. Industries ranging from internet advertising to health care could benefit from a public utility-centered approach. This is work that could lead to fundamental reassessments of contemporary regulatory approaches. It is exactly the type of research that state, federal, and international authorities should consult as they try to rein in the power of many massive firms in our increasingly concentrated, winner-take-all economy.
Cite as: Frank Pasquale, Democracy Unchained (August 17, 2017) (reviewing K. Sabeel Rahman, Private Power, Public Values: Regulating Social Infrastructure in a Changing Economy, 39 Cardozo L. Rev. 5 (forthcoming, 2017), available at SSRN), https://cyber.jotwell.com/democracy-unchained/
Orly Lobel, The Law of the Platform, 101 Minn. L. Rev. 87 (2016), available at SSRN.
Until recently, the law of the online platform involved intermediary liability for online content and safe harbors like CDA §230 or DMCA §512. The recent rise of online service platforms, a/k/a the “Uberization of everything,” has challenged this model. What Orly Lobel calls the “platform economy”—which includes the delivery of services (see Task Rabbit), the sharing of assets (see Airbnb), and more—has led to new laws, doctrinal adjustments, and big questions. What happens when the internet meets the localized, physical world? Are these platforms newly disruptive, or old issues in new wrapping? And how do we best design regulations for technological change? The Law of the Platform will appeal to those looking for thoughtful discussion of these questions. It will also appeal, more practically, to those searching for an encyclopedic overview of the fast-developing law in this area, from permitting requirements to employment law to zoning.
Lobel argues that the platform economy represents the “third generation of the Internet”: built on online platforms, but affecting offline service markets. Unlike the first generation of the Web, which connected us to information through search engines, or the second generation, which disrupted publishing, news, music, and retail, the third generation is characterized by “transforming the service economy, allowing greater access to offline exchanges for lower prices.” The platforms do not themselves own the physical assets or hire the labor to which they provide access. Instead, they sell access and information—and desperately try to avoid labels like “employer” or “bank” that might lead to regulation. Lobel maps a number of these digital platforms to their physical world counterparts: Airbnb and VRBO to hotels; Parking Panda to parking sites; Uber and Lyft to taxis; and EatWith to restaurants.
Lobel’s take on these platforms is largely positive. She sees the platform economy as lowering transaction costs and leading to “the market… perfecting.” To name just a few of the characteristics Lobel observes: the platform economy creates economies of scale, connecting individuals in huge marketplaces. It reduces waste, and allows the more efficient use of privately owned resources. It allows both supply and demand to be broken down into smaller parts, facilitating smaller exchanges. It allows hyper-customization—you can now rent a “non-smoking, pet-friendly, Kosher, and partially furnished apartment for three nights in a specific neighborhood.” The platform economy reduces intermediation, getting rid of the middleman and thereby lowering costs. And importantly to Lobel, the dynamic ratings that platforms provide can reduce search costs and monitoring costs by providing incentives for good behavior by participants. Coase explained that, in the real world, high transaction costs would prevent many transactions from occurring; but according to Lobel, the logic, technology, and networks of trust that new platforms bring to bear can and do enable these previously lost transactions.
Lobel thus appears in many ways to be a platform optimist. There are indications, however, that such optimism might not be warranted. Uber lost $2 billion in 2015 and $2.8 billion in 2016, subsidizing both sides of transactions to hook drivers with bonuses and riders with cheaper rides. A transportation industry analyst estimated in November 2016 that Uber was covering 60 percent of the cost of each ride. The picture painted by these numbers does not suggest a company that is “the concept of supply and demand embodied,” but rather a behemoth using significant venture capital resources to establish market dominance.
This brings us to the second half of Lobel’s article, on regulation. Lobel asks whether new platforms are successful “because they are introducing new business models… or because they seek regulatory avoidance and generate value from such avoidance.” Again, she seems to side with the platforms, characterizing them as both perfecting existing markets (through competition) and creating new ones (through differentiation). VRBO, Airbnb, and Homeaway are not just substitutes for a hotel, but create a differentiated experience of adventuring at private homes. An Airbnb study in California found that fourteen percent of customers would not have visited San Francisco at all but for an Airbnb stay. And because the rentals are cheaper than hotels, people stay longer and spend more in the local economy. Lobel seems largely convinced that these platforms don’t just lower costs in existing markets, but create new markets as well.
But the billion-dollar question (or, in Uber’s case, the $68 billion question) is: are these platforms able to create these new markets because of innovation, or are they lowering costs by cleverly bypassing necessary regulatory regimes? What makes the platform economy legally disruptive is that these companies tend not to fit neatly into existing legal categories in regulated areas, like “employer” or “lender” or “bank.” Whether this is because of the law’s failure to keep pace with technological changes or these companies’ deliberate strategies to evade high-cost regulatory compliance through “sharewashing” is debatable. Back in March, the New York Times disclosed that Uber deliberately tagged and evaded enforcement authorities in Portland, OR; Boston; Paris; Las Vegas; and more. The DOJ is now investigating. But as Lobel points out, some attempts at regulation, like New York City’s taxicab medallion system, seem clearly geared towards protecting incumbents and keeping new actors out.
The middle third of the article taxonomizes the differences between illegitimately protectionist regulation and legitimate regulatory goals and regimes. Lobel divides platform regulations into three categories: (1) permitting, licensing, and price controls; (2) taxation; and (3) broadly speaking, “regulations that are about fairness, externalities, and normative preferences.” Lobel breezes through the tax issues, explaining that questions of collection are “largely technical” and platform providers should be responsible for tax collection for efficiency reasons. In contrast, Lobel characterizes regulations in the first category—permitting, occupational licensing, and price controls—as largely the result of industry capture, where incumbents extract rent at the expense of consumers and competitors (presumably, she’s not a fan of the bar). She argues that we should more directly regulate towards the goals these systems are designed to get at—safety, professionalism, and other forms of consumer protection—rather than using ex ante systems that favor incumbents.
The hardest cases, Lobel argues, are those that revolve around issues of “public welfare in the platform,” such as governing the characteristics and safety levels of particular neighborhoods (zoning) or protecting workers’ rights (employment laws). Her nuanced analysis of zoning regulations calls for empirical evaluation of the safety impact of short-term housing on residential neighborhoods. Her discussion of employment law makes two important observations: one, that the rise of the contingent workforce is not a feature of platforms alone; and two, that the resulting employment law issues—whether a worker is a covered employee or an independent contractor—also arise in cases having nothing to do with the platform economy (e.g., FedEx in the Ninth Circuit).
In other words, the legal disruption in these areas may have as much to do with the law itself, with older categories that are now breaking down in a number of areas, as with particular disruptive features of the platform economy. Solving these problems requires balancing competing social values, such as fairness with freedom of contract. “The platform provides new opportunities to continue these debates, but it does not transform or transcend these hard choices in any meaningful way.”
The last third of the article ventures into more dangerous territory. Lobel has previously done important work on the relationship between public regulation and private (or public-private) governance. She closes The Law of the Platform by returning to this topic. Where traditional regulation fails, Lobel argues, platforms themselves can, through private “regulation,” ensure consumer trust and a certain degree of consumer protection. Platforms do this by obtaining insurance, by voluntarily running background checks, and through rating and recording systems that track all transactions on a platform. It is this last form of governance that most excites Lobel, and most worries me.
“The confidence generated by state permitting, occupational licensing, and other regulatory requirements is substitutable with crowd confidence,” Lobel claims. Consumer review systems, Lobel proposes, now serve as a type of governance, forcing transparency better than a command-and-control public regulatory regime. “[W]atchdogging is crowdsourced,” she states. Constant data-gathering means prices will stay updated, and bad actors will quickly be uncovered, protecting consumers and ensuring their trust.
Unfortunately, Lobel does not discuss the downsides of ubiquitous data collection, from creating or exacerbating power disparities, to chilling positive behaviors in addition to negative behaviors, to the economic consequences of hacking. She does not address significant governance concerns—over transparency, discrimination, and self-serving behavior—that come from having this data housed in private, not regulatory or public, hands. And she does not discuss the economic or normative costs of business models formed on selling that privately gathered data back to government for a range of purposes, from infrastructure improvement to government surveillance.
The article closes with a general paean to dynamic and experimental governance as a better approach than command-and-control rule-making and enforcement. Experimentation (for example, in different localities) and data-gathering in the name of anti-discrimination policies are all well and good, but again there are costs to a more universal shift to softer enforcement that Lobel does not address here. Companies are often inspired to self-regulate because of a background threat of harsher government enforcement. The risk in a larger move towards soft self-governance over government regulation in the area of technological development is that consumer concerns will take a decided backseat under that kind of regime.
The Law of the Platform is rich and complicated, and it raises many questions. Lobel does romanticize the platform, even as she acknowledges public welfare issues. She also romanticizes a lighter regulatory touch in the area of technological development, even while recognizing the legitimacy of a number of consumer concerns. But her discussions throughout of legal disruption and regulatory design make this a piece well worth reading for anyone following changes to technology and the law.
“Welcome to the dark side of Big Data,” growls the last line of the first chapter of Cathy O’Neil’s recent book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. As that sentence (and that subtitle) suggest, this is not a subtle book. O’Neil chronicles harms from the widespread use of machine learning and other big data systems in our society. O’Neil is convinced that something ominous and harmful is afoot, and she lays out a bill of particulars listing dozens of convincing examples.
This is a book that I like (lots) because we need outspoken and authoritative chroniclers of the downsides of big data decisionmaking. It advances a carefully articulated and well-supported argument, delivered with urgency and passion. Readers yearning for a balanced look at both the benefits and the costs of our increasingly automated society, however, should keep searching.
If we built a prototype for a qualified critic of big data, her background would look a lot like O’Neil’s: Harvard math PhD, MIT postdoc, Barnard professor, hedge fund quant during the financial crisis, start-up data scientist. Throw in blogger (mathbabe.org) and Occupy organizer for good measure, and you cannot quibble with the credentials. O’Neil is an author who knows what she is talking about, who also happens to be a writer of compelling, clear prose, an evidently skilled interviewer, and a great speaker.
Perhaps most importantly, the book provides legal scholars with a concise and salient label—weapons of math destruction, or WMDs—to describe decisionmaking algorithms possessing three features: opacity, scale, and harm. This label and three-factor test can help us identify and call out particularly worrisome forms of automated decisionmaking.
For example, she seems to worry most—and have the most to say—about so-called “value added modeling” systems for assessing the effectiveness of teachers in public schools. Reformers such as Michelle Rhee, former Chancellor of the DC public schools, spurred by policies such as No Child Left Behind, embraced a data-centric model, which selected which teachers to fire based heavily on the test scores of their students. The affected teachers had little visibility into the magic formulae that decided their fate (opacity); these tests affected thousands of teachers around the country (scale); and good teachers were released from important jobs they loved, depriving their students of their talents (harm). When opacity, scale, and harm align in an algorithmic decisionmaking system, software can worsen inequality and ruin lives.
Building on these factors, O’Neil returns repeatedly to the important role of feedback in exacerbating (and sometimes blunting) the harm of WMDs. If we use the test results of students to identify topics they are not learning and to change what or how we are teaching, this is a positive and virtuous feedback loop, not a WMD. But when we decide to fire the bottom five percent of teachers based on those same scores, we are assuming the validity and accuracy of the test, making it impossible to use feedback to test the strength of those assumptions. The critical role of feedback is a key insight of the book.
The book brims with other examples of WMDs, devoting considerable attention to criminal recidivism scoring systems, employment screening programs, predictive policing algorithms, and even the U.S. News college ranking formula. O’Neil spends entire chapters covering big data systems that stand in our way of getting a job, succeeding at work, buying insurance, and securing credit.
Legal scholars who write about automated decisionmaking or artificial intelligence may be surprised to see this book reviewed in these pages. O’Neil’s book is long on description with very little attention paid to policy solutions. A book of deep legal scholarship, this is not. As capably as she writes about math and algorithms, O’Neil falters—and I’m guessing she would cop to this—when it comes to law and regulation, mixing equal parts: overly optimistic sentiments about laws like FCRA; vague descriptions of the prospect of Constitutional challenges to data practices; and unrealistic calls for new legislation.
Despite these extra-disciplinary shortcomings, this book should be read by legal scholars, who are not likely to already know all the stories in this book and who will find many compelling (if chilling) examples to cite. As one who does not focus on education policy, for example, I was struck by the detailed and personal stories of teachers fired because of the whims of value-added modeling. And even for the old stories I had heard before, I was struck by how well O’Neil tells them, distilling complicated mathematical concepts into easy-to-digest descriptions and using metaphor and analogy with great skill. I will never again think of a model without thinking of O’Neil’s lovely example of the model she uses to select what to cook for dinner for her children.
The book is in parts intemperate. But we live in intemperate times, and the problems with big data call for an intemperate call-to-arms. A more measured book, one which tried to mete out praise and criticism for big data in equal measure, would not have served the same purpose. This book is a counterpoint to the ceaseless big data triumphalism trumpeted by powerful partisans, from Google to the NSA to the U.S. Chamber, who view unfettered and unexamined algorithmic decisionmaking as their entitlement and who view criticism of big data’s promise as an existential threat. It responds as well to big data’s academic cheerleaders, who spread the word about the unalloyed wonderful potential for big data to drive innovation, grow the economy, and save the world. A milquetoast response would have been drowned out by these cheery tales, or worse, co-opted by them.
“See,” big data’s apologists would have exclaimed, “even Cathy O’Neil agrees about big data’s important benefits.” O’Neil is too smart to have written a book that could have been co-opted in this way. “Big Data has plenty of evangelists, but I’m not one of them,” O’Neil proudly proclaims. Neither am I, and I’m glad that we have a thinker and writer like O’Neil shining a light on some of the worst examples of the technological futures we are building.