The Journal of Things We Like (Lots)

Copyright, Smart Contracts, and the Blockchain

Balázs Bodó, Daniel Gervais, & João Pedro Quintais, Blockchain and Smart Contracts: The Missing Link in Copyright Licensing?, Int'l J. of L. & Info. Tech. (September 2018).

There has been growing academic interest in the topic of decentralised, distributed open ledger technology—better known as the blockchain (see my last Jot). While the literature has been substantial, the copyright implications of the blockchain have not received as much coverage from the research community, perhaps because the use cases have not been as prevalent in the media. Taking the usual definition of a blockchain as an immutable distributed database, it is easy to imagine some potential uses of the technology for copyright, and for the creative industries as a whole. Blockchain technology has been suggested for management of copyright works through registration, enforcement, and licensing, and also as a business model allowing micropayments and use tracking.

Blockchain and Smart Contracts: The Missing Link in Copyright Licensing? by three academics at the Institute for Information Law at the University of Amsterdam, tackles this subject in excellent fashion. The article has the objective of introducing legal audiences to many of the technologies associated with the blockchain. It goes into more specific treatment of various features, such as distributed ledger technology (DLT), digital tokens, and smart contracts, and the potential uses of these for copyright licensing specifically. The article is divided into three parts: an introduction to the technology, an analysis of its potential use for copyright licensing, and a look at possible problems.

The article explains that DLTs are consensus mechanisms which “ensure that new entries can only be added to this distributed database if they are consistent with earlier records.” (P. 4.) Other technical features include the ability to time-stamp transactions, and the potential to verify ownership of a work through the use of “wallets” and other cryptographic tools. This type of technology could be useful for various copyright use cases, such as allocating rights, registering ownership, and keeping track of expiration. Because an immutable and distributed record of ownership and registration would exist, DLTs could become a useful tool for the management of copyright works by collecting agencies.
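
The core property described here, an append-only record whose earlier entries cannot be quietly rewritten, can be illustrated in a few lines of code. The following is a minimal sketch in Python with invented field names; it is not any of the DLT designs the article surveys, only the hash-chaining and time-stamping idea they share:

```python
# Minimal sketch (not the authors' proposal): each registration entry embeds
# the hash of the previous one, so altering an old record invalidates every
# later hash. Field names and identifiers are illustrative only.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def register(ledger: list, work_id: str, owner: str) -> dict:
    entry = {
        "work_id": work_id,
        "owner": owner,                          # e.g. a wallet / public-key identifier
        "timestamp": time.time(),                # time-stamping the transaction
        "prev_hash": entry_hash(ledger[-1]) if ledger else None,
    }
    ledger.append(entry)
    return entry

def verify(ledger: list) -> bool:
    """Check that no earlier record has been silently altered."""
    return all(
        ledger[i]["prev_hash"] == entry_hash(ledger[i - 1])
        for i in range(1, len(ledger))
    )

ledger = []
register(ledger, "work-001", "author-key-A")
register(ledger, "work-001", "author-key-B")     # e.g. a transfer of ownership
assert verify(ledger)                            # tampering with entry 0 would break this
```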

The article then explains the concept of tokenization and the use of digital tokens. Any sort of data can be converted into a digital token, and tokens can express all sorts of rights. For example, tokenizing rights management information (RMI) could be useful for the expression and management of copyright works through licensing. Further action can be taken through a smart contract: software that interacts with the blockchain to execute if-then statements, and that can also run more complex commands and sub-routines expressing legal concepts. According to the authors, a large number of “dumb transactions” could be taken over by smart contracts, allowing royalties to be identified, allocated, and paid automatically. While deploying large-scale smart contract management mechanisms would be very complex, the authors envisage a system in which owners retain control over their own works and use smart contracts to allocate and distribute rights directly to users by means of these automated transactions.
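
The if-then mechanism the authors describe can likewise be sketched in miniature. The example below is a deliberately simplified illustration in plain Python rather than code for any real smart-contract platform, and the work identifier, parties, and percentages are all invented:

```python
# Toy sketch of the "if-then" logic the article attributes to smart contracts:
# when a licensed use is reported, check the recorded terms and split the
# payment among rights holders. All names and numbers are hypothetical.
RIGHTS = {
    "work-001": {
        "licence_fee_cents": 100,                 # micropayment required per use
        "shares_pct": {"author": 70, "publisher": 25, "collecting_society": 5},
    }
}

balances_cents = {"author": 0, "publisher": 0, "collecting_society": 0}

def report_use(work_id: str, payment_cents: int) -> bool:
    terms = RIGHTS.get(work_id)
    if terms is None or payment_cents < terms["licence_fee_cents"]:
        return False                              # condition not met: no licence granted
    for party, pct in terms["shares_pct"].items():
        balances_cents[party] += payment_cents * pct // 100   # automated royalty split
    return True

report_use("work-001", 100)
print(balances_cents)   # {'author': 70, 'publisher': 25, 'collecting_society': 5}
```

Even this toy version hints at the complexity the authors flag: encoding exceptions, limitations, and jurisdiction-specific rules would require far more than a single conditional.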

The article goes into detail on other potential uses, particularly the use of blockchain in registration practices, the potential for solving the orphan works problem, fair remuneration, and allocating rights through RMIs. This is done with both knowledge of the subject and rigour in the analysis of potential pitfalls.

The article’s best section is its analysis of the many potential issues that may arise in using DLT and smart contracts in copyright. The authors astutely identify the complex nature of copyright norms, and comment that the many variations from one jurisdiction to another may prove to be too complex for a medium that is looking for ease of execution. The authors comment:

In the case of blockchain it is hard, at least as of 2018, to detect high levels of enthusiasm that would lead, in the short term, to the legal recognition/protection of copyright-replacing blockchain-related technological innovations. (P. 22.)

This matches my own observations about this subject. I have found that while the hype is considerable, there are just too many concerns about the potential uses of blockchain technologies in this area. There are valid concerns about the scalability of the technology, but also about the need to deploy complex technological solutions that could be equally implemented with existing technology. The blockchain, we are told, can allow authors to publish their work with an immutable record of initial ownership, with remuneration awarded automatically. But reality may prove difficult to square with this vision. For starters, it may be difficult, if not impossible, to encode existing rights, exceptions, and limitations in a manner that can be executed in a smart contract; the authors explain the complexity of international copyright law, with mismatched rights and responsibilities across jurisdictions. Similarly, blockchain systems are expensive, and if the market is currently working well with offline and online systems, then it is difficult to see how a cumbersome, slow, and wasteful solution would be adopted. The authors finish the discussion by noting that there is a familiar feeling to the blockchain debate: DRM (digital rights management) was presented a decade or more ago as the enforcement solution that would end copyright infringement. Needless to say, it did not.

The question at the heart of any blockchain implementation always remains the same: what is the problem that you are trying to solve, and is the blockchain the appropriate technology to solve it?

Cite as: Andres Guadamuz, Copyright, Smart Contracts, and the Blockchain, JOTWELL (October 29, 2018) (reviewing Balázs Bodó, Daniel Gervais, & João Pedro Quintais, Blockchain and Smart Contracts: The Missing Link in Copyright Licensing?, Int'l J. of L. & Info. Tech. (September 2018)), https://cyber.jotwell.com/copyright-smart-contracts-and-the-blockchain/.

Don’t Believe It If You See It: Deep Fakes and Distrust

Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN.

It’s no secret that the United States and much of the rest of the world are struggling with information and security. The flow of headlines about data breaches, election interference, and misuse of Facebook data show different facets of the problem. Information security professionals often speak in terms of the “CIA Triad”: confidentiality, integrity, and availability. Many recent cybersecurity incidents involve problems of confidentiality, like intellectual property theft or theft of personally identifiable information, or of availability, like distributed denial of service attacks. Many fewer incidents (so far) involve integrity problems—instances in which there is unauthorized alteration of data. One significant example is the Stuxnet attack on Iranian nuclear centrifuges. The attack made some centrifuges spin out of control, but it also involved an integrity problem: the malware reported to the Iranian operators that all was functioning normally, even when it was not. The attack on the integrity of the monitoring systems caused paranoia and a loss of trust in the entire system. That loss of trust is characteristic of integrity attacks and a large part of what makes them so pernicious.

Bobby Chesney and Danielle Citron have posted a masterful foundational piece on a new species of integrity problem that has the potential to take such problems mainstream and, in the process, do great damage to trust in reality itself. In Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, Chesney and Citron explain a range of possible uses for “deep fakes,” a term that originated with the superimposition of celebrities’ faces onto porn videos, but that they use to describe “the full range of hyper-realistic digital falsification of images, video, and audio.” (P. 4.)

After explaining the technology that enables the creation of deep fakes, Chesney and Citron spin out a parade of (plausible) horribles resulting from deep fakes. Individual harms could include exploitation and sabotage, such as a fake compromising video of a top draft pick just before a draft. (P. 19.) The equally, if not more, worrisome societal harms from deep fakes include manipulating elections through timely release of damaging videos of a candidate, eroding trust in institutions through compromising videos of their leaders, exacerbating social divisions by releasing videos of police using racial slurs, spurring a public panic with recordings of government officials discussing non-existent disease outbreaks, and jeopardizing national security through videos of U.S. troops perpetrating atrocities. (Pp. 22-27.)

So what can be done? The short answer appears to be not much. The authors conclude that technology for detecting deep fakes won’t save us, or at least won’t save us fast enough. Instead, they “predict,” but don’t necessarily endorse, “the development of a profitable new service: immutable life logs or authentication trails that make it possible for the victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted.” (P. 54.) This possible “fix” to the problem of deep fakes bears more than a passing resemblance to the idea of “going clear” spun out in Dave Eggers’ book The Circle. (Pp. 239-42.) In the novel, politicians begin wearing 24-hour electronic monitoring and streaming devices to build the public’s trust—and then others are pressured to do the same because, as Eggers puts it, “If you aren’t transparent, what are you hiding?” (P. 241.) When the “cure” for our problems comes from dystopian fiction, one has to wonder whether it’s worse than the disease. Moreover, companies offering total life logs would themselves become ripe targets for hacking (including attacks on confidentiality and integrity) given the tremendous value of the totalizing information they would store.

If tech isn’t the answer, what about law? Chesney and Citron are not optimistic about most legal remedies either. They are pessimistic about the ability of federal agencies, like the Federal Trade Commission or Federal Communications Commission, to regulate our way out of the problem. They do identify ways that criminal and civil remedies may be of some help. Victims could sue deep fake creators for torts like defamation and intentional infliction of emotional distress, and deep fake creators might be criminally prosecuted for things like cyberstalking (18 U.S.C. § 2261A) or impersonation crimes under state law. But, as the authors note, legal redress even under such statutes may be hampered by, for example, the inability to identify deep fake creators, or to gain jurisdiction over them. These statutes also do little to redress the societal, as opposed to individualized, harms from deep fakes.

For deep fakes perpetrated by foreign states or other hostile actors, Chesney and Citron are somewhat more optimistic, highlighting the possibility of military and covert actions, for example, to degrade or destroy the capacity of such actors to produce deep fakes. (Pp. 49-50.) They also suggest a way to ensure that economic sanctions are available for “attempts by foreign entities to inject false information into America’s political dialogue,” including attempts using deep fakes. (P. 53.) These tactics might have some benefit in the short term, but sanctions have not yet stemmed efforts at foreign interference in elections. And efforts to disrupt Islamic State propaganda have shown that attempts at digital disruption of adversaries’ capacities may often prompt a long-running battle of digital whack-a-mole.

One of the paper’s most interesting points is its discussion of another tactic that one might think would help address the deep fake problem, namely, public education. Public education is often understood to help inoculate against cybersecurity problems. For example, teaching people to use complex passwords and not to click on suspicious email attachments bolsters cybersecurity. But Chesney and Citron point out a perverse consequence of educating the public about deep fakes. They call it the “liar’s dividend”: “a skeptical public will be primed to doubt the authenticity of real audio and video evidence,” so those caught engaging in bad acts in authentic audio and video recordings will exploit this skepticism to “try to escape accountability for their actions by denouncing authentic video and audio as deep fakes.” (P. 28.)

Although the paper is mostly profoundly disturbing, Chesney and Citron try to end on a positive note by focusing on the content screening and removal policies of platforms like Facebook. They argue that the companies’ terms of service agreements “will be primary battlegrounds in the fight to minimize the harms that deep fakes may cause,” (P. 56) and urge the platforms to practice “technological due process.” (P. 57.) Facebook, they note, “has stated that it will begin tracking fake videos.” (P. 58.) The ending note of optimism is welcome, but rather underexplored in the current draft, leaving readers hoping for more details on what, when, and how much the platforms might be able and willing to do to prevent the many problems the authors highlight. It also raises fundamental questions about the role of private companies in performing what are arguably public functions. Why should this be the companies’ problem to fix? And if the answer is because they’re the only ones who can, then more basically, how did we come to the point where that is the case, and is that an acceptable place to be?

In writing the first extended legal treatment of deep fakes, Chesney and Citron understandably don’t purport to solve every problem they identify. But in a world plagued by failures of imagination that leave the United States reeling from unexpected attacks—Russian election interference being the most salient—there is tremendous benefit to thoughtful diagnosis of the problems deep fakes will cause. Deep fakes are, as Chesney and Citron’s title suggests, a “looming challenge” in search of solutions.

Cite as: Kristen Eichensehr, Don’t Believe It If You See It: Deep Fakes and Distrust, JOTWELL (September 27, 2018) (reviewing Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN), https://cyber.jotwell.com/dont-believe-it-if-you-see-it-deep-fakes-and-distrust/.

The GDPR’s Version of Algorithmic Accountability

Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN.

Scholarship on whether and how to regulate algorithmic decision-making has been proliferating. It addresses how to prevent, or at least mitigate, error, bias and discrimination, and unfairness in algorithmic decisions with significant impacts on individuals. In the United States, this conversation largely takes place in a policy vacuum. There is no federal agency for algorithms. There is no algorithmic due process—no notice and opportunity to be heard—not for government decisions, nor for private companies’. There are—as of yet—no required algorithmic impact assessments (though there are some transparency requirements for government use). All we have is a tentative piece of proposed legislation, the FUTURE of AI Act, that would—gasp!—establish a committee to write a report to the Secretary of Commerce.

Europe, however, is a different story. The General Data Protection Regulation (GDPR) took direct effect in EU Member States on May 25, 2018. It contains a hotly debated provision, Article 22, that may impose a version of due process on algorithmic decisions that have significant effects on individuals. For those looking to understand how the GDPR impacts algorithms, I recommend Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For. Edwards and Veale have written the near-comprehensive guide to how EU data protection law might affect algorithmic quality and accountability, beyond individualized due process. For U.S. scholars writing in this area, this article is a must-read.

Discussions of algorithmic accountability in the GDPR have, apart from this piece, largely been limited to the debate over whether or not there is an individual “right to an explanation” of an algorithmic decision. Article 22 of the GDPR places restrictions on companies that employ algorithms without human intervention to make decisions with significant effects on individuals. Companies can deploy such algorithmic decision-making only under certain circumstances (when necessary for contract or subject to explicit consent), and even then only if they adopt “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.” These “suitable measures” include “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” They also arguably include a right to obtain an explanation of a particular individualized decision. The debate over this right to an explanation centers on the fact that it appears in a Recital (which, in brief, serves as interpretative guidance), and not in the GDPR’s actual text. The latest interpretative document on the GDPR appears to agree with scholars who argue that a right to an explanation does exist, because it is necessary for individuals to contest algorithmic decisions. This suggests that the right to explanation will be oriented towards individuals, and making algorithmic decisions understandable by (or legible to) an individual person.

Edwards and Veale move beyond all of this. They do engage with the debate about the right to an explanation, pointing out both potential loopholes and the limitations of individualized transparency. They helpfully add to the conversation about the kinds of explanations that could be provided: (A) model-centric explanations that disclose, for example, the family of model, input data, performance metrics, and how the model was tested; and (B) subject-centric explanations that disclose, for example, not just counterfactuals (what would I have to do differently to change the decision?) but the characteristics of others similarly classified, and the confidence the system has in a particular individual outcome. But they worry that an individualized right to an explanation would in practice prove to be a “transparency fallacy”—giving a false sense of individual control over complex and far-reaching systems. They valuably add that the GDPR contains a far broader toolkit for getting at many of the potential problems with algorithmic decision-making. Edwards and Veale observe that the tools of omnibus data protection law—which the U.S. lacks—are tools that can also work in practice to govern algorithms.
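
To make the counterfactual flavour of a subject-centric explanation concrete, here is a toy sketch. The linear scoring model, feature names, and threshold are all hypothetical, and nothing in the GDPR or in Edwards and Veale's article prescribes this particular form:

```python
# Hypothetical illustration of a "counterfactual" explanation: for a toy
# linear scoring model, how much would one feature have to change for the
# decision to flip? The model, features, and threshold are invented.
WEIGHTS = {"income_k": 0.8, "late_payments": -2.0, "years_at_address": 0.5}
THRESHOLD = 40.0

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def counterfactual(applicant: dict, feature: str):
    """Change in a single feature needed to reach the approval threshold."""
    gap = THRESHOLD - score(applicant)
    return gap / WEIGHTS[feature]

applicant = {"income_k": 30, "late_payments": 3, "years_at_address": 2}
print(score(applicant))                            # 19.0 -> below 40, so rejected
print(counterfactual(applicant, "income_k"))       # 26.25: raise income by that many thousands
print(counterfactual(applicant, "late_payments"))  # -10.5: infeasible (would need -7.5 late payments)
```

Even this caricature shows why the authors worry about a "transparency fallacy": knowing the counterfactual for one feature of one decision says little about whether the system as a whole is fair or well built.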

First, they point out that the GDPR consists of far more than Article 22 and related transparency rights. This is an important point to make to a U.S. audience, which might otherwise come away from the right to explanation debate believing that in the absence of a right to an explanation, algorithmic decision-making won’t be governed by the GDPR. That conclusion would be wrong. Edwards and Veale point out that the GDPR contains other individual rights—such as the right to erasure, and the right to data portability—that will affect data quality and allow individuals to contest their inclusion in profiling systems, including ones that give rise to algorithmic decision-making. (I was surprised, given concerns over algorithmic error, that they did not also discuss the GDPR’s related right to rectification—the right to correct data held on an individual—which has been included in calls for algorithmic due process by U.S. scholars such as Citron & Pasquale and Crawford & Schultz.) These individual rights potentially give individuals control over their data, and provide transparency into profiling systems beyond an overview of how a particular decision was reached. But there remains the question of whether individuals will invoke these rights.

Edwards and Veale identify that the GDPR goes beyond individual rights to “provide a societal framework for better privacy practices and design.” For example, the GDPR requires something like privacy by design (data protection by design and by default), requiring companies to build data protection principles, such as data minimization and purpose specification, into developing technologies. For high-risk processing, including algorithmic decision-making, the GDPR requires companies to perform (non-public) impact assessments. And the GDPR includes a system for formal co-regulation, nudging companies towards codes of conduct and certification mechanisms. All of these provisions will potentially influence design and best practices in algorithmic decision-making. Edwards and Veale argue that these provisions—aimed at building better systems at the onset, and providing ongoing oversight over systems once deployed—are better suited to governing algorithms than a system of individual rights.

Edwards and Veale are not GDPR apologists. They recognize significant limitations in the law, including the lack of a true class-action mechanism, even where the GDPR contemplates third-party actions by NGOs. They acknowledge that data-protection authorities are often woefully underfunded and understaffed. And, like others, they point out mismatches between the GDPR’s language and current technological and social practices—asking, for example, whether behavioral advertising constitutes an algorithmic “decision.” But they helpfully move the conversation about algorithmic accountability away from the “right to an explanation” and towards the broader regulatory toolkit of the GDPR.

Where the piece falters most is in its almost offhand dismissal of individualized transparency. Some form of transparency will be necessary for the regulatory system that they describe to work—a complex co-regulatory system involving impact assessments, codes of conduct, and self-certification. Without public oversight of some kind, that system may be subject to capture, or at least devoid of important feedback from both civil society and public experts. And, as the ongoing conversation about justifiability shows, both the legitimizing and the dignitary value of individualized decisional transparency cannot be dismissed so lightly.

I wish this piece had a different title. In dismissing the value of an individual right to explanation, the title obscures the valuable work Edwards and Veale do in charting other regulatory approaches in the GDPR. However the right to an explanation debate plays out, they show that unlike in the United States, algorithmic decision-making is in the regulatory crosshairs in the EU.

Cite as: Margot Kaminski, The GDPR’s Version of Algorithmic Accountability, JOTWELL (August 16, 2018) (reviewing Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN), https://cyber.jotwell.com/the-gdprs-version-of-algorithmic-accountability/.

The Difference Engine: Perpetuating Poverty Through Algorithms

Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).

We have a problem with poverty, which we have converted into a problem with poor people. Policymakers tout technology as a way to make social programs more efficient, but they end up encoding the social problems they were designed to solve, thus entrenching poverty and over-policing of the poor. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks uses three core examples—welfare reform software in Indiana, homelessness service unification in Los Angeles, and child abuse prediction in Pennsylvania—and shows that while they vary in how screwed up they are (Indiana terribly, Los Angeles a bit, and Pennsylvania very hard to tell), they all rely on assumptions that leave poor people more exposed to coercive state control. That state control both results from and contributes to the assumption that poor people’s problems are their own fault. The book is a compelling read and a distressing work, mainly because I have little faith that the problems Eubanks so persuasively identifies can be corrected.

Eubanks writes:

Across the country, poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result. Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect their most personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.

As Eubanks points out, the poor are test subjects because they offer “‘low rights environments’ where there are few expectations of political accountability and transparency.” Even those who do not care about poverty should be paying attention, however, because “systems first designed for the poor will eventually be used on everyone.”

Eubanks’ recommendation, even as more punitive measures are being enacted, is for more resources and fewer requirements. Homelessness isn’t a data problem; it’s a carpentry problem, and a universal basic income or universal health insurance would allocate care far better than a gauntlet of automated forms. Eubanks points out that automation, despite its promised efficiencies, has coincided with kicking people off of assistance programs. In 1973, nearly half of people under the poverty line received AFDC (Aid to Families with Dependent Children), but a decade later that figure was 30 percent (coinciding with the introduction of the computerized Welfare Management System) and now it is less than 10 percent. Automated management is a tool of plausible deniability, allowing elites to believe that the most worthy of the poor are being taken care of and that the unworthy don’t deserve care, as evidenced by their failure to comply with the various requirements to submit information and subject themselves to surveillance.

Eubanks begins with the most obvious disaster: Indiana’s expensive contract with IBM to get rid of most caseworkers and automate medical coverage. Thousands of people were wrongly denied coverage, creating trauma for medically vulnerable people even when the denials were ultimately reversed. Indiana’s failure to create a working centralized system led to some backlash. Eubanks quotes people who suggest that the result of the backlash was a hybrid human-computer system, which restored almost enough caseworkers to deal with the people who make noise, but not enough for those who can’t. Of course, human caseworkers have their own problems—accounts of implicit and even explicit racial bias abound—but discrimination is easily ported to statistical models, such that states with higher African-American populations have “tougher rules, more stringent work requirements, and higher sanction rates.” And Indiana’s automated experiment disproportionately drove African Americans off the TANF (Temporary Assistance for Needy Families) rolls, perhaps in part because the system treated any error (including those made by the system itself) as deliberate noncompliance, and many people simply gave up.

The Los Angeles homelessness story is different, but not different enough. It provides a useful contrast of a “progressive” use of data and computerization. The idea was to create “coordinated entry,” so that homeless people who contacted any service provider would be connected with the right resources, sorting between the short-term and long-term homeless, who need different services, some of which can be less than helpful if given to the wrong groups. There’s a lot of good there, including the idea of “housing first”: rather than limiting housing only to those who are sober, employed, etc., the aim is to get people housed because of how hard all those other things are without housing. Eubanks profiles a woman for whom coordinated entry was a godsend.

But Eubanks also identifies two core problems: (1) The system itself is under-resourced; all the coordination in the world won’t help when there are only 10 beds for every 100 people in need of them. (2) The information collected is invasive and contributes to the criminalization and pathologization of poor people. The data are kept with minimal security and no protection against police scrutiny, which is particularly significant because, as Eubanks rephrases Anatole France, “so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes.” Homeless people can rarely pay tickets, and so the unpaid fines turn into warrants (turning into days in jail when they can’t afford bail, even though these kinds of nuisance charges are usually dismissed once in front of a judge). People in the database turn into fugitives.

These two problems reinforce each other. Given the low chance of getting help, people are less willing to explain their circumstances, often stories of escalating misfortune and humiliation, to the representative of the state’s computer. The resource crunch also contributes to workers’ felt imperative to find the most deserving and thus to scrutinize every applicant for appropriate levels of dysfunctionality. Too little trauma, and services might be deemed unnecessary. But too much dysfunctionality can also be disqualifying—the housing authority might determine that a client is incapable of living independently. One group of caseworkers Eubanks discusses “counsel their clients to treat the interview at the housing authority like a court proceeding.” They also see vulnerable clients rejected by landlords; Section 8 vouchers to pay for housing are nice, but still require a willing landlord, and the vouchers expire after six months, meaning that a lot of clients just give up. Meanwhile, “[s]ince 1950, more than 13,000 units of low-income housing have been removed from Skid Row, enough for them all.” It’s also worth noting how much discretion remains with humans, despite the appearance of Olympian objectivity in a housing need score: clients are assessed based on self-reports, and they won’t always tell people they haven’t grown to trust about circumstances bearing on their needs, including trauma.

What really mattered to getting resources devoted to addressing homelessness in Los Angeles, Eubanks argues, was rights, not data. Court rulings found that routine police practices—barring sleeping in public and confiscating and destroying the property of homeless people found in areas where they were considered undesirable—were unconstitutional. Once that happened, tent cities sprang up in places visible to people with money and power. Better data helped in identifying what resources were needed where, but tent cities were the driver of reform.

Finally, the experience of child welfare prediction software in Allegheny County, Pennsylvania, has continuities with and divergences from the other two stories. The software is at the moment used just to back up individual caseworkers’ determinations of whether to further investigate child abuse based on a call to the child welfare hotline, though Eubanks already saw caseworkers tweaking their own estimates of risk to match the model’s, an instance of automation bias that ought to alarm us. Some of the problems were statistical: the number of child deaths and near-deaths in the county is thankfully very low, and you can’t do a good model with a handful of cases a year for a population of 1.23 million.

Setting the base-rate problem aside, you can’t actually measure levels of child abuse. You can measure proxies, such as how many calls to CPS (Child Protective Services) are made and how many children CPS removes from a home. As a result, the automated system ends up predicting “decisions made by the community (which families will be reported to the hotline) and by the agency and the family courts (which children will be removed from their families), not which children will be harmed.” Unfortunately, those proxies are precisely the ones we know are infected with persistent racial and class bias, so that bias is baked into the predictions. This is the same problem explained so well in Cathy O’Neil’s Weapons of Math Destruction, a good book to read along with this one.

In Allegheny County itself, “the great majority of [racial] disproportionality in the county’s child welfare services arises from referral bias, not screening bias.” Sometimes this arises from perceptions of neighborhoods being bad, so the threshold for reporting someone from those neighborhoods is lower—which in the US means minority neighborhoods. But the prediction system “focuses all its predictive power and computational might on call screening, the step it can experimentally control, rather than concentrating on referral, the step where racial disproportionality is actually entering the system.” And it gets worse: the model is evaluated for whether it predicts future referrals. “[T]he activity that introduces the most racial bias into the system is the very way the model defines maltreatment.”

In rural or suburban areas, where witnesses are rarer, no one may call the hotline. Families with enough resources use private services for mental health or addiction treatment and thus don’t create a record available to the state (if they don’t directly talk about child abuse in a way that triggers mandatory reporting). Either way, those disproportionately whiter and wealthier families stay out of the system for conduct that would, if they were visible to the system, increase their risk score. The system can provide very useful services, but those services then become part of the public record, helping define a family as at-risk. A child whose parents were investigated by CPS now has a record of interaction with the system that, when she becomes a mother, will increase her risk score if someone reports her. Likewise, use of public services is coded as a risk factor. A quarter of the predictive variables in the model are “direct measures of poverty”—TANF, SSI (Supplemental Security Income), SNAP (Supplemental Nutrition Assistance Program), and county medical assistance. Another quarter of the predictive variables measure “interaction with juvenile probation” and the child welfare agency itself, when “professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting” than poorer families. Nuisance calls by people with grudges are also a real problem.

Even if that didn’t bother you, consider this: of 15,000 abuse reports in 2016, at its current rate of (proxy-defined) accuracy, the system would produce 3,600 incorrect predictions. And the planned model is supposed to be “run on a daily or weekly basis on all babies born in Allegheny County.” This is a big step forward not just in extending the tech to everyone, but also in commitment to prediction. Prediction is about guessing how poor people might behave in the future based on data from their networks, not just about judging their past individual behavior, and thus it can infect entire communities and generations. At the same time, “digital poorhouses,” as Eubanks calls the networks into which data about poor people are fed, are hard to see and hard to understand, making them harder to organize against.
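
The arithmetic behind those figures, and the base-rate worry raised earlier, can be made explicit. The first two numbers below come from the passage above; the prevalence figure is purely hypothetical, chosen only to illustrate how rare outcomes interact with an error rate of roughly 24 percent:

```python
# Figures quoted above: roughly 15,000 abuse reports in 2016 and about 3,600
# incorrect predictions, implying an error rate of about 24%.
reports = 15_000
wrong = 3_600
error_rate = wrong / reports
print(f"implied error rate: {error_rate:.0%}")   # 24%

# Hypothetical prevalence, NOT a figure from the book: suppose only 2% of
# reported cases involve the severe harm being predicted. The genuinely
# high-risk cases are then far outnumbered by the system's own errors.
prevalence = 0.02
true_cases = int(reports * prevalence)           # 300 (hypothetical)
print(f"hypothetical true cases: {true_cases} vs ~{wrong} erroneous predictions")
```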

Eubanks also points out that parents can naturally resent outside scrutiny and often feel that once the child welfare system is involved the standards keep getting raised on them, no matter what they try to do. And caseworkers interpret resistance and resentment as danger signs. While these reactions aren’t directly dependent on the technology, they are human behaviors that change what the technology does in the world.

In theory, big data could increase transparency and decrease discrimination where that comes from the humans in the system. Unfortunately, that doesn’t seem to be what’s happening. Among other things, the purported “transparency” of algorithms, even putting trade secrets aside, is very much a transparency for the elite who can figure the code out, not for ordinary participants in democratic governance, who basically have to take experts’ explanations on faith.

In addition, Eubanks finds:

the philosophy that sees human beings as unknowable black boxes and machines as transparent…deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try to understand each other. Poor and working-class people in Allegheny County want and deserve more: a recognition of their humanity, an understanding of their context, and the potential for connection and community.

This sounds great, but I wonder if it is fully convincing, in the fallen world in which we live. On the other hand, given that there are other interventions that wouldn’t sort the “worthy” from the “unworthy” in the ways that current underfunded services are forced to do, it is certainly persuasive to argue that we shouldn’t try to move from biased caseworkers to biased algorithms.

Along with non-technical solutions, Eubanks offers some ethics for designers, focusing on whether the tools they make increase the self-determination and agency capabilities of the poor, and whether they’d be tolerated if targeted at the non-poor. I think she’s overly optimistic about the latter criterion, at least as applied to private corporate targeting, which we barely resist. The example of TSA airport screening is also depressing. Perhaps I’d suggest the modification that, if we expect wealthier people to buy their way out of the system, as they can with TSA Pre-check and CLEAR Global Entry (at least if they’re not Muslim), then there is a problem with the system. Informed consent and designing with histories of oppression in mind, rather than assuming that equity and good intentions are the default baselines, are central to her vision of good technological design.

Like the far more caustic Evgeny Morozov, Eubanks contends that we have turned to technology to solve human problems in ways that are both corrupting and self-defeating. And Eubanks doesn’t focus the blame on Silicon Valley. The call for automation is coming from inside the polity. In fact, while IBM comes in for substantial criticism for overpromising in the Indiana example, the real drivers in Eubanks’ story are the policy wonks who are either trying to shrink the system until it can be drowned in the bathtub (Indiana), or sincerely trying to build something helpful while resources are continually being drained from the system (Los Angeles and Pennsylvania).

Ultimately, Eubanks argues, the problem is that we’re in denial about poverty, an experience that will happen to the majority of Americans for at least a year between the ages of 20 and 65, while two-thirds of us will use a means-tested public benefit such as TANF, SNAP, Medicaid, or SSI. But we persist in pretending that poverty is “a puzzling aberration that happens only to a tiny minority of pathological people.” We pass a suffering man on the street and fail to ask him if he needs help. We don’t keep our tormented child in an isolated place, as they do in Omelas. Instead of walking away, we walk by—but we don’t meet each other’s eyes as we do so. This denial is expensive in so many ways—morally, monetarily, and even physically, as we build entire highways, suburbs, private schools, and prisons so that richer people don’t have to share in the lives of poorer people. It rots politics: “people who cannot meet each other’s eyes will find it very difficult to collectively govern.” Eubanks asks us to admit that, as Dan Kahan and his colleagues have repeatedly demonstrated in work on cultural cognition, our ideological problems won’t be solved with data, no matter how well formed the algorithm.

Cite as: Rebecca Tushnet, The Difference Engine: Perpetuating Poverty Through Algorithms, JOTWELL (July 18, 2018) (reviewing Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018)), https://cyber.jotwell.com/the-difference-engine-perpetuating-poverty-through-algorithms/.

Power – Lest it Should Be Forgotten

Yik Chan Chin and Changfeng Chen, Internet Governance: Exploration of Power Relationship, 1 Chinese Law eJournal 34 (2018).

There is a relatively new SSRN source I have found to be very useful: the Chinese Law e-Journal sponsored by the University of Hong Kong Faculty of Law (edited by Fu Hualing and Shitong Qiao, and thus referred to as Fu and Qiao, which appropriately might be translated as a “happy or blessed bridging”). This source is very broad with regard to the subjects it covers—many among them relating to Technology Law—and provides a valuable insight into how mainly but not exclusively Chinese researchers view developments in China and in the world.

Internet Governance: Exploration of Power Relationship, by Yik Chan Chin and Changfeng Chen, is included in this e-Journal, and was presented at the Giganet Symposium in Geneva in December 2017. That symposium was held back to back (“Day Zero”) with the annual meeting of the Internet Governance Forum (IGF), a United Nations forum that sees itself as perhaps the prime example of a multistakeholder platform for governance. The paper looks at the reality of Internet governance in China, in search of a mechanism that comes close to the IGF’s multistakeholder model. It provides both a valuable account of the realities of Internet governance in China, and a method for thinking about what constitutes power in blends of multistakeholder and directive governance.

The authors describe in detail the Beijing Internet Association (BIA), a body of more than 100 public and private entities that acts as an intermediary between government agencies and those entities. The researchers analyzed this association using social network analysis, questioning the actors in this setting about their interrelations. Their aim is to identify what they call “the significant force in shaping of Internet governance” power in China.

The authors identify power through three methods: (1) by identifying communications structures between the actors (For example: Which actors can communicate directly? Are there nodes that monopolize interactions?); (2) by assessing the capability of actors to act as brokers, i.e. the ability to bring other actors together to act and share information; and finally (3) by “capacity,” defined by the authors as a set of abilities to understand issues and influence interests. Using this methodology, the authors have—not surprisingly in the Chinese context—identified the secretariat of the BIA as the decisive seat of power in that Internet governance regime.
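
The first two of these measures map onto standard network statistics, degree and betweenness centrality. A minimal sketch, with invented actors and ties rather than the authors' survey data, shows how such measures single out a hub like the BIA secretariat:

```python
# Illustrative only: a toy hub-and-spoke network, not the article's data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("BIA secretariat", "government agency"),
    ("BIA secretariat", "Party organ"),
    ("BIA secretariat", "platform A"),
    ("BIA secretariat", "platform B"),
    ("platform A", "platform B"),
])

# (1) Communication structure: who is directly connected to whom.
degree = nx.degree_centrality(G)
# (2) Brokerage: who sits on the paths between otherwise unconnected actors.
betweenness = nx.betweenness_centrality(G)

for actor in G.nodes:
    print(f"{actor:20s} degree={degree[actor]:.2f} betweenness={betweenness[actor]:.2f}")
# In a structure like this, the secretariat scores highest on both measures --
# the pattern the authors report for the BIA. Their third measure, "capacity",
# comes from interview data and is not reproduced here.
```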

Nevertheless, they still regard the BIA as a structure that does incorporate multistakeholder interests, even if strongly directed by government and the Party via its secretariat. The BIA, according to their research, builds strongly on social rather than formal or legal binding forces, using coordination rather than directives, indicating a multistakeholder approach. The BIA is perceived as a pragmatic response to the complexities of the Internet, and a result of learning from failures of more directive interventions. The BIA oscillates between being a dissemination and feedback mechanism for government information and directives, and a self-regulatory body with the described elements of multistakeholderism. The authors also point out differences from other multistakeholder concepts, and refer to internal problems of the BIA model, in particular because it seeks to integrate commercially competing interests. They finally discuss the chances of the BIA’s developing a more self-coordinating rather than directive-oriented governance structure.

The description of the BIA in the paper provides useful information for those not too familiar with the more detailed workings of Internet governance mechanisms in China. Some of the problems the BIA is dealing with sound familiar, even if they may have different political connotations, such as the establishment of an “anti-rumor network,” not unlike attempts in other Internet governance structures to address “Fake News” and political manipulation.

Beyond such detailed insights into the realities of Internet regulation in China, the article achieves three things:
(1) It shows that even in China, inclusive governance mechanisms are used to address the limitations of direct centralized government regulation of complex technical, economic, and social issues, even if these mechanisms leave no doubt where the final decision-making power is situated.
(2) While those mechanisms might be observed by some as an indicator of a possible global convergence of Internet governance models, this article invites us to refocus on the role of power in current multistakeholder settings.
(3) The article provides us with a tool set that can help us assess what constitutes “power” in the context of mixed governance.

Cite as: Herbert Burkert, Power – Lest it Should Be Forgotten, JOTWELL (June 19, 2018) (reviewing Yik Chan Chin and Changfeng Chen, Internet Governance: Exploration of Power Relationship, 1 Chinese Law eJournal 34 (2018)), https://cyber.jotwell.com/power-lest-it-should-be-forgotten/.

An Argument for the Coherence of Privacy Law

William McGeveran, Privacy and Data Protection Law (2016).

William McGeveran’s new casebook on Privacy and Data Protection Law announces the death of the “death march” that anyone who has ever taught or taken a course in Information Privacy Law has encountered. The death march is the slog in the second half of the semester through a series of similar-but-not-identical federal sectoral statutory regimes, each given just one day of instruction, such as the Privacy Act, FCRA, HIPAA, Gramm-Leach-Bliley, and FERPA. Professors asked to cover so much substantive law beyond their area of scholarly focus (nobody can focus on all of these) usually resort to choosing only two or three. Even then, the coverage tends to be cursory and unsatisfying.

The death march points to a larger problem: information privacy law doesn’t really exist. At best, privacy law is an assemblage of barely related bits and pieces. The typical privacy course covers constitutional law, a little European Union data protection, a tiny bit of tort, some state law, and the death march of federal statutes. The styles of legal practice covered run the gamut from criminal prosecution and defense, to civil litigation, regulatory practice, corporate governance, and beyond. To justify placing so much in one course, we try futilely to bind together these bits and pieces through broad themes such as harm, social norms, expectations of privacy, and technological change.

My long-held doubt about the coherence of privacy law has led me to teach the course a bit apologetically, feeling like a fraud for pretending to find connections where there are almost none. I’m pleased to report that my belief isn’t universally held: McGeveran’s compelling new casebook is built on the idea that privacy law can be rationalized into a coherent area of practice and pedagogy, one it presents in an organized and tightly woven structure.

I don’t think I’m alone in the belief that privacy law lacks coherence. Daniel Solove, in his magisterial summary of privacy law, Understanding Privacy, argues that rather than give privacy a single, unified definition, the best we can do is identify a Wittgensteinian set of family resemblances of related concerns. Solove’s very good casebook on Information Privacy Law, co-authored with Paul Schwartz, reflects this pragmatic resignation. Their book starts with a long chapter quoting many scholars who cast privacy in different lights and philosophical orientations. Solove and Schwartz don’t do much to try to reconcile these inconsistent voices, suggesting that we ought not try to find any unified theory or consistent coherence in this casebook or this field. Having given up on coherence in chapter one, the rest of the book reads like a series of barely related silos. It’s no wonder that the authors also offer their book sliced into four smaller volumes, which to my mind work better standing on their own.

The other leading, also excellent, casebook, Privacy Law and Society, by Anita Allen and Marc Rotenberg, follows a similar organization, but without the introductory philosophical debate. It too presents privacy law as silos of substance and practice, dividing the field into five broad, but largely disconnected areas: tort, constitutional law, federal statutes, communications privacy, and international law.

McGeveran takes a very different approach. He divides his casebook into three parts, the first two advancing the coherence thesis, both representing refreshingly creative syntheses of privacy law. Part One, “Foundations,” devotes a relatively short chapter each to constitutional law, tort law, consumer protection law, and data protection. McGeveran wisely resists the urge to tell any of these four stories at this point in their full depth, delaying parts of each for later in the book. This survey method gives the student a better appreciation for the most important tools in the privacy lawyer’s toolkit; encourages more explicit comparisons between the four categories; and allows for learning through repetition and reinforcement when the topics are revisited later.

The other major innovation is McGeveran’s decision to single out consumer protection law as a distinct area of practice. This builds on work from Solove and Woodrow Hartzog, who have argued that we should treat the jurisprudence of the FTC as a form of common law, and from Danielle Citron, who has pointed to state attorneys general as unheralded great protectors of privacy. McGeveran’s book embraces both arguments, elevating the work of the FTC and state AGs to their due places as primary pillars of U.S. privacy law. This modernizes teaching of the subject, by reflecting what privacy practice has become in the 21st century, with many privacy lawyers advising clients about the FTC far more frequently than they think about tort or constitutional law.

Part Two is even more innovative. It consists of four chapters that follow stages in the “Life Cycle of Data”: “collection”, “processing and use”, “storage and security”, and “disclosures and transfers.” Solove’s influence is again felt here, as these stages echo the major parts of the privacy taxonomy he introduced in Understanding Privacy. Each stage of Part Two introduces new substantive law, organized around the types of data flows it governs. This prepares students for the issue spotting they will encounter in practice, centering on the data rather than on the artificial boundaries between areas of law. The techie in me appreciates the way this focuses student attention on the broad theme of the impact of technology on privacy.

Because these two parts are so innovative and successful, they serve as the spoonfuls of sugar that help the death march of Part Three go down (although admittedly even this part was still a bit of a slog when I taught from the book this past fall). Students are primed by this point to place statutes like FERPA or HIPAA into the legal framework of Part One and the data lifecycle of Part Two, making them reinforcing examples of the coherent whole rather than disconnected silos. This also reduces the costs (and the guilt) for instructors of cutting sections of the death march. They understand that, thanks to the foundational structures of Part One and Two, their students will be better equipped to encounter, say, educational privacy for the first time on the job.

Finally, as a work of scholarship, not merely pedagogy, McGeveran’s argument for the coherence of privacy law might be an important marker in the evolution of our still relatively young field. Roscoe Pound said that Warren & Brandeis did “nothing less than add a chapter to our law,” a quote well-loved by privacy law scholars. William Prosser has been credited for taking the next step, turning Warren and Brandeis’s concerns into concrete legal doctrine, in the form of the four privacy torts.

This book is positively Prosserian in its aspirations. McGeveran attempts to organize, rationalize, and lend coherence to a messy, incoherent set of fields that we’ve adopted the habit of placing under one label, even if they do not deserve it. I’m not entirely convinced that he has succeeded, that there is something singular and coherent called privacy law, but this book is the best argument for the proposition I have seen. And as a teacher, it is refreshing to leaven my skepticism with this well-designed, compelling new classroom tool.

Cite as: Paul Ohm, An Argument for the Coherence of Privacy Law, JOTWELL (May 22, 2018) (reviewing William McGeveran, Privacy and Data Protection Law (2016)), https://cyber.jotwell.com/an-argument-for-the-coherence-of-privacy-law/.

Black Box Stigmatic Harms (and How to Stop Them)

Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016).

There is a remarkable body of work on the US government’s burgeoning array of high-tech surveillance programs. As Dana Priest and Bill Arkin revealed in their Top Secret America series, there are hundreds of entities which enjoy access to troves of data on US citizens. Ever since the Snowden revelations, this extraordinary power to collate data points about individuals has caused unease among scholars, civil libertarians, and virtually any citizen with a sense of how badly wrong supposedly data-driven decision-making can go.

In Big Data Blacklisting, Margaret Hu comprehensively demonstrates just how well-founded that suspicion is. She shows the high stakes of governmental classifications: No Work, No Vote, No Fly, and No Citizenship lists are among her examples. Persons blackballed by such lists often have no real recourse—they end up trapped in useless intra-agency appeals under the exhaustion doctrine, or stonewalled from discovering the true foundations of the classification by state secrecy and trade secrecy laws. The result is a Kafkaesque affront to basic principles of transparency and due process.

I teach administrative law, and I plan to bring excerpts of Hu’s article into our due process classes on stigmatic harm (to update lessons from cases like Wisconsin v. Constantineau and Paul v. Davis). What is so evident from Hu’s painstaking work (including her diligent excavation of the origins, methods, and purposes of a mind-boggling alphabet soup of classification programs) is the quaint, even antique, nature of the Supreme Court’s decisionmaking on stigmatic harm. A durable majority on the Court has held that erroneous, government-generated stigma, by itself, is not the type of injury that violates the 5th or 14th Amendment. Only a concrete harm immediately tied to a reputational injury (stigma-plus) raises due process concerns. As Eric Mitnick has observed, “under the stigma-plus standard, the state is free to stigmatize its citizens as potential terrorists, gang members, sex offenders, child abusers, and prostitution patrons, to list just a few, all without triggering due process analysis.” Mitnick catalogs a litany of commentators who characterize this standard as “astonishing,” “puzzling,” “perplexing,” “cavalier,” “wholly startling,” “disturbing,” “odious,” “distressingly fast and loose,” “disingenuous,” “ill-conceived,” an “affront[] [to] common sense,” “muddled and misleading,” “peculiar,” “baroque,” “incoherent,” and my personal favorite, “Iago-like.” Hu shows how high the stakes have become thanks to the Court’s blockage of sensible reform of our procedural due process jurisprudence.

Presented with numerous opportunities to do so, the Court simply refuses to deeply consider the cumulative impact of a labyrinth of government classifications. We need legal change here, Hu persuasively argues, because there are so many problems with the analytical capacities of government agencies (and their contractors), as well as the underlying data they are relying on. Cascading, knock-on effects of mistaken classification can be enormous. In area after area, from domestic law enforcement to anti-terrorism to voting roll review, Hu collects studies from experts that indicate not merely one-off misclassifications, but a deeper problem of recurrent error and bias. The database bureaucracy she critiques could become an unchallengeable monolith of corporate and government power arbitrarily arrayed against innocents, preventing them from challenging their stigmatization both judicially and politically. When the state can simply use software and half-baked algorithms to knock legitimate voters off the rolls, without notice or due process, the very foundations of its legitimacy are shaken. Similarly, a lack of programmatic transparency and evaluative protocols in many settings makes it difficult to see how the traditional touchstones of the legitimacy of the administrative state could possibly be operative in some of the databases Hu describes.

Many scholars in the field of algorithmic accountability have been focused on procedural due process, aimed at giving classified citizens an opportunity to monitor and correct the data stored about them, and the processes used to analyze that data. Hu is generous in her recognition of the scope and detail of that past work. But with the benefit of her comprehensive, trans-substantive critique of big data blacklisting programs, she comes to the conclusion that extant proposals for reform of such programs may not do nearly enough to restore citizens’ footing, vis-à-vis government, to the level of equality and dignity that ought to prevail in our democracy. Rather, Hu argues that, taken as a whole, the current panoply of big data blacklisting programs offends substantive due process: basic principles that impose duties on government not to treat persons like things.

This is a bold intellectual move that reframes the debate over the surveillance state in an unexpected and clarifying way. Isn’t there something deeply objectionable about the gradual abdication of so many governmental, humanly-judged functions to private sector, algorithmically-processed databases and software—especially when technical complexity is all too often a cloak for careless or reckless action? For someone unfamiliar with the reach, fallibility, and stakes of big data blacklisting, it might seem jarring to contemplate that a pervasive, largely computerized method of classifying citizens might be as objectionable as, say, a law forbidding the teaching of foreign languages, or denying the right to marry to prisoners (other laws found to violate substantive due process). However, Hu has done vital work to develop a comprehensive case against big data blacklisting that makes several of its instantiations seem at least as offensive to constitutional values as those restrictions.

Moreover, when blacklisting itself is so resistant to traditional procedural due process protections (for example, in cases of black box processing), substantive due process claims may be the only way to relieve citizens of the burdens it imposes. Democratic processes cannot be expected to protect the discrete, insular minorities targeted unfairly by big data blacklisting. Even worse, these “invisible minorities” may never even be able to figure out exactly what troubling classifications they have been tarred with, impairing their ability to make a political case for themselves.

Visionary when it was written, Big Data Blacklisting becomes more relevant with each data breach and government overreach in the news. It is agenda-setting work that articulates the problem of government data processing in a new and compelling way. I have rarely read work that so meticulously credits pathbreaking work in the field, while still developing a unique perspective on a cutting edge legal issue. I hope that legal advocacy groups will apply Hu’s ideas in lawsuits against arbitrary government action cloaked in the deceptive raiments of algorithmic precision and data-driven empiricism.

Cite as: Frank Pasquale, Black Box Stigmatic Harms (and how to Stop Them), JOTWELL (April 17, 2018) (reviewing Margaret Hu, Big Data Blacklisting, 67 U. Fla. L. Rev. 1735 (2016)), https://cyber.jotwell.com/black-box-stigmatic-harms-and-how-to-stop-them/.

New Kids on the Blockchain

Bitcoin was created in 2009 by a member of a cryptography mailing list who goes under the pseudonym of Satoshi Nakamoto, and whose identity is still a mystery. The project was designed to become a decentralised, open source, cryptographic method of payment that uses a tamper-free, open ledger to store all transactions, also known as the blockchain. In a field that is replete with hype and shady operators, David Gerard’s book Attack of the 50 Foot Blockchain has become one of the most prominent and needed sceptical voices studying the phenomenon. Do not let the amusing title deter you; this is a serious book filled with solid and thorough research that goes through all of the most important aspects of cryptocurrencies, and it is one of the most cited take-downs of the technology.

The book covers a wide range of topics on cryptocurrencies and blockchain, and does so in self-contained chapters that can be read almost independently. The book does not follow a strict chronological order. This structure makes the book all the more readable, and a delight from cover to cover, not only because of the interesting subject matter, but also because of Gerard’s wit and knowledge.

The work follows three main themes: explaining Bitcoin and unearthing its various problems; documenting the prevalence of fraudulent practices and unsavoury characters in cryptocurrencies; and explaining blockchains and smart contracts, together with the main criticisms of each.

In the introductory section Gerard does an excellent job of explaining the technology without the usual techno-jargon that surrounds the subject, and goes through the main reasons that proponents advocate the use of Bitcoin. Cryptocurrencies are often offered as a decentralised solution to the excesses of financial institutions and governments. “Be your own bank” is cited as one of the advantages of Bitcoin, but Gerard accurately describes the various problems that this presents. Being your own bank means requiring security fit for a bank, which most people do not have. Moreover, some of the characteristics present in Bitcoin make it particularly unsuitable as a means of payment. Bitcoin is based on scarcity; only 21 million coins will ever be mined, so there is a strong incentive to hoard coins rather than spend them. Cryptocurrency transactions are also irreversible; if you lose coins in a hack, or make a transaction mistake, the coins are gone forever.
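For readers curious where the 21 million figure comes from, it is not an arbitrary constant written into a single line of code but the sum of Bitcoin’s published issuance schedule, in which the reward for mining a block starts at 50 coins and halves every 210,000 blocks. A minimal sketch of that arithmetic, in Python purely for illustration:

```python
# Sketch of why Bitcoin's supply tops out just under 21 million coins.
# The block reward starts at 50 BTC and halves every 210,000 blocks until
# it rounds down to zero (rewards are denominated in whole satoshis;
# 1 BTC = 100,000,000 satoshis).

BLOCKS_PER_HALVING = 210_000
INITIAL_REWARD_SATOSHIS = 50 * 100_000_000

total_satoshis = 0
reward = INITIAL_REWARD_SATOSHIS
while reward > 0:
    total_satoshis += BLOCKS_PER_HALVING * reward
    reward //= 2  # integer halving, as in the protocol

print(total_satoshis / 100_000_000)  # ~20,999,999.98 BTC
```

The series converges just below 21 million, which is why the cap is usually quoted as a round number.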

In the chapters dealing with fraud, Gerard does an excellent job of going through the dark side of cryptocurrencies. Cryptocurrencies rely on intermediaries, either exchanges that will accept your “fiat” currency and convert it into digital currency, or “wallets”, where people can store their coins. The problem is that this unregulated space has attracted fraudsters and amateurs in equal measure, and during its short history it has been filled with Ponzi schemes, con men, and manipulators. Gerard also describes the use of Bitcoin in the Dark Web, where it is the currency of choice of various illegal businesses.

But it is in his criticism of blockchain technology that the book really shines. Even vocal Bitcoin critics used to think that even if cryptocurrencies failed, the underlying blockchain technology would remain and become an important contribution to the way in which online transactions are made. Gerard became one of the first critics of the blockchain itself.

The blockchain is an immutable and decentralised record of all transactions that requires no trust in an intermediary. This is supposed to prove useful in any situation where a trustless system is required. But as Gerard points out, there are not many situations where that is actually the case, and most of the uses presented by blockchain advocates do not need one. The book describes two main issues with using blockchain in a business environment. Firstly, decentralisation is always expensive; there is a reason why many companies have been moving towards centralisation of network services through the hiring of cloud providers. Decentralisation means that you have to make sure that everyone is using the same protocols and compatible systems, and you also have to account for redundancies, because you are relying on services that are not always available. The result is slower and more cumbersome networks that spend more energy to produce a similar result. Secondly, if data management is a problem in your business, then adding a blockchain won’t make the problem go away. Instead, Gerard sets out a number of questions that must be asked whenever anyone is thinking of adding a blockchain to an existing business model, including whether the technology can scale, and whether a centralised system would work just as well.

Finally, the book analyses smart contracts, which are contracts conducted digitally through a combination of cryptocurrencies and tokens recorded on a blockchain. The idea is that the parties code the terms and conditions of their agreement into an immutable token written in computer code, which defines the parameters of the contract (conditions, payment, operational parameters); anyone who wants to transact with them writes another token that meets those parameters, at which point the payment is made and the electronic contract is concluded. The resulting contract is immutable and irrevocable.
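To make the if-then mechanics concrete, here is a deliberately toy sketch of the kind of escrow logic a smart contract encodes. It is written in Python for readability rather than in an actual contract language such as Solidity, and the names and conditions are invented for illustration; nothing here is drawn from the book itself.

```python
# Illustrative sketch of smart-contract "if-then" logic: a toy escrow
# that releases payment only once an agreed condition is met.
# Plain Python for readability; real smart contracts are written in
# languages such as Solidity and executed by the nodes of the network.

from dataclasses import dataclass

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    price: int                      # amount locked into the contract
    delivery_confirmed: bool = False
    settled: bool = False

    def confirm_delivery(self, confirmed_by: str) -> None:
        # Condition coded into the contract: only the buyer may confirm.
        if confirmed_by == self.buyer:
            self.delivery_confirmed = True

    def settle(self) -> str:
        # The if-then core: once the coded condition holds, payment is
        # released automatically; nothing in the logic allows the result
        # to be unwound afterwards.
        if self.delivery_confirmed and not self.settled:
            self.settled = True
            return f"{self.price} paid to {self.seller}"
        return "conditions not met; funds remain locked"

contract = EscrowContract(buyer="alice", seller="bob", price=100)
contract.confirm_delivery("alice")
print(contract.settle())  # -> "100 paid to bob"
```

The step to notice is settle(): once the coded condition is satisfied, payment is released automatically, with no later stage at which the parties, a court, or anyone else can reverse the outcome.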

Gerard accurately points out that this combination of immutability and irrevocability is toxic in a legal environment, as any error in the code can lead to serious legal consequences. Traditional contracts rely on human intent, and if a mistake is made or a conflict arises, the parties can go to court. But in a smart contract, the code is the last word, and there is no recourse in case of an error or a conflict other than trying to re-write the blockchain, which is not possible unless a majority of participants in the scheme agree to change the code.

This book is a must-read for anyone interested in an easy-to-read and enjoyable criticism of cryptocurrencies and the blockchain. It is a testament to the strength of the ideas presented that we are only now starting to see a much-needed check on the blockchain hype from various quarters. Even if cryptocurrencies manage to get past this early stage unscathed, it will be books like this one that help to shift the focus away from the narrative of bubbles and easy gains.

Cite as: Andres Guadamuz, New Kids on the Blockchain, JOTWELL (April 3, 2018) (reviewing David Gerard, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts (2017)), https://cyber.jotwell.com/new-kids-on-the-blockchain/.

Governing The New Governors and Their Speech

Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN.

Jack Balkin is one of the leading thinkers and visionaries in the fields of information and cyber law. Every one of his scholarly contributions must be closely read. His recent article, Free Speech in the Algorithmic Society is no exception. It is highly recommended to those interested in fully understanding the current and future tensions between emerging technologies and human rights. The article also provides numerous gems – well-structured statements that eloquently articulate the central challenges of the day, some of which are quoted below.

The article starts off by introducing and defining the “Algorithmic Society” as one that “facilitates new forms of surveillance, control, discrimination and manipulation by both government and by private companies.” As before, society is driven by those seeking fame and fortune. However, much has changed. For instance, Balkin lists the four main sources of wealth the digital age brings about as “intellectual property, fame, information security and Big Data.” To achieve such wealth in this society, individuals are subjected to being governed by algorithms. At the same time, firms and governments achieve “practical omniscience”, not only knowing what is happening but often accurately predicting what will happen next. These enhanced abilities, Balkin warns, lead to power asymmetries between groups of people (and not only between individuals and technologies) and generate several substantial challenges.

The article follows Balkin’s earlier scholarship which addressed the changing role of free speech doctrines and the First Amendment in the digital age, and the way they apply to the Internet titans. Indeed, Balkin explains that the central constitutional questions of this age will be those related to free speech and freedom of expression. The “Frightful Five” (and any future giants that might emerge) will cry for free speech protection to fend off intervention in their platforms and business models. Yet, at the same time, they will shrug off claims that they must comply with free speech norms themselves, while noting that they are merely private parties to whom these arguments do not pertain.

Continuing this line of scholarship, “Free Speech in the Algorithmic Society” introduces a rich discussion, which spans several key topics, starting with the rise of “information fiduciaries”. Balkin defines these to include digital entities that collect vast amounts of personal data about their users yet offer very limited insight into their internal operations. Naturally, this definition includes leading search engines and social media platforms. Balkin concludes that information fiduciaries should be subject to some of the duties to which classic fiduciaries have been subject. To summarize their central obligation, Balkin states that they must not “act like con artists – inducing trust in their end users to obtain personal information and then betraying end users…”. Clearly, articulating this powerful obligation in “legalese” will prove to be a challenge.

The article also introduces the notion of “algorithmic nuisance”. This concept is important when addressing entities that have not entered a contractual relationship with individuals, yet can potentially negatively impact them. Balkin explains that these entities rely on algorithmic processes to make judgments about individuals at important and even crucial junctures. Such reliance – when extensive – inflicts costs and side effects on those subjected to the judgment. This is especially true of individuals singled out as risky due to error. Balkin explains such individuals may be subjected to discrimination and manipulation. Furthermore, some people will be pressured to “conform their lives to the requirements of the algorithm,” thus undermining their personal autonomy. To limit these problems, Balkin suggests that such “nuisance” be treated like other forms of nuisance in public and private law, drawing an interesting comparison to pollution and environmental challenges. As with pollution, Balkin suggests that those causing algorithmic nuisance be forced to “internalize the costs they shift onto others”. Balkin moves on to apply the concepts of “information fiduciaries” and “algorithmic nuisance” to practical examples such as smart appliances and personal robots.

The article’s next central point pertains to “New School Speech Regulation.” By this, Balkin refers to the dominant measures for curtailing speech in the digital age. As opposed to previous forms of speech regulation which addressed the actual speaker, today’s measures focus on dominant digital intermediaries, which control the flow of information to and from users. Balkin explains that regulating such entities is now “attractive to nation states” and goes on to detail the various ways this could be done. It should be noted that the analysis is quite U.S.-specific. Outside the U.S., nations are often frustrated by their inability to regulate the powerful (often U.S.-based) online intermediaries, and therefore the analysis of this issue is substantially different.

Beyond the actions of the state, Balkin points out that these online intermediaries, at their discretion, may take down materials that they consider abusive or in violation of their policies. Balkin notes that users “resent” the fact that the criteria are at times hidden and the measures applied arbitrarily. Yet these steps are often welcomed by users. At times, these steps might even prove efficient (to borrow from the outcomes of some analyses examining the actions of the company towns of previous decades – see my discussion here). Furthermore, relying on broad language to take seemingly arbitrary actions allows firms to punish “bad actors” whose actions are clearly frowned upon by the crowd, yet cannot be easily tied to an existing prohibition (if merely a detailed list of forbidden actions is strictly relied upon) – an important right to retain in an ever-changing digital environment.

Balkin further explains that the noted forms of speech regulation are closely related, and together form three important forces shaping the individual’s ability to speak online: (1) state regulation of speech; (2) the intermediary’s governance attempts, and (3) the government’s attempts to regulate speech by influencing the intermediary. This triangular taxonomy is probably the article’s most important contribution and must be considered when facing similar questions. Balkin later demonstrates how these forces unfold when examining the test cases of “The Right to Be Forgotten” and “Fake News.”

What can be done to limit the concerns noted here? Balkin does not believe these problems can solve themselves via market forces. He explains that individuals are limited to signaling their discontent with their “voice,” rather than by “exiting” (using the terminology introduced by Hirschman) – and the power of their voice is quite limited. It should be noted that some other forms of limited signaling might still unfold, such as reducing activity within a digital platform. Yet it is possible that such signaling will still prove insufficient. Rather than relying on markets or calling on regulators to resolve these matters, Balkin argues that change must come from within the companies themselves – by them understanding that they are now entities with obligations to promote free speech on a global level. One can only hope that this wish will be fulfilled. Reading this article and spreading its vision, in the hope that it makes its way to the leaders of today’s technology giants, will certainly prove to be an important step forward.

Cite as: Tal Zarsky, Governing The New Governors and Their Speech, JOTWELL (February 13, 2018) (reviewing Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, U.C. Davis L. Rev. (forthcoming 2018), available at SSRN), https://cyber.jotwell.com/governing-new-governors-speech/.

Money For Your Life: Understanding Modern Privacy

Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017).

The commercial law of privacy has long occupied a relatively marginal place in modern legal scholarship, situated in gaps among doctrinal exposition, critical conceptual elaboration, and economically-motivated modeling. Much of the explanation for the omission is surely technological. Until Internet technologies came along in the mid-1990s, it was difficult to turn private information into a “thing” that was both technically and economically worth buying and selling.

Technology and markets have passed the point of no return on that score. Claude Shannon, credited as the author of the insight that all information can be converted into digits, has met Adam Smith. Yet relevant legal scholarship has not quite found its footing. Paying for Privacy and the Personal Data Economy, from Stacy-Ann Elvy, offers a novel way forward. Professor Elvy’s article offers a nifty, highly concrete, and eminently useful framework for thinking about the commercial law of things that consist of assets derived from consumers’ private information. It is not only the case that commercial law is one of the legally-relevant attributes of privacy and privacy practices. Privacy can be thought of as a mode of commercial law.

Paying for Privacy lays out its argument in a series of simple steps. It begins with a brief review of the emergence of the now-familiar Internet of Things, network-enabled everyday objects, industrial devices, and related technologies that increasingly permeate and collect data concerning numerous aspects of individuals’ daily lives. That review is pertinent not merely to common claims about the urgency of privacy regulation but also and more importantly to the premise that the supply of data-collecting technologies by industry (with accompanying privacy-implicating features) is likely to lead soon to increased demand by consumers for privacy-mediating, privacy-regulating, and privacy-protecting instruments.

The supply/demand metaphor is purposeful, if somewhat speculative, for it leads to a thorough and useful description and taxonomy of instruments currently on offer. Those include “traditional” privacy models involving personal data traded for “free” services (such as Facebook) and “freemium” services (such as LinkedIn) that offer both subscription-based and “free” versions of their services, harvesting money from subscribers (and advertisers and partners) and money and data from the free users. More recent PFP or “Pay For Privacy” models include newer firms offering multiple versions of “pay for privacy” services. Those include “privacy as a luxury,” in which providers offer added privacy controls for users in exchange for higher payments, and privacy discounts, by which users get cheaper versions of services if they agree to participate in data monitoring and collection. Switching perspectives from the service to the consumer yields a series of models collected as the PDE, or “Personal Data Economy.” Those include the “data insights model,” companies that enable individual consumers to monitor and aggregate private information about themselves, perhaps for their own use and perhaps to monetize by offering to third parties. In the related “data transfer model,” companies broker markets in which consumers voluntarily collect and contribute data about themselves, making it available for transfer (typically, purchase) by third parties.

The taxonomy is only a snapshot of current practices. This field seems to be so dynamic that inevitably many of the details in the article will be superseded, no doubt sooner rather than later. But the taxonomy helpfully reveals the two-sided character of privacy commerce. Rounding out that basic insight, one might add that there are privacy sellers and privacy buyers, privacy borrowers and privacy lenders, privacy principals and privacy agents, privacy capital and privacy debt, privacy currency and privacy assets. There are secondary markets and tertiary markets. As Professor Elvy notes, the list of privacy intermediaries includes privacy ratings firms – firms that play much the same role as the bond ratings firms that participated so enthusiastically (and eventually, so devastatingly) in the subprime mortgage market of the early 2000s.

Having laid out this framework, in the rest of the article Professor Elvy thoughtfully parses the weaknesses of the commercial law of privacy and develops a counterpart set of prescriptions and recommendations for further evaluation and possible implementation. All of this is admirably immediate and concrete.

Her critique is linked model by model to the taxonomy; the review below condenses it in the interest of space. First, not all consumers have equal or fair opportunities to collect and market their private data. To some significant degree, and for reasons that may be beyond their control or influence, those consumers either cannot participate in the wealth-creating dimensions of privacy or, because of social, economic, or cultural vulnerabilities (Professor Elvy highlights children and tenants), are effectively coerced into participating. Second, the article repeats, with helpful added doses of commercial law context, the widespread contract law critique that consumers are presented with vague, illusory, and incomplete “choices” in respect of collection, aggregation, and use of private data. Third and fourth (to combine two categories of critique offered in the article), current market and legal understandings of privacy as commercial law treat privacy primarily as what one might call an “Article 2” asset, that is, in terms of sales of things. Overlooked in this developing commercial market is privacy as what one might call an “Article 9” asset, that is, as a source of security and securitization. The potentially predatory and discriminatory implications of that second character should be obvious to anyone with a passing familiarity with the history of consumer lending, and Professor Elvy hammers on those.

Paying for Privacy concludes with a review of the fragmented legal landscape for addressing these problems and a complementary summary of recommendations for improving the prospects of consumers while preserving valuable aspects of both PFP and PDE models. Professor Elvy nods in the direction of COPPA (the Children’s Online Privacy Protection Act) and the possibility of industry-specific or sector-specific regulation. Most of her energy is directed to clarifying the jurisdiction of the Federal Trade Commission with respect to PDE models to deal with unfair trade practices regarding privacy that do not fit into traditional or accepted models of harm addressable by the FTC. All of this has the air of the technical, but its broader substantive import should not be overlooked. Paying for Privacy serves as a helpful entrée to a newer, broader – and difficult – vision of privacy’s future.

Cite as: Michael Madison, Money For Your Life: Understanding Modern Privacy, JOTWELL (January 8, 2018) (reviewing Stacy-Ann Elvy, Paying for Privacy and the Personal Data Economy, 117 Colum. L. Rev. 1369 (2017)), https://cyber.jotwell.com/money-life-understanding-modern-privacy/.