As more and more of our daily activities and private lives shift to the digital realm, maintaining digital security has become a vital task. Private and public entities find themselves controlling vast amounts of personal information and are therefore responsible for ensuring that such information does not find its way into unauthorized hands. In some cases, there are strong incentives to maintain high standards of digital security, because security breaches are costly events. When reports of such breaches are made public, they generate reputational costs, invite regulatory scrutiny, and often require substantial out-of-pocket expenses to fix. Unfortunately, however, these internal incentives are often insufficient motivators. In such cases, the security measures taken are inadequate, outdated, and generally unacceptable. These are the instances where legal intervention is required.
There are several possible regulatory strategies for improving digital security standards. One option calls for greater transparency regarding breaches that led to personal data leakage and other negative outcomes. Another calls upon the government to set data security standards and enforce them, at least in key sectors (more on these two options and their limitations below). A third central form of legal intervention is private litigation through the court system. However, key doctrinal hurdles in the United States currently make it extremely difficult to sue for damages resulting from security breaches. In an important recent paper, Daniel Solove and Danielle Citron, two prominent privacy scholars, explain what these hurdles are, how to overcome them, and why such doctrinal changes are essential.
As the authors explain, the key to many of the challenges of data security litigation is the concept of “harm”, or lack thereof. A finding of actual, tangible harm is crucial for establishing standing, which requires demonstrating an injury that is both concrete and actual (or at least imminent). Without standing, the case is thrown out immediately without additional consideration. Additionally, tort-based claims (as opposed to some property-based claims) require a showing of harm. And when examining data security claims, courts require tangible damages to prove harm. Security-related harms are often considered intangible. Therefore, many data security-related lawsuits are either immediately blocked or ultimately fail.
The complex issue of harm, standing, and data security/privacy has recently been addressed by the U.S. Supreme Court in Clapper v. Amnesty International USA (where the Court generally rejected “hypothetical” injuries as sufficient to establish standing) and more recently in Spokeo, Inc. v. Robins. In the latter case (addressing standing under the Fair Credit Reporting Act), the Court, at least in principle, recognized that intangible harms could be considered sufficiently “concrete” if they generate a risk of real harm, and thus provide plaintiffs with standing. Furthermore, an additional case—Frank v. Gaos—is currently before the Supreme Court. While that case focuses on the practice of cy pres settlements in class actions, it appears to incidentally raise, yet again, questions related to standing, harm, and digital security/privacy—this time with regard to referrer headers.
In response to the noted challenges security litigation faces, the authors call upon courts to enter the 21st century and accept changes to the doctrines governing the establishment of harm. They convincingly show that security breaches indeed create both harm and anxiety—though of a somewhat different form. In fact, they assert, some courts have already begun to recognize harms resulting from data security breaches. For instance, courts have found that a “mere” increased risk of identity theft constitutes actual harm (even before such theft has occurred) when the data has made its way into the hands of cyber-criminals. The authors prod courts to push further in their expansion of the harm concept in the digital age. They note three major forms of injury that should be recognized in this context: (1) the risk of future injury, (2) the fact that individuals at risk must take costly (in time and money) preventive measures to protect against future injury, and (3) enhanced anxiety.
To make this innovative argument, the authors explain that data security breaches create unique concerns which justify the expansion of the concept of harm. For instance, they explain that damages (which might prove substantial) resulting from data breaches could be delayed. Therefore, recognizing harm at an earlier stage is essential. In addition, they argue that the risk of security harms might deter individuals from engaging in important and efficiency-enhancing activities such as seeking new employment opportunities and purchasing a new home. This is yet another strong argument for immediately creating a cause of action through the recognition of harm.
Judges are usually cautious about creating new rules, especially in common law systems. Yet the authors explain that in other legal contexts, such as medical malpractice, similar forms of intangible harms have already been recognized. They refer to cases based on actions that increased a chance of illness or decreased the chance of recovery. These have been recognized as actual harms—instances somewhat analogous to personal data leakage and the harms that might follow.
Yet broadening the notion of data “harm” has downsides: it invites plaintiffs to “cheat” and manipulate, because intangible harms are easier to fake or fabricate and because the definition of intangible harm might be too open-ended. In addition, broadening the notion of harm might generate confusion in the courts. To mitigate some of these concerns, the authors introduce several criteria to assist courts in establishing and assessing harm in this unique context. These include the likelihood and magnitude of future injury, as well as the mitigating and preventive measures those holding the data have taken.
Finally, the authors confront some broader policy questions pertaining to their innovative recommendations. Litigation, of course, is not the only way to try and overcome the problems of insecure digital systems. It probably isn’t even the best way to do so. I have argued elsewhere that courts are often an inadequate venue for promoting cybersecurity objectives. Litigation is costly to all parties. It also might stifle innovation and end up merely enriching the parties’ lawyers. In addition, judges usually lack the proper expertise to decide on these issues. Furthermore, in this context, ex post court rulings are an insufficient motivator to ensure that proper security measures will be set in place ex ante, given the issue’s complexity and the difficulties of proving causation (i.e. the linkage between the firm’s actions or omissions and the damages that follow at a later time).
The authors would probably agree with these assertions and indeed acknowledge most of them in their discussion. Nonetheless, they argue that other regulatory alternatives such as breach notification requirements and regulatory enforcement suffer from flaws as well. This is, no doubt, true. Breach notifications might generate insufficient incentives for data collectors to minimize future breaches, as users might be unable or unwilling to voice or act on their disappointment with the flawed security measures adopted. And data security regulatory enforcement might suffer from the usual shortcomings of governmental enforcement—it being too minimal, not up to date and at times subject to capture. Litigation, the authors argue, could fill a crucial void when other options fail. They state that “data-breach harms should not be singled out” as problematic relative to other kinds of legal harms. Therefore, courts should have the option to find that harm has been caused and thus additional legal actions must be taken when they have good reasons to do so.
Using doctrinal barriers (such as refraining from acknowledging new forms of harm) to block specific legal remedies is an indirect and somewhat awkward strategy. Yet it is also an acceptable measure for achieving overall policy goals. The authors convincingly argue that judges should have the power to decide a case on its merits, yet by doing so the authors inject uncertainty into the already risky business of data security. If Solove and Citron’s proposals are ultimately adopted, let us hope that judges use this power responsibly: they should look beyond the hardship of those victimized by data breaches and consider the overall interests of the digital ecosystem before delivering judgment in digital security cases.
Kristen E. Eichensehr, Digital Switzerlands, 167 U. Pa. L. Rev. ___ (forthcoming 2019), available at SSRN
Battles over the public policy obligations and implications of late 20th-century and early 21st-century technologies have long been fought via metaphor as well as via megabyte and microeconomics. Today, modern information technology platforms are characterized brightly as “generative” and darkly as “information feudalism.” Public policy might be informed by treating some network providers as “information fiduciaries.” Or, borrowing the phrase that prompts Kristen Eichensehr’s thought-provoking paper, tech companies might be characterized as metaphorical “digital Switzerlands.” They might be neutral institutions in their dealings with national governments.
In Professor Eichensehr’s telling, the idea of a corporate digital Switzerland resisting government aggression—refusing to cooperate with government requests for private user information, for example—comes from a recent suggestion to that effect by Brad Smith, president of Microsoft. As she notes briefly, it’s an old idea, not a new one, even if it has migrated from corporation-vs-corporation conflict to state-vs-corporation power dynamics. Ken Auletta’s history of Google reported that back in 2005, Google CEO Eric Schmidt characterized Google’s search engine and advertising platform as a neutral “digital Switzerland” in its treatment of content companies and advertisers. Schmidt was defending the idea that Google had no agenda vis-à-vis incumbent entertainment industry players. Google’s technology produced accurate data about consumer viewing practices. If that data led advertisers to pay less for their ad buys, that wasn’t Google’s intent—or its responsibility. Schmidt’s listener, the then-president of Viacom, erupted in protest: “You’re fucking with the magic!”
Indeed. The reader should take many lessons from Eichensehr’s article. Foremost among them is this: Wandering into the digital Switzerlands of contemporary technology, whether because Microsoft (in its obvious self-interest) says that’s how we should do things or because that’s an objectively useful place to begin, is fucking with the magic—that is, the mythos that guides how scholars and policymakers think about technology purveyors and their civic roles and responsibilities.
Metaphors, it turns out, are the least of our concerns. The point that Smith and Microsoft made with the “digital Switzerland” claim is on its face a primitive and laughable appeal to the idea that technology and technology companies can and should be apolitical and neutral. Eichensehr, appropriately, barely pauses to consider the metaphorical mechanics at work.
Instead, she takes as given that technologies have politics, and that politics have technologies. Reading the paper, I was reminded of Fred Turner’s research. Silicon Valley firms and their allies explicitly borrowed and built on 1960s ideologies of anti-government communalism, so much so that modern information technology came to be seen by its producers, and sold to consumers, as an instrument of personal liberation and freedom. Whether a Mac or a PC, the computer, and later the network, was and is meant to empower individuals to create social order independent of traditional, formal governments, and if necessary in opposition to them. Eichensehr doesn’t dig quite to that level. She skips ahead, helpfully escaping the metaphor wars by relying on “digital Switzerlands” as a potentially useful diagnostic. Her argument is consistent with Turner’s. Ideas have power. Maybe Smith is on to something.
Eichensehr makes a host of interesting observations and asks some critical if provocative questions. The article starts by laying out a basic toolkit. The claim that technology companies might be “digital Switzerlands” (she switches Smith’s singular to the more descriptively apt plural) implicates the foundational idea that a company might be treated as a sovereign, as if it were a country, and the next-level idea that as a sovereign state, a technology company might be fairly characterized as “neutral” under international law. The characterization works in some respects and not in others, but as a starting point it is plausible enough that Eichensehr moves easily to her next step, which is describing and analyzing how that neutral status implicates technology companies both in relation to their individual users and in relation to governments. Microsoft or Facebook might resist government efforts to secure corporate cooperation in investigations that implicate their users. Or they might cooperate. But companies have always had to choose whether to fight or fold in response to government requests for information, or more. Today, the global reach of the largest tech companies, and the fact that they succeed as businesses because of their attractiveness to users and advertisers, distinguish them from powerful corporate behemoths of earlier eras. Those were powerful and durable for decades as resource extractors, not as modern goodwill generators.
In short, Eichensehr argues that the digital Switzerlands claim has merit. Sometimes, modern technology companies do exercise some of the powers of sovereignty that we traditionally associate with governments. They develop and deploy large-scale trust-based governance infrastructures through their technology platforms. They exercise substantial powers to structure behavior by users. Users discipline the companies via governance to a limited degree, primarily exit. Taken together, those attributes give heft and credibility to the proposition that operating as “digital Switzerlands” may enable technology companies effectively to shield their users from formal government regulation—collecting private information users store via the platform, for example. As Eichensehr notes, that power is limited; it comes at a cost, and with risks. Switzerland itself was not only neutral but also passive to the point of complicity with Nazi Germany during World War II. With great power—pervasively armed neutrality, in the case of Switzerland; surveillance-based surreptitious data aggregation, in the case of Facebook—comes great responsibility. That responsibility is not always exercised appropriately.
But Eichensehr is less interested in a detailed normative exploration of Facebook’s data collection practices than in using the insight about tech companies as states to build a useful framework for understanding their practices, with governments anchoring one point of the framework, tech companies anchoring a second, and users anchoring a third. She uses that framework to predict outcomes in conflicts where companies might cooperate or resist government efforts to regulate or police the companies’ users. “Stated generally, the Digital Switzerlands concept suggests that companies should fight against or resist governments when the companies perceive themselves to be and can credibly argue that they are protecting the interests of users against governments….” (P. 39.) She tests that hypothesis against some relatively easy, paradigm cases (corporate compliance is more likely when a democratic government is attempting to apply its domestic legislation to users in that jurisdiction), and reviews limitations (the government may be undemocratic; the company may misapprehend its users’ interests; governments may be applying the law extraterritorially; the company may not be aware of government action).
The core case and the exceptions lead Eichensehr to evaluate the normative implications of the framework. Here, her observations are provocative rather than definitive, because she’s challenging some cyber-orthodoxy and some fundamentals of democratic theory. Recall what happens to the magic.
First and most important, Eichensehr knocks the power and freedom of the individual off their shared pedestal as the normative standard for evaluating both government (mis)conduct and corporate practice. That view has to be handled delicately as a philosophical matter, because individual agency is one of the central pillars of democratic theory, but it is refreshingly pragmatic. She cites Madison in support of institutional pluralism. Madison was writing about the dual roles of federal and state governments; in Eichensehr’s telling, treating tech companies as states, “having two powerful regulators, rather than only one, can benefit individuals’ freedom, liberty, and security because sometimes it takes a powerful regulator to challenge and check another powerful regulator.” (P. 49.) The individual isn’t all-powerful in practice. Bigger sometimes is better.
Second, Eichensehr repositions questions of legitimacy and accountability in governance institutions, pushing past political science concepts (“exit,” “voice”), past early cyber-constitutionalism (which described tech companies as merely commercial “merchant sovereigns”), and—implicitly—past easy reliance on critiques of neoliberalism (private appropriation of public functions, embodied in state-sanctioned invocations of contract and property law). She argues that contemporary corporate “citizenship” entails not only how the “state” disciplines those who are subject to its power, but also how the “state” advocates on their behalf. In Digital Switzerlands, she sees novel blends of public functions (defending user interests in privacy against state invasions), private functions (services traded in the market, data collection), and individual and collective identity, woven together at least as tightly as they were in 20th-century company towns, and arguably more so. But companies’ formally private status means that mechanisms of accountability, such as transparency and modes of due process, often can’t be imposed from without. They must be adopted voluntarily, as Google has done with its transparency reports and treatment of Right to Be Forgotten requests.
It’s possible to read Digital Switzerlands as a not-so-subtle defense of the corporate status quo: that corporate statehood is not the world that we might want but is close to the best of the world that we might have. Break up big tech in the name of old-school, consumer-protective antitrust at our peril, one might infer, and instead find ways to require, expect, or just hope that big tech will adopt a better demeanor in a traditional public-oriented sense.
I think that this conservative reading is a mistake. Instead, it’s worth taking the article quite seriously on its own terms, as a thoughtful effort to take apart well-established patterns of thinking about cyberlaw and policy and to reassemble them in a forward-looking and potentially sustainable way. The tech sector may have been naïve and selfish in telling the digital Switzerland story. Digital machines are no more tools of personal liberation and freedom supplied by neutral designers—nor less—than the assembly lines of Henry Ford were sources of individual opportunity provided by benign automobile makers. Yet to some scholars, the dehumanizing factories of the early 20th century produced relatively wealthy communities and class mobility; to its defenders, Facebook gives us identity and community. Eichensehr has taken the first steps toward what may become a larger realignment of arguments about statehood and governance. That project is well-worth considering, even if—abracadabra—it may take us in unexpected directions.
Cite as: Michael Madison, Fucking With the Magic (January 22, 2019) (reviewing Kristen E. Eichensehr, Digital Switzerlands, 167 U. Pa. L. Rev. ___ (forthcoming 2019), available at SSRN), https://cyber.jotwell.com/fucking-with-the-magic/
Shaanan Cohney, David Hoffman, Jeremy Sklaroff, & David Wishnick, Coin-Operated Capitalism, __ Columbia L. Rev. __ (forthcoming), available at SSRN
Oldthinkers unbellyfeel blockchain. We are told that blockchains, cryptocurrencies, and smart contracts are about to revolutionize everything. They remove fallible humans from every step where a transaction could go wrong, replacing them with the crystalline perfection of software. Result: clarity, certainty, and complete freedom from censors and tyrants.
And yet we still don’t get it. Some oldthinkers think that not all regulation is tyranny, while others point to the environmentally disastrous costs of blockchain strip mining. And then there are those of us who think that the entire premise of blockchain boosterism is mistaken, because the new “smart” contracts are not so different from the old “dumb” contracts. Coin-Operated Capitalism, by a team of four authors from the University of Pennsylvania, is the best recent entry in this vein. It is a playful, precise, and damning look at how smart contracts actually function in the real world.
This is one of very few law-and-computer science articles that takes both sides of the “and” seriously, and is one of the best examples I have ever seen of what this field can be. It is a law-review article about an empirical study of contracts and software. To quote the star footnote’s description of the authors’ combined expertise, “Cohney is a fifth-year doctoral student in computer and information science at the University of Pennsylvania, where Hoffman is a Professor of Law, Sklaroff received a JD/MBA in 2018, and Wishnick is a fellow in the Center for Technology, Innovation and Competition.” (Jeremy Sklaroff, the (alphabetically) third author, wrote an unusually good law-review comment on smart contracts last year.) Another nine research assistants helped, presumably with the extensive white-paper reading and coding. It takes a village to write a truly interdisciplinary article.
Coin-Operated Capitalism’s target is the initial coin offering (ICO). As the name suggests, an ICO is a blockchain analogue to a corporate initial public offering (IPO) of equity shares. Instead of receiving stock in a new business, an ICO investor receives tokens that give her a stake in a new smart contract. The token typically gives the holder some transactional rights (the authors’ example is to receive sodas from vending machines) and some control rights (e.g. to vote on investment opportunities, or to approve modifications to some of the terms of the ICO contract), both of which are coded into the smart contract. The promoters use the funds thereby raised for the associated venture (e.g., building and filling the vending machines), for the development and maintenance of the smart contract itself, and sometimes for further investments as directed by the new class of token-holders.
Anyone who has ever heard of securities law should be hearing alarm bells at this point. A typical ICO walks and quacks like “an investment of money in a common enterprise with a reasonable expectation of profits to be derived from the entrepreneurial or managerial efforts of others,” which can trigger obligations to register with the Securities and Exchange Commission, disclose investment risks, and to screen investors in various ways. Indeed, some ICOs are transparent attempts to route around securities regulation, while others are outright scams, dressing up old cons with new buzzwords. But there is an interesting and important class of what we might call “legitimate” ICOs. They have business models that don’t fit well with a traditional corporation (e.g. decentralized storage as in Filecoin) and they make a good-faith effort to use the funds for the benefit of and as directed by token-holding participants.
ICOs (both sketchy and legitimate) typically come with a “white paper”—it would be a prospectus in a securities offering, but we’re not allowed to call it that—describing how the new coin will work and why investors should be confident enough in it to participate in the ICO. In the securities context, regulators and class-action lawyers have made a blood sport out of comparing a company’s securities disclosures with its actual conduct. The authors of Coin-Operated Capitalism brilliantly do something similar with ICO white papers. They compare the promises made in the offering documents of the fifty top-grossing ICOs of 2017 with those ICOs’ own smart contracts. An ICO, after all, is an investment specifically in the smart contract.
The results of the survey are sobering. Dozens of ICO smart contracts failed basic investor-protection checks:
- Some allowed the promoters to arbitrarily dilute the shares of ICO investors by issuing more tokens in the future (14 out of 50).
- Some allowed the promoters to immediately cash out their positions following the ICO with no vesting schedule (37 out of 50).
- Some allowed the promoters to modify the smart contract unilaterally—the equivalent of a corporation’s founder revising its charter (39 out of 50).
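The three failures above can be made concrete with a toy sketch. This is my own hypothetical illustration, not code from the article's dataset (real ICO contracts are typically written in Solidity); the class and method names are invented for exposition. Each comment marks where a real contract would need a constraint that many of the surveyed contracts lacked:

```python
class Token:
    """Toy token ledger illustrating the three investor-protection gaps."""

    def __init__(self, promoter, initial_supply):
        self.promoter = promoter
        self.balances = {promoter: initial_supply}
        self.params = {"transfer_fee": 0}

    def mint(self, caller, amount):
        # Gap 1: no supply cap, so the promoter can dilute investors
        # at will by issuing new tokens to itself.
        if caller != self.promoter:
            raise PermissionError("only promoter may mint")
        self.balances[caller] = self.balances.get(caller, 0) + amount

    def transfer(self, sender, recipient, amount):
        # Gap 2: no vesting schedule -- the promoter's own tokens are
        # transferable (i.e., cashable) the moment the ICO closes.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def set_param(self, caller, key, value):
        # Gap 3: unilateral modification -- a single address can rewrite
        # the contract's terms with no token-holder vote.
        if caller != self.promoter:
            raise PermissionError("only promoter may modify")
        self.params[key] = value
```

A code audit of the kind the authors performed asks, line by line, whether functions like `mint` and `set_param` are constrained the way the white paper promises, rather than gated only on the promoter's say-so.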
In many cases, the smart contract code directly contradicted promises made in the supporting white papers and other ICO documents. In other cases, the ICO promoters either made no promises about these features in their white papers, or explicitly disclosed them. These cases, while less alarming, are in some ways even more puzzling. The blockchain triumphalism story is a story of code displacing law. An investor can rely on whatever the smart contract says, and emphatically should not rely on anything else. But these are smart contracts that let the promoters take the money and run: who in their right mind would rely on one?
One possibility is that the ICO market is full of dumb-as-rocks money: investors hear blockchain blockchain blockchain and lose all capacity for rational thought. If so, any ICO promoter who doesn’t take the money and run is a holy fool for blockchain.
It could also be that ICO investors are smart but out of their depth. They know how to read a legal document closely, but don’t yet understand that ICO due diligence requires a line-by-line code audit. With time, they may learn how to translate their expertise in corporate governance to smart-contract governance, but they’re not there yet. Coin-Operated Capitalism finds some evidence that this understanding is seeping into the ICO investment community; another way to check would be to run a similar study on more recent ICOs.
Most interesting of all, maybe ICO investors correctly believe that they don’t need to rely on the smart contracts. Even if a promoter has the technical capacity to dilute investors into a trivial stake or modify their rights out of existence, investors are rationally unafraid it would actually happen. Perhaps they expect to win the fraud lawsuit and collect on their judgment if it comes to that. Perhaps they know that the promoters are holy fools who will preach the Gospel of Satoshi even in the face of temptation. Perhaps they see that the projects will come crashing down if the promoters start to slink away and that the promoters themselves are better off staying the course. Perhaps they know where the promoters live and also know some burly men with guns. Or perhaps they think that the shame of forever being that blockchain guy who took the money and ran is enough to deter insider self-dealing.
But, as the authors explain, such arguments “are dangerous for ICO advocates. They show that advocates have already abandoned the high ground of ‘lex cryptographica.’” All of these safeguards are off the blockchain. It’s not that the smart contract protects investors. Instead, the legal system protects them, or the business community protects them, or business norms protect them. These are all things that are part of the glue holding modern capitalism together. The smart contract is just a starting point, an anchor that gives an important but incomplete description of people’s rights and responsibilities. The real work happens in the real world, not in the computations carried out by the smart contract. And if that’s right, then what was the point of the blockchain?
It is hard to dispute the authors’ conclusion that “no one reads smart contracts.” It is also hard to see these ICOs as anything other than open-and-shut fraud. It may not be securities fraud in every case, but the code itself shows that these contracts do not live up to the promises made about them. For all the rhetoric of tyranny and censorship, maybe regulators understand a few things about contracts, money, and human nature that smart-contract promoters and investors do not.
Cite as: James Grimmelmann, Extraordinary Popular Delusions and the Madness of ICO Crowdfunding (November 26, 2018) (reviewing Shaanan Cohney, David Hoffman, Jeremy Sklaroff, & David Wishnick, Coin-Operated Capitalism, __ Columbia L. Rev. __ (forthcoming), available at SSRN), https://cyber.jotwell.com/extraordinary-popular-delusions-and-the-madness-of-ico-crowdfunding/
There has been growing academic interest in the topic of decentralised, distributed open ledger technology—better known as the blockchain (see my last Jot). While the literature has been substantial, the copyright implications of the blockchain have not received as much coverage from the research community, perhaps because the use cases have not been as prevalent in the media. Taking the usual definition of a blockchain as an immutable distributed database, it is easy to imagine some potential uses of the technology for copyright, and for the creative industries as a whole. Blockchain technology has been suggested for management of copyright works through registration, enforcement, and licensing, and also as a business model allowing micropayments and use tracking.
Blockchain and Smart Contracts: The Missing Link in Copyright Licensing? by three academics at the Institute for Information Law at the University of Amsterdam, tackles this subject in excellent fashion. The article has the objective of introducing legal audiences to many of the technologies associated with the blockchain. It goes into more specific treatment of various features, such as distributed ledger technology (DLT), digital tokens, and smart contracts, and the potential uses of these for copyright licensing specifically. The article is divided into three parts: an introduction to the technology, an analysis of its potential use for copyright licensing, and a look at possible problems.
The article explains that DLTs are consensus mechanisms which “ensure that new entries can only be added to this distributed database if they are consistent with earlier records.” (P. 4.) Other technical features include the ability to time-stamp transactions, and the potential to verify ownership of a work through the use of “wallets” and other cryptographic tools. This type of technology can be useful for various copyright test cases, such as allocating rights, registering ownership, and keeping track of expiration. Because you could have an immutable and distributed record of ownership and registration, it would be possible for DLTs to become a useful tool for the management of copyright works by collecting agencies.
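The consistency requirement the authors quote (new entries admitted only if consistent with earlier records) can be illustrated with a toy hash-chained ledger. This sketch is mine, not the article's, and it deliberately omits the distributed-consensus layer; it shows only why tampering with an earlier ownership record breaks every later link:

```python
import hashlib
import json
import time

def _hash(entry):
    # Deterministic hash of an entry's contents.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Toy append-only ledger: each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = _hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {"record": record, "prev": prev, "timestamp": time.time()}
        self.entries.append(entry)
        return entry

    def verify(self):
        # An entry is consistent only if its 'prev' field matches the
        # hash of the entry before it; altering any earlier record
        # invalidates the entire chain from that point on.
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = _hash(entry)
        return True
```

In the copyright setting, the appended records would be ownership claims or assignments; the time-stamping and tamper-evidence are what make the registry idea attractive to collecting agencies.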
Then the article explains the concept of tokenization and the use of digital tokens. Any sort of data can be converted into a digital token, and these can express all sorts of rights. For example, tokenizing rights management information (RMI) could be useful for the expression and management of copyright works through licensing. Further action can be taken through a smart contract, which is software that interacts with the blockchain to execute if-then statements and can also be used for running more complex commands and sub-routines expressing legal concepts. According to the authors, a large number of “dumb transactions” could be taken over by smart contracts, allowing the identification and distribution of royalties, and the payment of such. While the deployment of large-scale smart contract management mechanisms would be very complex, the authors envisage a system by which owners retain control over their own works, and use smart contracts to allocate and distribute rights directly to users by means of these automated transactions.
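One of the "dumb transactions" the authors have in mind, splitting a licensing payment among rightsholders according to pre-agreed shares, reduces to a simple if-then rule. The function below is my own illustrative sketch of that rule, not the authors' design:

```python
def distribute_royalties(payment, splits):
    """Split a payment according to pre-agreed fractional shares.

    splits: mapping of rightsholder -> share (shares must sum to 1).
    Returns the amount owed to each rightsholder.
    """
    if abs(sum(splits.values()) - 1.0) > 1e-9:
        raise ValueError("shares must sum to 1")
    return {holder: round(payment * share, 2) for holder, share in splits.items()}
```

For example, `distribute_royalties(100.0, {"composer": 0.5, "lyricist": 0.25, "publisher": 0.25})` pays the composer 50.0 and the other two 25.0 each. A smart contract would execute this rule automatically on each incoming payment; the hard part, as the article goes on to show, is encoding the surrounding legal conditions, not the arithmetic.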
The article goes into detail on other potential uses, particularly the use of blockchain in registration practices, the potential for solving the orphan works problem, fair remuneration, and allocating rights through RMIs. This is done with both knowledge of the subject and rigour in the analysis of potential pitfalls.
The article’s best section is its analysis of the many potential issues that may arise in using DLT and smart contracts in copyright. The authors astutely identify the complex nature of copyright norms, and comment that the many variations from one jurisdiction to another may prove to be too complex for a medium that is looking for ease of execution. The authors comment:
In the case of blockchain it is hard, at least as of 2018, to detect high levels of enthusiasm that would lead, in the short term, to the legal recognition/protection of copyright-replacing blockchain-related technological innovations. (P. 22.)
This matches my own observations about this subject. I have found that while the hype is considerable, there are just too many concerns about the potential uses of blockchain technologies in this area. There are valid concerns about the scalability of the technology, but also about the need to deploy complex technological solutions that could equally be implemented with existing technology. The blockchain, we are told, can allow authors to publish their work with an immutable record of initial ownership, with automated remuneration awarded. But reality is difficult to square with this vision. For starters, it may be difficult, if not impossible, to translate existing rights, exceptions, and limitations into a form that can be executed in a smart contract; the authors explain the complexity of international copyright law, with mismatched rights and responsibilities across jurisdictions. Similarly, blockchain systems are expensive, and if the market is currently working well with offline and online systems, then it is difficult to see how a cumbersome, slow, and wasteful solution would be adopted. The authors close the discussion by noting that there is a familiar feeling to the blockchain debate: DRM (digital rights management) was presented a decade or more ago as the enforcement solution that would end copyright infringement. Needless to say, that was not the case.
The question at the heart of any blockchain implementation always remains the same: what is the problem that you are trying to solve, and is the blockchain the appropriate technology to solve it?
Cite as: Andres Guadamuz, Copyright, Smart Contracts, and the Blockchain (October 29, 2018) (reviewing Balázs Bodó, Daniel Gervais, & João Pedro Quintais, Blockchain and Smart Contracts: The Missing Link in Copyright Licensing?, Int'l. J. of L. & Info. Tech. (September 2018)), https://cyber.jotwell.com/copyright-smart-contracts-and-the-blockchain/
Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. __ (forthcoming 2019), available at SSRN
It’s no secret that the United States and much of the rest of the world are struggling with information and security. The flow of headlines about data breaches, election interference, and misuse of Facebook data show different facets of the problem. Information security professionals often speak in terms of the “CIA Triad”: confidentiality, integrity, and availability. Many recent cybersecurity incidents involve problems of confidentiality, like intellectual property theft or theft of personally identifiable information, or of availability, like distributed denial of service attacks. Many fewer incidents (so far) involve integrity problems—instances in which there is unauthorized alteration of data. One significant example is the Stuxnet attack on Iranian nuclear centrifuges. The attack made some centrifuges spin out of control, but it also involved an integrity problem: the malware reported to the Iranian operators that all was functioning normally, even when it was not. The attack on the integrity of the monitoring systems caused paranoia and a loss of trust in the entire system. That loss of trust is characteristic of integrity attacks and a large part of what makes them so pernicious.
Bobby Chesney and Danielle Citron have posted a masterful foundational piece on a new species of integrity problem that has the potential to take such problems mainstream and, in the process, do great damage to trust in reality itself. In Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, Chesney and Citron explain a range of possible uses for “deep fakes,” a term that originated from imposing celebrities’ faces into porn videos, but that they use to describe “the full range of hyper-realistic digital falsification of images, video, and audio.” (P. 4.)
After explaining the technology that enables the creation of deep fakes, Chesney and Citron spin out a parade of (plausible) horribles resulting from deep fakes. Individual harms could include exploitation and sabotage, such as a fake compromising video of a top draft pick just before a draft. (P. 19.) The equally, if not more, worrisome societal harms from deep fakes include manipulating elections through timely release of damaging videos of a candidate, eroding trust in institutions though compromising videos of their leaders, exacerbating social divisions by releasing videos of police using racial slurs, spurring a public panic with recordings of government officials discussing non-existent disease outbreaks, and jeopardizing national security through videos of U.S. troops perpetrating atrocities. (Pp. 22-27.)
So what can be done? The short answer appears to be not much. The authors conclude that technology for detecting deep fakes won’t save us, or at least won’t save us fast enough. Instead, they “predict,” but don’t necessarily endorse, “the development of a profitable new service: immutable life logs or authentication trails that make it possible for the victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted.” (P. 54.) This possible “fix” to the problem of deep fakes bears more than a passing resemblance to the idea of “going clear” spun out in Dave Eggers’ book The Circle. (Pp. 239-42.) In the novel, politicians begin wearing 24-hour electronic monitoring and streaming devices to build the public’s trust—and then others are pressured to do the same because, as Eggers puts it, “If you aren’t transparent, what are you hiding?” (P. 241.) When the “cure” for our problems comes from dystopian fiction, one has to wonder whether it’s worse than the disease. Moreover, companies offering total life logs would themselves become ripe targets for hacking (including attacks on confidentiality and integrity) given the tremendous value of the totalizing information they would store.
If tech isn’t the answer, what about law? Chesney and Citron are not optimistic about most legal remedies either. They are pessimistic about the ability of federal agencies, like the Federal Trade Commission or Federal Communications Commission, to regulate our way out of the problem. They do identify ways that criminal and civil remedies may be of some help. Victims could sue deep fake creators for torts like defamation and intentional infliction of emotional distress, and deep fake creators might be criminally prosecuted for things like cyberstalking (18 U.S.C. § 2261A) or impersonation crimes under state law. But, as the authors note, legal redress even under such statutes may be hampered by, for example, the inability to identify deep fake creators, or to gain jurisdiction over them. These statutes also do little to redress the societal, as opposed to individualized, harms from deep fakes.
For deep fakes perpetrated by foreign states or other hostile actors, Chesney and Citron are somewhat more optimistic, highlighting the possibility of military and covert actions, for example, to degrade or destroy the capacity of such actors to produce deep fakes. (Pp. 49-50.) They also suggest a way to ensure that economic sanctions are available for “attempts by foreign entities to inject false information into America’s political dialogue,” including attempts using deep fakes. (P. 53.) These tactics might have some benefit in the short term, but sanctions have not yet stemmed efforts at foreign interference in elections. And efforts to disrupt Islamic State propaganda have shown that attempts at digital disruption of adversaries’ capacities may often prompt a long-running battle of digital whack-a-mole.
One of the paper’s most interesting points is its discussion of another tactic that one might think would help address the deep fake problem, namely, public education. Public education is often understood to help inoculate against cybersecurity problems. For example, teaching people to use complex passwords and not to click on suspicious email attachments bolsters cybersecurity. But Chesney and Citron point out a perverse consequence of educating the public about deep fakes. They call it the “liar’s dividend”: “a skeptical public will be primed to doubt the authenticity of real audio and video evidence,” so those caught engaging in bad acts in authentic audio and video recordings will exploit this skepticism to “try to escape accountability for their actions by denouncing authentic video and audio as deep fakes.” (P. 28.)
Although the paper is mostly profoundly disturbing, Chesney and Citron try to end on a positive note by focusing on the content screening and removal policies of platforms like Facebook. They argue that the companies’ terms of service agreements “will be primary battlegrounds in the fight to minimize the harms that deep fakes may cause,” (P. 56) and urge the platforms to practice “technological due process.” (P. 57.) Facebook, they note, “has stated that it will begin tracking fake videos.” (P. 58.) The ending note of optimism is welcome, but rather underexplored in the current draft, leaving readers hoping for more details on what, when, and how much the platforms might be able and willing to do to prevent the many problems the authors highlight. It also raises fundamental questions about the role of private companies in playing at least arguably public functions. Why should this be the companies’ problem to fix? And if the answer is because they’re the only ones who can, then more basically, how did we come to the point where that is the case, and is that an acceptable place to be?
In writing the first extended legal treatment of deep fakes, Chesney and Citron understandably don’t purport to solve every problem they identify. But in a world plagued by failures of imagination that leave the United States reeling from unexpected attacks—Russian election interference being the most salient—there is tremendous benefit to thoughtful diagnosis of the problems deep fakes will cause. Deep fakes are, as Chesney and Citron’s title suggests, a “looming challenge” in search of solutions.
Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN
Scholarship on whether and how to regulate algorithmic decision-making has been proliferating. It addresses how to prevent, or at least mitigate, error, bias and discrimination, and unfairness in algorithmic decisions with significant impacts on individuals. In the United States, this conversation largely takes place in a policy vacuum. There is no federal agency for algorithms. There is no algorithmic due process—no notice and opportunity to be heard—not for government decisions, nor for private companies’. There are—as of yet—no required algorithmic impact assessments (though there are some transparency requirements for government use). All we have is a tentative piece of proposed legislation, the FUTURE of AI Act, that would—gasp!—establish a committee to write a report to the Secretary of Commerce.
Europe, however, is a different story. The General Data Protection Regulation (GDPR) went into direct effect on EU Member States on May 25, 2018. It contains a hotly debated provision, Article 22, that may impose a version of due process on algorithmic decisions that have significant effects on individuals. For those looking to understand how the GDPR impacts algorithms, I recommend Lilian Edwards’ and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For. Edwards and Veale have written the near-comprehensive guide to how EU data protection law might affect algorithmic quality and accountability, beyond individualized due process. For U.S. scholars writing in this area, this article is a must-read.
Discussions of algorithmic accountability in the GDPR have, apart from this piece, largely been limited to the debate over whether or not there is an individual “right to an explanation” of an algorithmic decision. Article 22 of the GDPR places restrictions on companies that employ algorithms without human intervention to make decisions with significant effects on individuals. Companies can deploy such algorithmic decision-making only under certain circumstances (when necessary for contract or subject to explicit consent), and even then only if they adopt “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.” These “suitable measures” include “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” They also arguably include a right to obtain an explanation of a particular individualized decision. The debate over this right to an explanation centers on the fact that it appears in a Recital (which, in brief, serves as interpretative guidance), and not in the GDPR’s actual text. The latest interpretative document on the GDPR appears to agree with scholars who argue that a right to an explanation does exist, because it is necessary for individuals to contest algorithmic decisions. This suggests that the right to explanation will be oriented towards individuals, and making algorithmic decisions understandable by (or legible to) an individual person.
Edwards and Veale move beyond all of this. They do engage with the debate about the right to an explanation, pointing out both potential loopholes and the limitations of individualized transparency. They helpfully add to the conversation about the kinds of explanations that could be provided: (A) model-centric explanations that disclose, for example, the family of model, input data, performance metrics, and how the model was tested; and (B) subject-centric explanations that disclose, for example, not just counterfactuals (what would I have to do differently to change the decision?) but the characteristics of others similarly classified, and the confidence the system has in a particular individual outcome. But they worry that an individualized right to an explanation would in practice prove to be a “transparency fallacy”—giving a false sense of individual control over complex and far-reaching systems. They valuably add that the GDPR contains a far broader toolkit for getting at many of the potential problems with algorithmic decision-making. Edwards and Veale observe that the tools of omnibus data protection law—which the U.S. lacks—are tools that can also work in practice to govern algorithms.
First, they point out that the GDPR consists of far more than Article 22 and related transparency rights. This is an important point to make to a U.S. audience, which might otherwise come away from the right to explanation debate believing that in the absence of a right to an explanation, algorithmic decision-making won’t be governed by the GDPR. That conclusion would be wrong. Edwards and Veale point out that the GDPR contains other individual rights—such as the right to erasure, and the right to data portability—that will affect data quality and allow individuals to contest their inclusion in profiling systems, including ones that give rise to algorithmic decision-making. (I was surprised, given concerns over algorithmic error, that they did not also discuss the GDPR’s related right to rectification—the right to correct data held on an individual—which has been included in calls for algorithmic due process by U.S. scholars such as Citron & Pasquale and Crawford & Schultz.) These individual rights potentially give individuals control over their data, and provide transparency into profiling systems beyond an overview of how a particular decision was reached. But there remains the question of whether individuals will invoke these rights.
Edwards and Veale identify that the GDPR goes beyond individual rights to “provide a societal framework for better privacy practices and design.” For example, the GDPR requires something like privacy by design (data protection by design and by default), requiring companies to build data protection principles, such as data minimization and purpose specification, into developing technologies. For high-risk processing, including algorithmic decision-making, the GDPR requires companies to perform (non-public) impact assessments. And the GDPR includes a system for formal co-regulation, nudging companies towards codes of conduct and certification mechanisms. All of these provisions will potentially influence design and best practices in algorithmic decision-making. Edwards and Veale argue that these provisions—aimed at building better systems at the onset, and providing ongoing oversight over systems once deployed—are better suited to governing algorithms than a system of individual rights.
Edwards and Veale are not GDPR apologists. They recognize significant limitations in the law, including the lack of a true class-action mechanism, even where the GDPR contemplates third-party actions by NGOs. They acknowledge that data-protection authorities are often woefully underfunded and understaffed. And, like others, they point out mismatches between the GDPR’s language and current technological and social practices—asking, for example, whether behavioral advertising constitutes an algorithmic “decision.” But they helpfully move the conversation about algorithmic accountability away from the “right to an explanation” and towards the broader regulatory toolkit of the GDPR.
Where the piece falters most is in its almost offhand dismissal of individualized transparency. Some form of transparency will be necessary for the regulatory system that they describe to work—a complex co-regulatory system involving impact assessments, codes of conduct, and self-certification. Without public oversight of some kind, that system may be subject to capture, or at least devoid of important feedback from both civil society and public experts. And, as the ongoing conversation about justifiability shows, both the legitimizing and the dignitary value of individualized decisional transparency cannot be dismissed so lightly.
I wish this piece had a different title. In dismissing the value of an individual right to explanation, the title obscures the valuable work Edwards and Veale do in charting other regulatory approaches in the GDPR. However the right to an explanation debate plays out, they show that unlike in the United States, algorithmic decision-making is in the regulatory crosshairs in the EU.
Cite as: Margot Kaminski, The GDPR’s Version of Algorithmic Accountability (August 16, 2018) (reviewing Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For, 16 Duke L. & Tech. Rev. 18 (2017), available at SSRN), https://cyber.jotwell.com/the-gdprs-version-of-algorithmic-accountability/
Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).
We have a problem with poverty, which we have converted into a problem with poor people. Policymakers tout technology as a way to make social programs more efficient, but they end up encoding the social problems they were designed to solve, thus entrenching poverty and over-policing of the poor. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks uses three core examples—welfare reform software in Indiana, homelessness service unification in Los Angeles, and child abuse prediction in Pennsylvania—and shows that while they vary in how screwed up they are (Indiana terribly, Los Angeles a bit, and Pennsylvania very hard to tell), they all rely on assumptions that leave poor people more exposed to coercive state control. That state control both results from and contributes to the assumption that poor people’s problems are their own fault. The book is a compelling read and a distressing work, mainly because I have little faith that the problems Eubanks so persuasively identifies can be corrected.
Across the country, poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result. Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect their most personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.
As Eubanks points out, the poor are test subjects because they offer “‘low rights environments’ where there are few expectations of political accountability and transparency.” Even those who do not care about poverty should be paying attention, however, because “systems first designed for the poor will eventually be used on everyone.”
Eubanks’ recommendation, even as more punitive measures are being enacted, is for more resources and fewer requirements. Homelessness isn’t a data problem; it’s a carpentry problem, and a universal basic income or universal health insurance would allocate care far better than a gauntlet of automated forms. Eubanks points out that automation, despite its promised efficiencies, has coincided with kicking people off of assistance programs. In 1973, nearly half of people under the poverty line received AFDC (Aid to Families with Dependent Children), but a decade later that was 30 percent (coinciding with the introduction of the computerized Welfare Management System) and now it’s less than 10 percent. Automated management is a tool of plausible deniability, allowing elites to believe that the most worthy of the poor are being taken care of and that the unworthy don’t deserve care, as evidenced by their failure to comply with various requirements to submit information and be subjected to surveillance.
Eubanks begins with the most obvious disaster: Indiana’s expensive contract with IBM to get rid of most caseworkers and automate medical coverage. Thousands of people were wrongly denied coverage, creating trauma for medically vulnerable people even when the denials were ultimately reversed. Indiana’s failure to create a working centralized system led to some backlash. Eubanks quotes people who suggest that the result of the backlash was a hybrid human-computer system, which restored almost enough caseworkers to deal with the people who make noise, but not enough for those who can’t. Of course, human caseworkers have their own problems—accounts of implicit and even explicit racial bias abound—but discrimination is easily ported to statistical models, such that states with higher African-American populations have “tougher rules, more stringent work requirements, and higher sanction rates.” And Indiana’s automated experiment disproportionately drove African Americans off the TANF (Temporary Assistance for Needy Families) rolls, perhaps in part because the system treated any error (including those made by the system itself) as deliberate noncompliance, and many people simply gave up.
The Los Angeles homelessness story is different, but not different enough. It provides a useful contrast of a “progressive” use of data and computerization. The idea was to create “coordinated entry,” so that homeless people who contacted any service provider would be connected with the right resources, sorting between the short-term and long-term homeless, who need different services, some of which can be less than helpful if given to the wrong groups. There’s a lot of good there, including the idea of “housing first”: rather than limiting housing only to those who are sober, employed, etc., the aim is to get people housed because of how hard all those other things are without housing. Eubanks profiles a woman for whom coordinated entry was a godsend.
But Eubanks also identifies two core problems: (1) The system itself is under-resourced; all the coordination in the world won’t help when there are only 10 beds for every 100 people in need of them. (2) The information collected is invasive and contributes to the criminalization and pathologization of poor people. The data are kept with minimal security and no protection against police scrutiny, which is particularly significant because, as Eubanks rephrases Anatole France, “so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes.” Homeless people can rarely pay tickets, and so the unpaid fines turn into warrants (turning into days in jail when they can’t afford bail, even though these kinds of nuisance charges are usually dismissed once in front of a judge). People in the database turn into fugitives.
These two problems reinforce each other. Given the low chance of getting help, people are less willing to explain their circumstances, often stories of escalating misfortune and humiliation, to the representative of the state’s computer. The resource crunch also contributes to workers’ felt imperative to find the most deserving and thus to scrutinize every applicant for appropriate levels of dysfunctionality. Too little trauma, and services might be deemed unnecessary. But too much dysfunctionality can also be disqualifying—the housing authority might determine that a client is incapable of living independently. One group of caseworkers Eubanks discusses “counsel their clients to treat the interview at the housing authority like a court proceeding.” They also see vulnerable clients rejected by landlords; Section 8 vouchers to pay for housing are nice, but still require a willing landlord, and the vouchers expire after six months, meaning that a lot of clients just give up. Meanwhile, “[s]ince 1950, more than 13,000 units of low-income housing have been removed from Skid Row, enough for them all.” It’s also worth noting how much discretion remains with humans, despite the appearance of Olympian objectivity in a housing need score: clients are assessed based on self-reports, and they won’t always tell people they haven’t grown to trust about circumstances bearing on their needs, including trauma.
What really mattered to getting resources devoted to addressing homelessness in Los Angeles, Eubanks argues, was rights, not data. Court rulings found that routine police practices—barring sleeping in public and confiscating and destroying the property of homeless people found in areas where they were considered undesirable—were unconstitutional. Once that happened, tent cities sprang up in places visible to people with money and power. Better data helped in identifying what resources were needed where, but tent cities were the driver of reform.
Finally, the experience of child welfare prediction software in Allegheny County, Pennsylvania, has continuities with and divergences from the other two stories. The software is currently used only to back up individual caseworkers’ determinations of whether to further investigate child abuse based on a call to the child welfare hotline, though Eubanks already saw caseworkers tweaking their own estimates of risk to match the model’s, an instance of automation bias that ought to alarm us. Some of the problems were statistical: the number of child deaths and near-deaths in the county is thankfully very low, and you can’t build a good model with a handful of cases a year for a population of 1.23 million.
Setting the base-rate problem aside, you can’t actually measure levels of child abuse. You can measure proxies, such as how many calls to CPS (Child Protective Services) are made and how many children CPS removes from a home. As a result, the automated system ends up predicting “decisions made by the community (which families will be reported to the hotline) and by the agency and the family courts (which children will be removed from their families), not which children will be harmed.” Unfortunately, those proxies are precisely the ones we know are infected with persistent racial and class bias, so that bias is baked into the predictions. This is the same problem explained so well in Cathy O’Neil’s Weapons of Math Destruction, a good book to read along with this one.
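The proxy problem can be shown with back-of-the-envelope numbers (mine for illustration, not the county’s): if two neighbourhoods have identical underlying rates of maltreatment but different rates of being reported, then labels built from reports reproduce the reporting gap rather than the behaviour.

```python
# Hypothetical figures for illustration only.
per_thousand_true_cases = 50          # identical in both neighbourhoods
reporting_pct = {"A": 90, "B": 40}    # share of true cases that get reported

# Report-based "ground truth" per thousand residents:
reported_cases = {g: per_thousand_true_cases * pct // 100
                  for g, pct in reporting_pct.items()}
print(reported_cases)  # → {'A': 45, 'B': 20}
# A model calibrated to these labels scores neighbourhood A as more
# than twice as risky as B, even though the true rates are equal: it
# has learned the surveillance gap, not the underlying behaviour.
```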
In Allegheny County itself, “the great majority of [racial] disproportionality in the county’s child welfare services arises from referral bias, not screening bias.” Sometimes this arises from perceptions of neighborhoods being bad, so the threshold for reporting someone from those neighborhoods is lower—which in the US means minority neighborhoods. But the prediction system “focuses all its predictive power and computational might on call screening, the step it can experimentally control, rather than concentrating on referral, the step where racial disproportionality is actually entering the system.” And it gets worse: the model is evaluated for whether it predicts future referrals. “[T]he activity that introduces the most racial bias into the system is the very way the model defines maltreatment.”
In rural or suburban areas, where witnesses are rarer, no one may call the hotline. Families with enough resources use private services for mental health or addiction treatment and thus don’t create a record available to the state (if they don’t directly talk about child abuse in a way that triggers mandatory reporting). Either way, those disproportionately whiter and wealthier families stay out of the system for conduct that would, if they were visible to the system, increase their risk score. The system can provide very useful services, but those services then become part of the public record, helping define a family as at-risk. A child whose parents were investigated by CPS now has a record of interaction with the system that, when she becomes a mother, will increase her risk score if someone reports her. Likewise, use of public services is coded as a risk factor. A quarter of the predictive variables in the model are “direct measures of poverty”—TANF, SSI (Supplemental Security Income), SNAP (Supplemental Nutrition Assistance Program), and county medical assistance. Another quarter of the predictive variables measure “interaction with juvenile probation” and the child welfare agency itself, when “professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting” than poorer families. Nuisance calls by people with grudges are also a real problem.
Even if that didn’t bother you, consider this: of 15,000 abuse reports in 2016, at its current rate of (proxy-defined) accuracy, the system would produce 3,600 incorrect predictions. And the planned model is supposed to be “run on a daily or weekly basis on all babies born in Allegheny County.” This is a big step forward not just in extending the tech to everyone, but also in commitment to prediction. Prediction is about guessing how poor people might behave in the future based on data from their networks, not just about judging their past individual behavior, and thus it can infect entire communities and generations. At the same time, “digital poorhouses,” as Eubanks calls the networks into which data about poor people are fed, are hard to see and hard to understand, making them harder to organize against.
Eubanks also points out that parents can naturally resent outside scrutiny and often feel that once the child welfare system is involved the standards keep getting raised on them, no matter what they try to do. And caseworkers interpret resistance and resentment as danger signs. While these reactions aren’t directly dependent on the technology, they are human behaviors that change what the technology does in the world.
In theory, big data could increase transparency and decrease discrimination where that comes from the humans in the system. Unfortunately, that doesn’t seem to be what’s happening. Among other things, the purported “transparency” of algorithms, even putting trade secrets aside, is very much a transparency for the elite who can figure the code out, not for ordinary participants in democratic governance, who basically have to take experts’ explanations on faith.
In addition, Eubanks finds:
the philosophy that sees human beings as unknowable black boxes and machines as transparent…deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try to understand each other. Poor and working-class people in Allegheny County want and deserve more: a recognition of their humanity, an understanding of their context, and the potential for connection and community.
This sounds great, but I wonder whether it is fully convincing in the fallen world in which we live. Still, given that other interventions exist that wouldn’t sort the “worthy” from the “unworthy” in the ways current underfunded services are forced to do, it is certainly persuasive to argue that we shouldn’t settle for moving from biased caseworkers to biased algorithms.
Along with non-technical solutions, Eubanks offers some ethics for designers, focusing on whether the tools they make increase the self-determination and agency capabilities of the poor, and whether they’d be tolerated if targeted at the non-poor. I think she’s overly optimistic about the latter criterion, at least as applied to private corporate targeting, which we barely resist. The example of TSA airport screening is also depressing. Perhaps I’d suggest the modification that, if we expect wealthier people to buy their way out of the system, as they can with TSA PreCheck, CLEAR, or Global Entry (at least if they’re not Muslim), then there is a problem with the system. Informed consent and designing with histories of oppression in mind, rather than assuming that equity and good intentions are the default baselines, are central to her vision of good technological design.
Like the far more caustic Evgeny Morozov, Eubanks contends that we have turned to technology to solve human problems in ways that are both corrupting and self-defeating. And Eubanks doesn’t focus the blame on Silicon Valley. The call for automation is coming from inside the polity. In fact, while IBM comes in for substantial criticism for overpromising in the Indiana example, the real drivers in Eubanks’ story are the policy wonks who are either trying to shrink the system until it can be drowned in the bathtub (Indiana), or sincerely trying to build something helpful while resources are continually being drained from the system (Los Angeles and Pennsylvania).
Ultimately, Eubanks argues, the problem is that we’re in denial about poverty, an experience that will happen to the majority of Americans for at least a year between the ages of 20 and 65, while two-thirds of us will use a means-tested public benefit such as TANF, SNAP, Medicaid, or SSI. But we persist in pretending that poverty is “a puzzling aberration that happens only to a tiny minority of pathological people.” We pass a suffering man on the street and fail to ask him if he needs help. We don’t keep our tormented child in an isolated place, as they do in Omelas. Instead of walking away, we walk by—but we don’t meet each other’s eyes as we do so. This denial is expensive in so many ways—morally, monetarily, and even physically, as we build entire highways, suburbs, private schools, and prisons so that richer people don’t have to share in the lives of poorer people. It rots politics: “people who cannot meet each other’s eyes will find it very difficult to collectively govern.” Eubanks asks us to admit that, as Dan Kahan and his colleagues have repeatedly demonstrated in work on cultural cognition, our ideological problems won’t be solved with data, no matter how well formed the algorithm.
There is a relatively new SSRN source I have found to be very useful: the Chinese Law e-Journal sponsored by the University of Hong Kong Faculty of Law (edited by Fu Hualing and Shitong Qiao, and thus referred to as Fu and Qiao, which appropriately might be translated as a “happy or blessed bridging”). This source is very broad with regard to the subjects it covers—many among them relating to Technology Law—and provides valuable insight into how mainly, but not exclusively, Chinese researchers view developments in China and in the world.
Internet Governance: Exploration of Power Relationship, by Yik Chan Chin and Changfeng Chen, is included in this e-Journal and was presented at the 2017 Giganet Symposium in Geneva in December of that year. That symposium was held back to back (“Day Zero”) with the annual meeting of the Internet Governance Forum (IGF), a United Nations forum that sees itself as perhaps the leading example of a multistakeholder platform for governance. The paper looks at the reality of Internet governance in China, in search of a mechanism that comes close to the IGF’s multistakeholder model. It provides both a valuable account of the realities of Internet governance in China and a method for thinking about what constitutes power in blends of multistakeholder and directive governance.
The authors describe in detail the Beijing Internet Association (BIA), a body comprising more than 100 public and private entities that acts as an intermediary between those entities and government agencies. The researchers analyzed this association by using social network analysis, questioning the actors in this setting about their interrelations. Their aim is to identify what they call “the significant force in shaping of Internet governance” power in China.
The authors identify power through three methods: (1) by identifying communication structures between the actors (for example: Which actors can communicate directly? Are there nodes that monopolize interactions?); (2) by assessing the capability of actors to act as brokers, i.e., the ability to bring other actors together to act and share information; and finally (3) by “capacity,” defined by the authors as a set of abilities to understand issues and influence interests. Using this methodology, the authors have—not surprisingly in the Chinese context—identified the secretariat of the BIA as the decisive seat of power in that Internet governance regime.
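The brokerage idea in method (2) can be illustrated with a toy sketch. The network and the ego-brokerage measure below are my own illustration, not the authors' data or method: a hub node scores high because most of its neighbors can reach each other only through it.

```python
from itertools import combinations

# Hypothetical toy network: a "secretariat" hub linking member entities
# that are mostly not tied to each other directly.
edges = {
    ("secretariat", "firm_a"), ("secretariat", "firm_b"),
    ("secretariat", "firm_c"), ("secretariat", "agency_x"),
    ("firm_a", "firm_b"),  # one direct tie between members
}

def neighbors(node):
    """All nodes sharing an edge with `node` (edges are undirected)."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def broker_score(node):
    """Count neighbor pairs with no direct tie -- pairs that must go
    through `node` to connect (a simple ego-brokerage measure)."""
    nbrs = sorted(neighbors(node))
    return sum(1 for u, w in combinations(nbrs, 2)
               if (u, w) not in edges and (w, u) not in edges)

print(broker_score("secretariat"))  # 5: brokers five of its six neighbor pairs
print(broker_score("firm_a"))       # 0: its only neighbor pair is directly tied
```

In a structure like the BIA's, a node with a dominant broker score of this kind sits on nearly every path between the other actors, which is one formal way of expressing what the authors mean by the secretariat's decisive position.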
Nevertheless, they still regard the BIA as a structure that does incorporate multistakeholder interests, even if strongly directed by government and the Party via its secretariat. The BIA, according to their research, relies strongly on social rather than formal or legal binding forces, using coordination rather than directives, indicating a multistakeholder approach. The BIA is perceived as a pragmatic response to the complexities of the Internet, and a result of learning from failures of more directive interventions. The BIA oscillates between being a dissemination and feedback mechanism for government information and directives, and a self-regulatory body with the described elements of multistakeholderism. The authors also point out differences from other multistakeholder concepts, and refer to internal problems of the BIA model, in particular because it seeks to integrate commercially competing interests. Finally, they discuss the prospects of the BIA developing a more self-coordinating, rather than directive-oriented, governance structure.
The description of the BIA in the paper provides useful information for those not too familiar with the more detailed workings of Internet governance mechanisms in China. Some of the problems the BIA is dealing with sound familiar, even if they may have different political connotations, such as the establishment of an “anti-rumor network,” not unlike attempts in other Internet governance structures to address “Fake News” and political manipulation.
Beyond such detailed insights into the realities of Internet regulation in China, the article achieves three things:
(1) It shows that even in China, inclusive governance mechanisms are used to address the limitations of direct centralized government regulation of complex technical, economic, and social issues, even if these mechanisms leave no doubt where the final decision-making power is situated.
(2) While those mechanisms might be seen by some as an indicator of a possible global convergence of Internet governance models, this article invites us to refocus on the role of power in current multistakeholder settings.
(3) The article provides us with a tool set that can help us in assessing what constitutes “power” in the context of mixed governance.
William McGeveran’s new casebook on Privacy and Data Protection Law announces the death of the “death march” that anyone who has ever taught or taken a course in Information Privacy Law has encountered. The death march is the slog in the second half of the semester through a series of similar-but-not-identical federal sectoral statutory regimes, each given just one day of instruction, such as the Privacy Act, FCRA, HIPAA, Gramm-Leach-Bliley, and FERPA. Professors asked to cover so much substantive law beyond their area of scholarly focus (nobody can focus on all of these) usually resort to choosing only two or three. Even then, the coverage tends to be cursory and unsatisfying.
The death march points to a larger problem: information privacy law doesn’t really exist. At best, privacy law is an assemblage of barely related bits and pieces. The typical privacy course covers constitutional law, a little European Union data protection, a tiny bit of tort, some state law, and the death march of federal statutes. The styles of legal practice covered run the gamut from criminal prosecution and defense, to civil litigation, regulatory practice, corporate governance, and beyond. To justify placing so much in one course, we try futilely to bind together these bits and pieces through broad themes such as harm, social norms, expectations of privacy, and technological change.
My long-held doubt about the coherence of privacy law has led me to teach the course a bit apologetically, feeling like a fraud for pretending to find connections where there are almost none. I’m pleased to report that my belief isn’t universally held: McGeveran’s compelling new casebook is built on the idea that privacy law can be rationalized into a coherent area of practice and pedagogy, one it presents in an organized and tightly woven structure.
I don’t think I’m alone in the belief that privacy law lacks coherence. Daniel Solove, in his magisterial summary of privacy law, Understanding Privacy, argues that rather than give privacy a single, unified definition, the best we can do is identify a Wittgensteinian set of family resemblances of related concerns. Solove’s very good casebook on Information Privacy Law, co-authored with Paul Schwartz, reflects this pragmatic resignation. Their book starts with a long chapter quoting many scholars who cast privacy in different lights and philosophical orientations. Solove and Schwartz don’t do much to try to reconcile these inconsistent voices, suggesting that we ought not try to find any unified theory or consistent coherence in this casebook or this field. Having given up on coherence in chapter one, the rest of the book reads like a series of barely related silos. It’s no wonder that the authors also offer their book sliced into four smaller volumes, which to my mind work better standing on their own.
The other leading, also excellent, casebook, Privacy Law and Society, by Anita Allen and Marc Rotenberg, follows a similar organization, but without the introductory philosophical debate. It too presents privacy law as silos of substance and practice, dividing the field into five broad, but largely disconnected areas: tort, constitutional law, federal statutes, communications privacy, and international law.
McGeveran takes a very different approach. He divides his casebook into three parts, the first two advancing the coherence thesis, both representing refreshingly creative syntheses of privacy law. In Part One, McGeveran provides “Foundations,” which devotes a relatively short chapter each to constitutional law, tort law, consumer protection law, and data protection. McGeveran wisely resists the urge to tell any of these four stories at this point in their full depth, delaying parts of each for later in the book. This survey method gives the student a better appreciation for the most important tools in the privacy lawyer’s toolkit; encourages more explicit comparisons between the four categories; and allows for learning through repetition and reinforcement when the topics are revisited later.
The other major innovation is McGeveran’s decision to single out consumer protection law as a distinct area of practice. This builds on work from Solove and Woodrow Hartzog, who have argued that we should treat the jurisprudence of the FTC as a form of common law, and from Danielle Citron, who has pointed to state attorneys general as unheralded great protectors of privacy. McGeveran’s book embraces both arguments, elevating the work of the FTC and state AGs to their due places as primary pillars of U.S. privacy law. This modernizes teaching of the subject, by reflecting what privacy practice has become in the 21st century, with many privacy lawyers advising clients about the FTC far more frequently than they think about tort or constitutional law.
Part Two is even more innovative. It consists of four chapters that follow stages in the “Life Cycle of Data”: “collection”, “processing and use”, “storage and security”, and “disclosures and transfers.” Solove’s influence is again felt here, as these stages echo the major parts of the privacy taxonomy he introduced in Understanding Privacy. Each stage of Part Two introduces new substantive law, but organized around the types of data flows they govern. This prepares students for the issue spotting they will encounter in practice, centering on the data rather than on the artificial boundaries between areas of law. The techie in me appreciates the way this focuses student attention on the broad theme of the impact of technology on privacy.
Because these two parts are so innovative and successful, they serve as the spoonfuls of sugar that help the death march of Part Three go down (although admittedly even this part was still a bit of a slog when I taught from the book this past fall). Students are primed by this point to place statutes like FERPA or HIPAA into the legal framework of Part One and the data lifecycle of Part Two, making them reinforcing examples of the coherent whole rather than disconnected silos. This also reduces the costs (and the guilt) for instructors of cutting sections of the death march. They understand that, thanks to the foundational structures of Part One and Two, their students will be better equipped to encounter, say, educational privacy for the first time on the job.
Finally, as a work of scholarship, not merely pedagogy, McGeveran’s argument for the coherence of privacy law might be an important marker in the evolution of our still relatively young field. Roscoe Pound said that Warren & Brandeis did “nothing less than add a chapter to our law,” a quote well-loved by privacy law scholars. William Prosser has been credited for taking the next step, turning Warren and Brandeis’s concerns into concrete legal doctrine, in the form of the four privacy torts.
This book is positively Prosserian in its aspirations. McGeveran attempts to organize, rationalize, and lend coherence to a messy, incoherent set of fields that we’ve adopted the habit of placing under one label, even if they do not deserve it. I’m not entirely convinced that he has succeeded, that there is something singular and coherent called privacy law, but this book is the best argument for the proposition I have seen. And as a teacher, it is refreshing to leaven my skepticism with this well-designed, compelling new classroom tool.
There is a remarkable body of work on the US government’s burgeoning array of high-tech surveillance programs. As Dana Priest and Bill Arkin revealed in their Top Secret America series, there are hundreds of entities that enjoy access to troves of data on US citizens. Ever since the Snowden revelations, this extraordinary power to collate data points about individuals has caused unease among scholars, civil libertarians, and virtually any citizen with a sense of how badly wrong supposedly data-driven decision-making can go.
In Big Data Blacklisting, Margaret Hu comprehensively demonstrates just how well-founded that suspicion is. She shows the high stakes of governmental classifications: No Work, No Vote, No Fly, and No Citizenship lists are among her examples. Persons blackballed by such lists often have no real recourse—they end up trapped in useless intra-agency appeals under the exhaustion doctrine, or stonewalled from discovering the true foundations of the classification by state secrecy and trade secrecy laws. The result is a Kafkaesque affront to basic principles of transparency and due process.
I teach administrative law, and I plan to bring excerpts of Hu’s article into our due process classes on stigmatic harm (to update lessons from cases like Wisconsin v. Constantineau and Paul v. Davis). What is so evident from Hu’s painstaking work (including her diligent excavation of the origins, methods, and purposes of a mind-boggling alphabet soup of classification programs) is the quaint, even antique, nature of the Supreme Court’s decisionmaking on stigmatic harm. A durable majority on the Court has held that erroneous, government-generated stigma, by itself, is not the type of injury that violates the 5th or 14th Amendment. Only a concrete harm immediately tied to a reputational injury (stigma-plus) raises due process concerns. As Eric Mitnick has observed, “under the stigma-plus standard, the state is free to stigmatize its citizens as potential terrorists, gang members, sex offenders, child abusers, and prostitution patrons, to list just a few, all without triggering due process analysis.” Mitnick catalogs a litany of commentators who characterize this standard as “astonishing,” “puzzling,” “perplexing,” “cavalier,” “wholly startling,” “disturbing,” “odious,” “distressingly fast and loose,” “disingenuous,” “ill-conceived,” an “affront [to] common sense,” “muddled and misleading,” “peculiar,” “baroque,” “incoherent,” and my personal favorite, “Iago-like.” Hu shows how high the stakes have become thanks to the Court’s blockage of sensible reform of our procedural due process jurisprudence.
Though presented with numerous opportunities to do so, the Court simply refuses to consider deeply the cumulative impact of a labyrinth of government classifications. We need legal change here, Hu persuasively argues, because there are so many problems with the analytical capacities of government agencies (and their contractors), as well as the underlying data they are relying on. Cascading, knock-on effects of mistaken classification can be enormous. In area after area, from domestic law enforcement to anti-terrorism to voting roll review, Hu collects studies from experts that indicate not merely one-off misclassifications, but a deeper problem of recurrent error and bias. The database bureaucracy she critiques could become an unchallengeable monolith of corporate and government power arbitrarily arrayed against innocents, which prevents them from challenging their stigmatization both judicially and politically. When the state can simply use software and half-baked algorithms to knock legitimate voters off the rolls, without notice or due process, the very foundations of its legitimacy are shaken. Similarly, a lack of programmatic transparency and evaluative protocols in many settings makes it difficult to see how the traditional touchstones of the legitimacy of the administrative state could possibly be operative in some of the databases Hu describes.
Many scholars in the field of algorithmic accountability have been focused on procedural due process, aimed at giving classified citizens an opportunity to monitor and correct the data stored about them, and the processes used to analyze that data. Hu is generous in her recognition of the scope and detail of that past work. But with the benefit of her comprehensive, trans-substantive critique of big data blacklisting programs, she comes to the conclusion that extant proposals for reform of such programs may not do nearly enough to restore citizens’ footing, vis-à-vis government, to the level of equality and dignity that ought to prevail in our democracy. Rather, Hu argues that, taken as a whole, the current panoply of big data blacklisting programs offend substantive due process: basic principles that impose duties on government not to treat persons like things.
This is a bold intellectual move that reframes the debate over the surveillance state in an unexpected and clarifying way. Isn’t there something deeply objectionable about the gradual abdication of so many governmental, humanly-judged functions to private sector, algorithmically-processed databases and software—especially when technical complexity is all too often a cloak for careless or reckless action? For someone unfamiliar with the reach, fallibility, and stakes of big data blacklisting, it might seem jarring to contemplate that a pervasive, largely computerized method of classifying citizens might be as objectionable as, say, a law forbidding the teaching of foreign languages, or denying the right to marry to prisoners (other laws found to violate substantive due process). However, Hu has done vital work to develop a comprehensive case against big data blacklisting that makes several of its instantiations seem at least as offensive to constitutional values as those restrictions.
Moreover, when blacklisting itself is so resistant to traditional procedural due process protections (for example, in cases of black box processing), substantive due process claims may be the only way to relieve citizens of the burdens it imposes. Democratic processes cannot be expected to protect the discrete, insular minorities targeted unfairly by big data blacklisting. Even worse, these “invisible minorities” may never even be able to figure out exactly what troubling classifications they have been tarred with, impairing their ability to even make a political case for themselves.
Visionary when it was written, Big Data Blacklisting becomes more relevant with each data breach and government overreach in the news. It is agenda-setting work that articulates the problem of government data processing in a new and compelling way. I have rarely read work that so meticulously credits pathbreaking work in the field, while still developing a unique perspective on a cutting edge legal issue. I hope that legal advocacy groups will apply Hu’s ideas in lawsuits against arbitrary government action cloaked in the deceptive raiments of algorithmic precision and data-driven empiricism.