Daniel Susser, Beate Roessler & Helen Nissenbaum, Online Manipulation: Hidden Influences in a Digital World, available at SSRN
Congress has been scrambling to address the public’s widespread and growing unease about problems of privacy and power on information platforms, racing to act before the California Consumer Privacy Act becomes operative in 2020. Although the moment seems to demand practical and concrete solutions, legislators and advocates should pay close attention to a very timely and useful work by a set of philosophers. In a working paper entitled Online Manipulation: Hidden Influences in a Digital World, three philosophers–Daniel Susser, Beate Roessler, and Helen Nissenbaum–offer a rich and nuanced meditation on the nature of “manipulation” online. This article might provide the conceptual clarity required for the broad and sweeping kind of new law we need to fix much of what ails us. Although the article is theoretical, it could lead to some practical payoffs.
The article’s most important contribution is the deep dive it provides into the meaning of manipulation, a harm separate and distinct from other harms more often featured in today’s technology policy discourse. Powerful players routinely deprive us of an opportunity for self-authorship over our own actions. Advertisers manipulate us into buying what we don’t need; platforms manipulate us into being “engaged” when we would rather be “enlightened” or “provoked” or “offline”; and political operatives manipulate us into voting against our interests. Taken together, these incursions into individual autonomy feed societal control, power imbalances, and political turmoil. The article builds on the work of many others, including Tal Zarsky, Ryan Calo (in an article that has received well-deserved praise from Zarsky in these pages), and Frank Pasquale, who have all written about the special problems of manipulation online.
The heart of the paper is an extended exploration of what it means to manipulate and how manipulation differs from other forms of influence, both neutral (persuasion) and malign (coercion). The philosophers focus on the hidden nature of manipulation. If I bribe or threaten or present new evidence to influence your decision-making, you cannot characterize what I am doing as manipulation, according to their definition, because my moves are made in plain sight. I might be able to force you to make the decision I desire, which might amount to problematic coercion, but I have not manipulated you.
This insistence on hidden action might not square with our linguistic intuitions. We indeed might feel manipulated by someone acting in plain sight, and the authors are not trying to argue against these intuitions. Instead, they claim that by limiting our definition of manipulation to hidden action, we can clear up conceptual murkiness on the periphery of how we define and discuss different forms of discreditable influence. This is very useful ground clearing, helping manipulation stand on its own as a category of influence we might try to attack through regulation or technological redesign.
The piece convincingly links increased fears of manipulation, thus defined, to the current and likely future state of technology and the power of information platforms in particular. The pervasive surveillance of today’s information technology gives would-be manipulators access to a rich trove of data about each of us (Dan Solove’s “digital dossiers,” Danielle Citron’s “reservoirs of danger”), which they can buy and use to personalize their manipulations. Knowing the secret manipulation formula for each individual, they can then use the “dynamic, interactive, intrusive, and personalized choice architectures” of platforms to give rise to what Karen Yeung calls “hypernudging.” Online tools hide such behavior because they are designed to recede into the background; in one of the more evocative analogies in the paper, the authors argue that information technology operates more like eyeglasses than magnifying glasses, because we forget about eyeglasses while we are using them. “A determined manipulator could not dream up a better infrastructure through which to carry out his plans” than today’s technological ecosystem, they conclude.
Having crafted their own definition of manipulation, and after connecting it to modern technology, the authors turn last to theories of harm. They focus on harm to autonomy, on the way manipulation undermines the ability of the manipulated “to act for reasons of their own.” We are treated like puppets by puppet masters pulling our strings; “we feel played.”
The cumulative effects of individual manipulations harm society writ large, posing “threats to collective self-government.” Consider the bolder claims of psychographic targeting made by the people at Cambridge Analytica before the last election, which if true suggest that “democracy itself is called into question” by online manipulation.
If Congress wants to enact a law prohibiting manipulative practices, this article offers some useful definitions: a manipulative practice is “a strategy that a reasonable person should expect to result in manipulation,” and manipulation is “the covert subversion of an individual’s decision making.” Congress would be wise to enact this kind of law, perhaps adding it as a third prohibited act alongside deception and unfairness in Section 5 of the FTC Act.
In addition, Congress could breathe new life into notice-and-choice regimes. Currently, we are asked to take for granted that users “consent” to the extensive collection, use, and sharing of information about them because they clicked “I agree” on a terms-of-service pop-up window they once saw back in the mists of time. Were we to scrutinize the design of these pop-ups, assessing whether online services have used manipulative practices to coax users to “agree,” we might recognize the fiction of consent for what it really is. We should implicitly read or explicitly build into every privacy law’s consent defense a “no dark patterns” proviso, to borrow the phrase scholars like Woody Hartzog use for manipulative consent interfaces.
Finally, although these authors ground their work in the concept of autonomy, an unmeasurable concept not well-loved by economists, their argument could resonate in the god-forsaken, economics-drenched tech policy landscape we are cursed to inhabit. Manipulation, as they have defined it, exacerbates information asymmetry, interfering with an individual’s capacity to act according to preferences, resulting in market failure. A behavioral advertiser with a digital dossier “interferes with an agent’s decision-making process as they deliberate over what to buy. Worse yet, they may be enticed to buy even when such deliberation would weigh against buying anything at all.”
In fact, the authors go to lengths to explore how harmful manipulation interacts with the concept of nudges. Some nudges should count as manipulation, when their designs and mechanisms are hidden, even if they bring about positive behavioral change. The architects of the theory of nudges might even embrace this conclusion. The article quotes liberally from Cass Sunstein, who has explored the ethics of government-imposed nudges, acknowledging their sometimes manipulative quality. The article resonates with recent ruminations by Richard Thaler, who has coined a new term, “sludges,” the negative mirror image of positive nudges. These fathers of nudges are finally cottoning on to what privacy scholars have been writing about for years: at least online, the negative sludges we encounter seem to outnumber the positive nudges, with the gap widening every day.
We have a new target in our sights, whether we call them manipulative practices, dark patterns, or sludges: the technological tools and tricks that powerful information players use to treat us like their puppets and cause us to act against our own self-interest. By lending precision to the meaning of manipulation, this article can help us meet the challenge of many of the seemingly impossible problems before us.
Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, __ J. Crim. L. & Criminology __ (forthcoming), available at SSRN
Are some types of robotic judging so troubling that they simply should not occur? In Artificial Intelligence and Role-Reversible Judgment, Kiel Brennan-Marquez and Stephen E. Henderson say yes, confronting an increasingly urgent question. They illuminate dangers inherent in the automation of judgment, rooting their analysis in a deep understanding of classic jurisprudence on the rule of law.
Automation and standardization via software and data have become a regulative ideal for many legal scholars. The more bias and arbitrariness emerge in legal systems, the more their would-be perfecters seek the pristine clarity of rules so clear and detailed that they can specify the circumstances of their own application. The end-point here would be a robotic judge, pre-programmed (and updated via machine learning) to apply the law to any situation that may emerge, calculating optimal penalties and awards via some all-commensurating logic of maximized social welfare.
Too many “algorithmic accountability” reformers, meanwhile, are either unaware of this grand vision of a legal singularity or acquiescent in it. They want to use better data to inform legal automation, and to audit it for bias. The more foundational question is less often asked: Does the robo-judge present not simply problems of faulty algorithms and biased or inaccurate data, but something more fundamental, a challenge to human dignity?
Brennan-Marquez and Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.” The problem with an avatar judge, or even some super-sophisticated robot, is that it cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.”
Brennan-Marquez and Henderson derive this principle from first principles of self-governance:
In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment—who could, after all, have been any of us—ought to be able to say:
This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party—who could likewise have been any of us—ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.
Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making, it still has intrinsic value.”
Brennan-Marquez and Henderson are building on a long tradition of scholarship which focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, the U.S. Supreme Court’s famous Mathews v. Eldridge calculus has frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity. Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the brutal need resulting from the loss of benefits, the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who hears must be the one who decides. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.
This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.” A “responsible robot” may deflect blame or liability from the business that set it into the world. It cannot truly be punished, because it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied.
Brennan-Marquez and Henderson are particularly insightful on how the application of law needs to be pervasively democratic in order to be legitimate. That is most obvious, of course, in the jury, but their account refines our common understanding of the practice. To understand “why the jury has long been celebrated as an organ of ‘folk wisdom,’” Brennan-Marquez and Henderson argue:
The idea is not that jurors have a better sense of right and wrong than institutional actors do. (Though that may also be true.) It is, more fundamentally, that jurors respond to the act of judgment as humans, not as officials, and in this respect, jury trials are a model of what role-reversibility makes possible: even when a jury trial does not lead to a different outcome than a trial before an institutional judge (or other fact-finding process), it facilitates the systemic recognition of judgment’s human toll. And even more fundamentally, it transforms the trial into a democratic act.
The common humanity of the judge (or agency director, or commissioner) and litigants is another reflection of the democratic nature of the polity that gives rise to a legal system.
It should come as little surprise that authoritarian legal systems are among the most enthusiastic adopters of automatic, computational judgments of guilt or “trustworthiness.” Their concepts of “rule by law” place authorities above the citizenry they judge. By contrast, rule of law values, rooted in a democratic polity, require that any person dispensing justice also be eligible to be subject to the laws he or she applies.
Artificial Intelligence and Role-Reversible Judgment is a far-seeing project—one that aims to change the agenda of AI research in law, rather than merely improving its applications. Brennan-Marquez and Henderson carefully review the many objections scholars have raised to the data gathered for legal AI, and the theoretical objections to the vision of “artificial general intelligence” that seems necessary for computational legal systems to emerge. “We do not minimize any of these instrumental arguments in favor of human judgment,” they argue. “They are certainly valid today, and they may survive the next generation of AI. [But this article explores] what should happen if arguments like these do not survive.” The requirement for a human to evaluate arguments and dispense judgments in a legitimate legal system should give pause to those who are now trying to develop artificially intelligent judges. Why pursue the research program if it violates the role reversibility principle, which Brennan-Marquez and Henderson rightly characterize as a basic principle of democratic accountability?
Brennan-Marquez and Henderson’s work is a great example of how a keen phenomenology of the uncanniness and discomfort accompanying a vertiginously technified environment can deepen and extend our understanding of key normative principles. Judged by an avatar, one might wonder: “Who programmed it? What were they paid? Did they understand the laws they were coding? What could I have done differently?” The emerging European right to an explanation is meant to give persons some answers to such queries. But Brennan-Marquez and Henderson suggest that mere propositional knowledge is not enough. The “right to a human in the loop” in legal proceedings gains new moral weight in light of their work. It should be consulted by anyone trying to advance legal automation, and by those affected by it.
Cite as: Frank Pasquale, Empathy, Democracy, and the Rule of Law (May 8, 2019) (reviewing Kiel Brennan-Marquez & Stephen E. Henderson, Artificial Intelligence and Role-Reversible Judgment, __ J. Crim. L. & Criminology __ (forthcoming), available at SSRN), https://cyber.jotwell.com/empathy-democracy-and-the-rule-of-law/
Any Internet regulation—from privacy to copyright to hate speech to network neutrality—must take account of the complex and messy dynamics of meme-fueled conflicts. And for that, An Xiao Mina’s Memes to Movements is an essential guide.
Mina is not a traditional academic. She is a technologist, artist, and critic; her day job is Director of Products at Meedan, which builds tools for global journalism. But Memes to Movements draws fluently on cutting-edge work by scholars like Alice Marwick and Rebecca Lewis, Whitney Phillips, and Sasha Costanza-Chock, among many others. It is an outstanding synthesis, beautifully and clearly written, that gives an insightful overview of media and politics circa 2019.
Mina’s overarching point is that Internet memes—rather than being a frivolous distraction from serious political discourse—have become a central element of how effective social movements advance their political agendas. Their unique combination of virality and adaptability gives them immense social and communicative power. Think of a rainbow-flagged profile picture celebrating the Supreme Court’s same-sex marriage decision in 2015. The rainbow flag is universal; it makes the message of support immediately recognizable. But the picture is specific; it lets the user say, “I, me, personally support the right to marry.”
Memes do many kinds of work for movements. Memes allow participants to express belonging and solidarity in highly personalized ways, as with the rainbows. They let activists in repressive environments skirt the edges of censorship with playful wordplay. They enable activists to cycle rapidly through “prototypes” until they find ones with a compelling mass message. (There is an explicit parallel here to the technology industry’s use of rapid development practices; see also Mina’s recent essay on Shenzhen.) They help movements craft powerful narratives around a single immediately recognizable and easily graspable idea. One of the best extended examples in the book traces the gradual breakout of the #BlackLivesMatter hashtag from a surging sea of related memes: it had a popular poetic power that became widely apparent only as it started to catch on. And finally, the cycle closes: memes also let counter-movements use parodies and remixes to turn the ideas around for their own ends.
One of Mina’s most striking observations is the increasing importance of physical objects as memes, like mass-produced red MAGA caps and individually knitted pink pussy hats. Mina ties their rise both to globalized production and logistics networks and to individual craft work. The embeddedness of physical memes creates a powerful specificity, which in turn can fuel the spread of online ideas. Mina’s examples include the yellow umbrellas held by pro-democracy protesters in Hong Kong and the Skittles dropped by protesters calling attention to the death of Trayvon Martin.
As this last pair illustrates, Memes to Movements is a thoroughly global book. Mina discusses protest movements in the very different political environments of China and the United States with equal insight, and draws revealing parallels and contrasts between the two. The book is particularly sharp on how Chinese authorities sometimes defuse politically potent memes like Grass Mud Horse by allowing the natural forces of memetic drift to dilute them to the point that they no longer uniquely refer to prohibited ideas.
This is also a book that is deeply, depressingly realistic about the uses of power. Activists have no monopoly on memes; state actors deploy them for their own purposes. Government-sponsored memes can take the form of an anti-Hillary image macro or a patriotic pop song that seemingly comes out of nowhere. Indeed, these forms of propaganda are finely tuned to the Internet, just as Triumph of the Will was finely tuned to mass media. Marketers, too, pay close attention to the dynamics of virality, and Mina traces some of the cross-pollination among these different groups competing to use memetic tools most effectively. Kim Kardashian’s skill in promoting criminal justice reform is not so different in kind from her skill as a commercial influencer: she knows how to make a simple idea take off.
Above all, this is a compelling book on how attention functions in the world today, for better and for worse. It is a field guide to how groups and individuals—from the Ayotzinapa 43 to Donald J. Trump—capture attention and direct it toward their preferred aims. Mina was writing perceptively about how Alexandria Ocasio-Cortez was winning Instagram long before it was cool.
What do Internet-law scholars have to learn from a book with very little discussion of Internet law? Just as much as family-law scholars have to learn from books about family dynamics, or intellectual-property scholars have to learn from books about creativity—Memes to Movements is an extraordinary guide to a social phenomenon the legal system must contend with. It describes democratic culture in action: it illustrates the idea-making on which law-making depends; it connects the micro scale of the creation and distribution of individual bits of content to the macro scale of how they shape politics and society. Plus it features elegant prose and charming pixel art by Jason Li. Fifty million cat GIFs can’t be wrong.
As more and more of our daily activities and private lives shift to the digital realm, maintaining digital security has become a vital task. Private and public entities find themselves in the position of controlling vast amounts of personal information and are therefore responsible for ensuring that such information does not find its way into unauthorized hands. In some cases, there are strong incentives to maintain high standards of digital security, as security breaches are costly. When reports on such breaches are made public, they generate reputation costs, lead to regulatory scrutiny, and often call for substantial out-of-pocket expenses to fix. Unfortunately, however, the internal incentives for maintaining high security standards are often insufficient motivators. In such cases, the security measures taken are inadequate, outdated, and generally unacceptable. These are the instances where legal intervention is required.
There are several possible regulatory strategies to try to improve digital security standards. One option calls for greater transparency regarding breaches that led to personal data leakage and other negative outcomes. Another option calls upon the government to set data security standards and enforce them, at least in key sectors (more on these two options and their limitations, below). Yet an additional central form of legal intervention is through private litigation and the court system. However, key doctrinal hurdles in the United States currently make it extremely difficult to sue for damages resulting from security breaches. In an important recent paper, Daniel Solove and Danielle Citron, two prominent privacy scholars, explain what these hurdles are, how to overcome them, and why such doctrinal changes are essential.
As the authors explain, the key to many of the challenges of data security litigation is the concept of “harm,” or the lack thereof. A finding of actual, tangible harm is crucial for establishing standing, which requires demonstrating an injury that is both concrete and actual (or at least imminent). Without standing, the case is thrown out immediately, without further consideration. Additionally, tort-based claims (as opposed to some property-based claims) require a showing of harm. And when examining data security claims, courts require tangible damages to prove harm. Security-related harms are often considered intangible. Therefore, many data security-related lawsuits are either immediately blocked or ultimately fail.
The complex issue of harm, standing, and data security/privacy has recently been addressed by the U.S. Supreme Court in Clapper v. Amnesty International USA (where the Court generally rejected “hypothetical” injuries as sufficient to establish standing) and more recently in Spokeo Inc. v. Robins. In the latter case (addressing standing under the FCRA), the Court recognized, at least in principle, that intangible harms could be considered sufficiently “concrete” if they generate the risk of real harm, and thus provide plaintiffs with standing. Furthermore, an additional case—Frank v. Gaos—is currently before the Supreme Court. While that case focuses on the practice of cy pres settlements in class actions, it appears to incidentally raise, yet again, questions related to standing, harms, and digital security/privacy—this time with regard to referrer headers.
In response to the noted challenges security litigation faces, the authors call upon courts to enter the 21st century and accept changes to the doctrines governing the establishment of harm. They convincingly show that security breaches indeed create both harm and anxiety, albeit of a somewhat different form. In fact, they assert, some courts have already begun to recognize harms resulting from data security breaches. For instance, courts have found that a “mere” increased risk of identity theft constitutes actual harm (even before such theft has occurred) when the data has made its way into the hands of cyber-criminals. The authors prod courts to push further in their expansion of the harm concept in the digital age. They note three major forms of injury which should be recognized in this context: (1) the risk of future injury, (2) the fact that individuals at risk must take costly (in time and money) preventive measures to protect against future injury, and (3) enhanced anxiety.
To make this innovative argument, the authors explain that data security breaches create unique concerns which justify the expansion of the concept of harm. For instance, they explain that damages (which might prove substantial) resulting from data breaches could be delayed. Therefore, recognizing harm at an earlier stage is essential. In addition, they argue that the risk of security harms might deter individuals from engaging in important and efficiency-enhancing activities such as seeking new employment opportunities and purchasing a new home. This is yet another strong argument for immediately creating a cause of action through the recognition of harm.
Judges are usually cautious about creating new rules, especially in common law systems. Yet the authors explain that in other legal contexts, such as medical malpractice, similar forms of intangible harms have already been recognized. They refer to cases based on actions that increased a chance of illness or decreased the chance of recovery. These have been recognized as actual harms—instances somewhat analogous to personal data leakage and the harms that might follow.
Yet broadening the notion of data “harm” has some downsides, such as opening the door to “cheating” and manipulation by plaintiffs. This is because intangible harms are easier to fake or fabricate, and because the definition of intangible harm might be too open-ended. In addition, broadening the notion of harm might lead to confusion for the courts. To mitigate some of these concerns, the authors introduce several criteria to assist courts in establishing and assessing harm in this unique context. These include the likelihood and magnitude of future injury as well as the mitigating and preventive measures those holding the data have taken.
Finally, the authors confront some broader policy questions pertaining to their innovative recommendations. Litigation, of course, is not the only way to try and overcome the problems of insecure digital systems. It probably isn’t even the best way to do so. I have argued elsewhere that courts are often an inadequate venue for promoting cybersecurity objectives. Litigation is costly to all parties. It also might stifle innovation and end up merely enriching the parties’ lawyers. In addition, judges usually lack the proper expertise to decide on these issues. Furthermore, in this context, ex post court rulings are an insufficient motivator to ensure that proper security measures will be set in place ex ante, given the issue’s complexity and the difficulties of proving causation (i.e. the linkage between the firm’s actions or omissions and the damages that follow at a later time).
The authors would probably agree with these assertions and indeed acknowledge most of them in their discussion. Nonetheless, they argue that other regulatory alternatives such as breach notification requirements and regulatory enforcement suffer from flaws as well. This is, no doubt, true. Breach notifications might generate insufficient incentives for data collectors to minimize future breaches, as users might be unable or unwilling to voice or act on their disappointment with the flawed security measures adopted. And data security regulatory enforcement might suffer from the usual shortcomings of governmental enforcement: it can be too minimal, outdated, and at times subject to capture. Litigation, the authors argue, could fill a crucial void when other options fail. They state that “data-breach harms should not be singled out” as problematic relative to other kinds of legal harms. Therefore, courts should have the option to find that harm has been caused, and thus that additional legal actions must be taken, when they have good reasons to do so.
Using doctrinal barriers (such as refraining from acknowledging new forms of harm) to block off specific legal remedies is an indirect and somewhat awkward strategy. Yet it is also an acceptable measure to achieve overall policy goals. The authors convincingly argue that (all) judges should have the power to decide on a case’s merits, yet in doing so the authors inject uncertainty into the already risky business of data security. If this proposal is ultimately accepted, let us hope that judges use this power responsibly. If Solove and Citron’s proposals are adopted, judges should look beyond the hardship of those victimized by data breaches and consider the overall interests of the digital ecosystem before delivering their judgment in digital security cases.
Kristen E. Eichensehr, Digital Switzerlands, 167 U. Pa. L. Rev. ___ (forthcoming 2019), available at SSRN
Battles over the public policy obligations and implications of late 20th-century and early 21st-century technologies have long been fought via metaphor as well as via megabyte and microeconomics. Today, modern information technology platforms are characterized brightly as “generative” and darkly as “information feudalism.” Public policy might be informed by treating some network providers as “information fiduciaries.” Or, borrowing the phrase that prompts Kristen Eichensehr’s thought-provoking paper, tech companies might be characterized as metaphorical “digital Switzerlands.” They might be neutral institutions in their dealings with national governments.
In Professor Eichensehr’s telling, the idea of a corporate digital Switzerland resisting government aggression—refusing to cooperate with government requests for private user information, for example—comes from a recent suggestion to that effect by Brad Smith, president of Microsoft. As she notes briefly, it’s an old idea, not a new one, even if it has migrated from corporation-vs-corporation conflict to state-vs-corporation power dynamics. Ken Auletta’s history of Google reported that back in 2005, Google CEO Eric Schmidt characterized Google’s search engine and advertising platform as a neutral “digital Switzerland” in its treatment of content companies and advertisers. Schmidt was defending the idea that Google had no agenda vis-à-vis incumbent entertainment industry players. Google’s technology produced accurate data about consumer viewing practices. If that data led advertisers to pay less for their ad buys, that wasn’t Google’s intent—or its responsibility. Schmidt’s listener, the then-president of Viacom, erupted in protest: “You’re fucking with the magic!”
Indeed. The reader should take many lessons from Eichensehr’s article. Foremost among them is this: Wandering into the digital Switzerlands of contemporary technology, whether because Microsoft (in its obvious self-interest) says that’s how we should do things or because that’s an objectively useful place to begin, is fucking with the magic—that is, the mythos that guides how scholars and policymakers think about technology purveyors and their civic roles and responsibilities.
Metaphors, it turns out, are the least of our concerns. The point that Smith and Microsoft made with the “digital Switzerland” claim is on its face a primitive and laughable appeal to the idea that technology and technology companies can and should be apolitical and neutral. Eichensehr, appropriately, barely pauses to consider the metaphorical mechanics at work.
Instead, she takes as given that technologies have politics, and that politics have technologies. Reading the paper, I was reminded of Fred Turner’s research. Silicon Valley firms and their allies explicitly borrowed and built on 1960s ideologies of anti-government communalism, so much so that modern information technology came to be seen by its producers, and sold to consumers, as an instrument of personal liberation and freedom. Whether a Mac or a PC, the computer, and later the network, was and is meant to empower individuals to create social order independent of traditional, formal governments, and if necessary in opposition to them. Eichensehr doesn’t dig quite to that level. She skips ahead, helpfully escaping the metaphor wars by relying on “digital Switzerlands” as a potentially useful diagnostic. Her argument is consistent with Turner’s. Ideas have power. Maybe Smith is on to something.
Eichensehr makes a host of interesting observations and asks some critical if provocative questions. The article starts by laying out a basic toolkit. The claim that technology companies might be “digital Switzerlands” (she switches Smith’s singular to the more descriptively apt plural) implicates the foundational idea that a company might be treated as a sovereign, as if it were a country, and the next-level idea that as a sovereign state, a technology company might be fairly characterized as “neutral” under international law. The characterization works in some respects and not in others, but as a starting point it is plausible enough that Eichensehr moves easily to her next step, which is describing and analyzing how that neutral status implicates technology companies both in relation to their individual users and in relation to governments. Microsoft or Facebook might resist government efforts to secure corporate cooperation in investigations that implicate their users. Or they might cooperate. But companies have always had to choose whether to fight or fold in response to government requests for information, or more. Today, the global reach of the largest tech companies, and the fact that they succeed as businesses because of their attractiveness to users and advertisers, distinguish them from the powerful corporate behemoths of earlier eras. Those were powerful and durable for decades as resource extractors, not as modern goodwill generators.
In short, Eichensehr argues that the digital Switzerlands claim has merit. Sometimes, modern technology companies do exercise some of the powers of sovereignty that we traditionally associate with governments. They develop and deploy large-scale trust-based governance infrastructures through their technology platforms. They exercise substantial powers to structure behavior by users. Users discipline the companies via governance to a limited degree, primarily exit. Taken together, those attributes give heft and credibility to the proposition that operating as “digital Switzerlands” may enable technology companies effectively to shield their users from formal government regulation—collecting private information users store via the platform, for example. As Eichensehr notes, that power is limited; it comes at a cost, and with risks. During World War II, Switzerland itself was not only neutral but also passive to the point of complicity with Nazi Germany. With great power—pervasively armed neutrality, in the case of Switzerland; surveillance-based surreptitious data aggregation, in the case of Facebook—comes great responsibility. That responsibility is not always exercised appropriately.
But Eichensehr is less interested in a detailed normative exploration of Facebook’s data collection practices than in using the insight about tech companies as states to build a useful framework for understanding their practices, with governments anchoring one point of the framework, tech companies anchoring a second, and users anchoring a third. She uses that framework to predict outcomes in conflicts where companies might cooperate or resist government efforts to regulate or police the companies’ users. “Stated generally, the Digital Switzerlands concept suggests that companies should fight against or resist governments when the companies perceive themselves to be and can credibly argue that they are protecting the interests of users against governments….” (P. 39.) She tests that hypothesis against some relatively easy, paradigm cases (corporate compliance is more likely when a democratic government is attempting to apply its domestic legislation to users in that jurisdiction), and reviews limitations (the government may be undemocratic; the company may misapprehend its users’ interests; governments may be applying the law extraterritorially; the company may not be aware of government action).
The core case and the exceptions lead Eichensehr to evaluate the normative implications of the framework. Here, her observations are provocative rather than definitive, because she’s challenging some cyber-orthodoxy and some fundamentals of democratic theory. Recall what happens to the magic.
First and most important, Eichensehr knocks the power and freedom of the individual off their shared pedestal as the normative standard for evaluating both government (mis)conduct and corporate practice. That view has to be handled delicately as a philosophical matter, because individual agency is one of the central pillars of democratic theory, but it is refreshingly pragmatic. She cites Madison in support of institutional pluralism. Madison was writing about the dual roles of federal and state governments; in Eichensehr’s telling, treating tech companies as states, “having two powerful regulators, rather than only one, can benefit individuals’ freedom, liberty, and security because sometimes it takes a powerful regulator to challenge and check another powerful regulator.” (P. 49.) The individual isn’t all-powerful in practice. Bigger sometimes is better.
Second, Eichensehr repositions questions of legitimacy and accountability in governance institutions, pushing past political science concepts (“exit,” “voice”), past early cyber-constitutionalism (which described tech companies as merely commercial “merchant sovereigns”), and—implicitly—past easy reliance on critiques of neoliberalism (private appropriation of public functions, embodied in state-sanctioned invocations of contract and property law). She argues that contemporary corporate “citizenship” entails not only how the “state” disciplines those who are subject to its power, but also how the “state” advocates on their behalf. In Digital Switzerlands, she sees novel blends of public functions (defending user interests in privacy against state invasions), private functions (services traded in the market, data collection), and individual and collective identity, woven together at least as tightly as they were in 20th-century company towns, and arguably more so. But companies’ formally private status means that mechanisms of accountability, such as transparency and modes of due process, often can’t be imposed from without. They must be adopted voluntarily, as Google has done with its transparency reports and treatment of Right to Be Forgotten requests.
It’s possible to read Digital Switzerlands as a not-so-subtle defense of the corporate status quo, that corporate statehood is not the world that we might want but is close to the best of the world that we might have. Break up big tech in the name of old-school, consumer-protective antitrust at our peril, one might infer, and instead find ways to require, expect, or just hope that big tech will adopt a better demeanor in a traditional public-oriented sense.
I think that this conservative reading is a mistake. Instead, it’s worth taking the article quite seriously on its own terms, as a thoughtful effort to take apart well-established patterns of thinking about cyberlaw and policy and to reassemble them in a forward-looking and potentially sustainable way. The tech sector may have been naïve and selfish in telling the digital Switzerland story. Digital machines are no more tools of personal liberation and freedom supplied by neutral designers—nor less—than the assembly lines of Henry Ford were sources of individual opportunity provided by benign automobile makers. Yet to some scholars, the dehumanizing factories of the early 20th century produced relatively wealthy communities and class mobility; to its defenders, Facebook gives us identity and community. Eichensehr has taken the first steps toward what may become a larger realignment of arguments about statehood and governance. That project is well worth considering, even if—abracadabra—it may take us in unexpected directions.
Cite as: Michael Madison, Fucking With the Magic
(January 22, 2019) (reviewing Kristen E. Eichensehr, Digital Switzerlands
, 167 U. Pa. L. Rev.
___ (forthcoming 2019), available at SSRN), https://cyber.jotwell.com/fucking-with-the-magic/
Shaanan Cohney, David Hoffman, Jeremy Sklaroff, & David Wishnick, Coin-Operated Capitalism
, __ Columbia L. Rev.
__ (forthcoming), available at SSRN
Oldthinkers unbellyfeel blockchain. We are told that blockchains, cryptocurrencies, and smart contracts are about to revolutionize everything. They remove fallible humans from every step where a transaction could go wrong, replacing them with the crystalline perfection of software. Result: clarity, certainty, and complete freedom from censors and tyrants.
And yet we still don’t get it. Some oldthinkers think that not all regulation is tyranny, while others point to the environmentally disastrous costs of blockchain strip mining. And then there are those of us who think that the entire premise of blockchain boosterism is mistaken, because the new “smart” contracts are not so different from the old “dumb” contracts. Coin-Operated Capitalism, by a team of four authors from the University of Pennsylvania, is the best recent entry in this vein. It is a playful, precise, and damning look at how smart contracts actually function in the real world.
This is one of very few law-and-computer science articles that takes both sides of the “and” seriously, and is one of the best examples I have ever seen of what this field can be. It is a law-review article about an empirical study of contracts and software. To quote the star footnote’s description of the authors’ combined expertise, “Cohney is a fifth-year doctoral student in computer and information science at the University of Pennsylvania, where Hoffman is a Professor of Law, Sklaroff received a JD/MBA in 2018, and Wishnick is a fellow in the Center for Technology, Innovation and Competition.” (Jeremy Sklaroff, the (alphabetically) third author, wrote an unusually good law-review comment on smart contracts last year.) Another nine research assistants helped, presumably with the extensive white-paper reading and coding. It takes a village to write a truly interdisciplinary article.
Coin-Operated Capitalism’s target is the initial coin offering (ICO). As the name suggests, an ICO is a blockchain analogue to a corporate initial public offering (IPO) of equity shares. Instead of receiving stock in a new business, an ICO investor receives tokens that give her a stake in a new smart contract. The token typically gives the holder some transactional rights (the authors’ example is to receive sodas from vending machines) and some control rights (e.g. to vote on investment opportunities, or to approve modifications to some of the terms of the ICO contract), both of which are coded into the smart contract. The promoters use the funds thereby raised for the associated venture (e.g., building and filling the vending machines), for the development and maintenance of the smart contract itself, and sometimes for further investments as directed by the new class of token-holders.
Anyone who has ever heard of securities law should be hearing alarm bells at this point. A typical ICO walks and quacks like “an investment of money in a common enterprise with a reasonable expectation of profits to be derived from the entrepreneurial or managerial efforts of others,” which can trigger obligations to register with the Securities and Exchange Commission, disclose investment risks, and screen investors in various ways. Indeed, some ICOs are transparent attempts to route around securities regulation, while others are outright scams, dressing up old cons with new buzzwords. But there is an interesting and important class of what we might call “legitimate” ICOs. They have business models that don’t fit well with a traditional corporation (e.g. decentralized storage as in Filecoin) and they make a good-faith effort to use the funds for the benefit of and as directed by token-holding participants.
ICOs (both sketchy and legitimate) typically come with a “white paper”—it would be a prospectus in a securities offering, but we’re not allowed to call it that—describing how the new coin will work and why investors should be confident enough in it to participate in the ICO. In the securities context, regulators and class-action lawyers have made a blood sport out of comparing a company’s securities disclosures with its actual conduct. The authors of Coin-Operated Capitalism brilliantly do something similar with ICO white papers. They compare the promises made in the offering documents of the fifty top-grossing ICOs of 2017 with those ICOs’ own smart contracts. An ICO, after all, is an investment specifically in the smart contract.
The results of the survey are sobering. Dozens of ICO smart contracts failed basic investor-protection checks:
- Some allowed the promoters to arbitrarily dilute the shares of ICO investors by issuing more tokens in the future (14 out of 50).
- Some allowed the promoters to immediately cash out their positions following the ICO with no vesting schedule (37 out of 50).
- Some allowed the promoters to modify the smart contract unilaterally—the equivalent of a corporation’s founder revising its charter (39 out of 50).
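To make these failure modes concrete, here is a toy Python model of a token ledger. It is my own illustration, not code drawn from any actual ICO or from the article: an owner-only `mint` function with no supply cap permits arbitrary dilution, and an unrestricted `set_rules` hook is the analogue of unilateral charter modification.

```python
class ToyToken:
    """Toy model of an ICO token ledger (illustrative only)."""

    def __init__(self, owner, initial_supply):
        self.owner = owner
        self.balances = {owner: initial_supply}
        self.rules = {"transfer_fee": 0}

    def mint(self, caller, to, amount):
        # No supply cap and no token-holder vote required:
        # the owner can dilute every other holder at will.
        if caller != self.owner:
            raise PermissionError("only owner may mint")
        self.balances[to] = self.balances.get(to, 0) + amount

    def set_rules(self, caller, **changes):
        # Unilateral modification: the owner rewrites the
        # contract's terms with no investor approval step.
        if caller != self.owner:
            raise PermissionError("only owner may modify")
        self.rules.update(changes)

    def stake(self, holder):
        # Holder's fractional share of the total supply.
        total = sum(self.balances.values())
        return self.balances.get(holder, 0) / total


token = ToyToken("promoter", 1_000_000)
token.balances["investor"] = 100_000   # stands in for an ICO purchase
before = token.stake("investor")
token.mint("promoter", "promoter", 10_000_000)  # post-ICO dilution
after = token.stake("investor")
```

Run in sequence, the investor’s stake collapses by an order of magnitude without the investor taking any action at all—the on-chain equivalent of the dilution risk described above.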
In many cases, the smart contract code directly contradicted promises made in the supporting white papers and other ICO documents. In other cases, the ICO promoters either made no promises about these features in their white papers, or explicitly disclosed them. These cases, while less alarming, are in some ways even more puzzling. The blockchain triumphalism story is a story of code displacing law. An investor can rely on whatever the smart contract says, and emphatically should not rely on anything else. But these are smart contracts that let the promoters take the money and run: who in their right mind would rely on one?
One possibility is that the ICO market is full of dumb-as-rocks money: investors hear blockchain blockchain blockchain and lose all capacity for rational thought. If so, any ICO promoter who doesn’t take the money and run is a holy fool for blockchain.
It could also be that ICO investors are smart but out of their depth. They know how to read a legal document closely, but don’t yet understand that ICO due diligence requires a line-by-line code audit. With time, they may learn how to translate their expertise in corporate governance to smart-contract governance, but they’re not there yet. Coin-Operated Capitalism finds some evidence that this understanding is seeping into the ICO investment community; another way to check would be to run a similar study on more recent ICOs.
Most interesting of all, maybe ICO investors correctly believe that they don’t need to rely on the smart contracts. Even if a promoter has the technical capacity to dilute investors into a trivial stake or modify their rights out of existence, investors are rationally unafraid it would actually happen. Perhaps they expect to win the fraud lawsuit and collect on their judgment if it comes to that. Perhaps they know that the promoters are holy fools who will preach the Gospel of Satoshi even in the face of temptation. Perhaps they see that the projects will come crashing down if the promoters start to slink away and that the promoters themselves are better off staying the course. Perhaps they know where the promoters live and also know some burly men with guns. Or perhaps they think that the shame of forever being that blockchain guy who took the money and ran is enough to deter insider self-dealing.
But, as the authors explain, such arguments “are dangerous for ICO advocates. They show that advocates have already abandoned the high ground of ‘lex cryptographica.’” All of these safeguards are off the blockchain. It’s not that the smart contract protects investors. Instead, the legal system protects them, or the business community protects them, or business norms protect them. These are all things that are part of the glue holding modern capitalism together. The smart contract is just a starting point, an anchor that gives an important but incomplete description of people’s rights and responsibilities. The real work happens in the real world, not in the computations carried out by the smart contract. And if that’s right, then what was the point of the blockchain?
It is hard to dispute the authors’ conclusion that “no one reads smart contracts.” It is also hard to see these ICOs as anything other than open-and-shut fraud. It may not necessarily be securities fraud, but the code itself proves that the projects do not meet the promises being made about them. For all the rhetoric of tyranny and censorship, maybe regulators understand a few things about contracts, money, and human nature that smart contract promoters and investors do not.
Cite as: James Grimmelmann, Extraordinary Popular Delusions and the Madness of ICO Crowdfunding
(November 26, 2018) (reviewing Shaanan Cohney, David Hoffman, Jeremy Sklaroff, & David Wishnick, Coin-Operated Capitalism
, __ Columbia L. Rev.
__ (forthcoming), available at SSRN), https://cyber.jotwell.com/extraordinary-popular-delusions-and-the-madness-of-ico-crowdfunding/
There has been growing academic interest in the topic of decentralised, distributed open ledger technology—better known as the blockchain (see my last Jot). While the literature has been substantial, the copyright implications of the blockchain have not received as much coverage from the research community, perhaps because the use cases have not been as prevalent in the media. Taking the usual definition of a blockchain as an immutable distributed database, it is easy to imagine some potential uses of the technology for copyright, and for the creative industries as a whole. Blockchain technology has been suggested for management of copyright works through registration, enforcement, and licensing, and also as a business model allowing micropayments and use tracking.
Blockchain and Smart Contracts: The Missing Link in Copyright Licensing? by three academics at the Institute for Information Law at the University of Amsterdam, tackles this subject in excellent fashion. The article has the objective of introducing legal audiences to many of the technologies associated with the blockchain. It goes into more specific treatment of various features, such as distributed ledger technology (DLT), digital tokens, and smart contracts, and the potential uses of these for copyright licensing specifically. The article is divided into three parts: an introduction to the technology, an analysis of its potential use for copyright licensing, and a look at possible problems.
The article explains that DLTs are consensus mechanisms which “ensure that new entries can only be added to this distributed database if they are consistent with earlier records.” (P. 4.) Other technical features include the ability to time-stamp transactions, and the potential to verify ownership of a work through the use of “wallets” and other cryptographic tools. This type of technology can be useful for various copyright use cases, such as allocating rights, registering ownership, and keeping track of expiration. Because you could have an immutable and distributed record of ownership and registration, it would be possible for DLTs to become a useful tool for the management of copyright works by collecting societies.
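The immutability and time-stamping properties just described come from hash-chaining: each ledger entry commits to the hash of the one before it, so altering an old record invalidates every later entry. The following minimal Python sketch is my own illustration of that mechanism, using only hashing and none of the distributed-consensus machinery the article discusses:

```python
import hashlib
import json
import time


def make_entry(prev_hash, record, timestamp=None):
    """Append-only ledger entry committing to the previous entry's hash."""
    entry = {
        "prev_hash": prev_hash,
        "record": record,  # e.g. a claim of ownership of a work
        "timestamp": timestamp if timestamp is not None else time.time(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry


def verify_chain(chain):
    """Recompute every hash; tampering with an earlier record
    breaks the link to every subsequent entry."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(
            {k: entry[k] for k in ("prev_hash", "record", "timestamp")},
            sort_keys=True,
        ).encode()
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = entry["hash"]
    return True


chain, prev = [], "0" * 64
for claim in ["Alice registers Work A", "Alice licenses Work A to Bob"]:
    entry = make_entry(prev, claim, timestamp=0)
    chain.append(entry)
    prev = entry["hash"]

ok_before = verify_chain(chain)
chain[0]["record"] = "Mallory registers Work A"  # tamper with history
ok_after = verify_chain(chain)
```

The verification passes before tampering and fails afterward, which is the whole of the “immutable record of ownership” guarantee: not that records cannot be changed, but that changes cannot go undetected.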
Then the article explains the concept of tokenization and the use of digital tokens. Any sort of data can be converted into a digital token, and these can express all sorts of rights. For example, tokenizing rights management information (RMI) could be useful for the expression and management of copyright works through licensing. Further action can be taken through a smart contract, which is software that interacts with the blockchain to execute if-then statements and can also be used for running more complex commands and sub-routines expressing legal concepts. According to the authors, a large number of “dumb transactions” could be taken over by smart contracts, allowing the identification and distribution of royalties, and the payment of such. While the deployment of large-scale smart contract management mechanisms would be very complex, the authors envisage a system by which owners retain control over their own works, and use smart contracts to allocate and distribute rights directly to users by means of these automated transactions.
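The “dumb transactions” the authors have in mind can be sketched as a simple if-then royalty split, here in Python rather than an actual smart-contract language. The party names and shares are invented for illustration; shares are expressed in basis points to keep the arithmetic exact:

```python
def distribute_royalties(payment_cents, splits_bps):
    """If a licensed use is paid for, then pay each rightsholder
    their registered share (in basis points, summing to 10,000)."""
    if payment_cents <= 0:
        raise ValueError("no payment to distribute")
    if sum(splits_bps.values()) != 10_000:
        raise ValueError("shares must sum to 100%")
    payouts = {}
    remaining = payment_cents
    parties = sorted(splits_bps)
    for party in parties[:-1]:
        amount = payment_cents * splits_bps[party] // 10_000
        payouts[party] = amount
        remaining -= amount
    payouts[parties[-1]] = remaining  # last party absorbs rounding
    return payouts


# Hypothetical split for one licensed use: 50% author,
# 30% publisher, 20% performer, on a 10-dollar payment.
payouts = distribute_royalties(
    1000, {"author": 5000, "publisher": 3000, "performer": 2000}
)
```

The logic is trivially automatable, which is exactly the authors’ point: the hard part of copyright licensing is not executing the split but encoding who is owed what, under which jurisdiction’s rules, in the first place.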
The article goes into detail on other potential uses, particularly the use of blockchain in registration practices, the potential for solving the orphan works problem, fair remuneration, and allocating rights through RMIs. This is done with both knowledge of the subject and rigour in the analysis of potential pitfalls.
The article’s best section is its analysis of the many potential issues that may arise in using DLT and smart contracts in copyright. The authors astutely identify the complex nature of copyright norms, and comment that the many variations from one jurisdiction to another may prove to be too complex for a medium that is looking for ease of execution. The authors comment:
In the case of blockchain it is hard, at least as of 2018, to detect high levels of enthusiasm that would lead, in the short term, to the legal recognition/protection of copyright-replacing blockchain-related technological innovations. (P. 22.)
This matches my own observations about this subject. I have found that while the hype is considerable, there are just too many concerns about the potential uses of blockchain technologies in this area. There are valid concerns about the scalability of the technology, but also about the need to deploy complex technological solutions that could be equally implemented with other existing technology. The blockchain, we are told, can allow authors to publish their work with an immutable record of initial ownership, with automated remuneration awarded. But reality has proven difficult to match with this vision. For starters, it may be difficult, if not impossible, to match existing rights, exceptions, and limitations in a manner that can be executed in a smart contract; the authors explain the complexity of international copyright law, with mismatched rights and responsibilities across jurisdictions. Similarly, blockchain systems are expensive, and if the market is currently working well with offline and online systems, then it is difficult to see how a cumbersome, slow, and wasteful solution would be adopted. The authors finish the discussion by noting that there is a familiar feeling to the blockchain discussion, as DRM (digital rights management) was presented a decade or more ago as the enforcement solution that would end copyright infringement. Needless to say, that was not the case.
The question at the heart of any blockchain implementation always remains the same: what is the problem that you are trying to solve, and is the blockchain the appropriate technology to solve it?
Cite as: Andres Guadamuz, Copyright, Smart Contracts, and the Blockchain
(October 29, 2018) (reviewing Balázs Bodó, Daniel Gervais, & João Pedro Quintais, Blockchain and Smart Contracts: The Missing Link in Copyright Licensing?
, Int'l. J. of L. & Info. Tech.
(September 2018)), https://cyber.jotwell.com/copyright-smart-contracts-and-the-blockchain/
Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security
, 107 Cal. L. Rev.
__ (forthcoming 2019), available at SSRN
It’s no secret that the United States and much of the rest of the world are struggling with information and security. The flow of headlines about data breaches, election interference, and misuse of Facebook data show different facets of the problem. Information security professionals often speak in terms of the “CIA Triad”: confidentiality, integrity, and availability. Many recent cybersecurity incidents involve problems of confidentiality, like intellectual property theft or theft of personally identifiable information, or of availability, like distributed denial of service attacks. Many fewer incidents (so far) involve integrity problems—instances in which there is unauthorized alteration of data. One significant example is the Stuxnet attack on Iranian nuclear centrifuges. The attack made some centrifuges spin out of control, but it also involved an integrity problem: the malware reported to the Iranian operators that all was functioning normally, even when it was not. The attack on the integrity of the monitoring systems caused paranoia and a loss of trust in the entire system. That loss of trust is characteristic of integrity attacks and a large part of what makes them so pernicious.
Bobby Chesney and Danielle Citron have posted a masterful foundational piece on a new species of integrity problem that has the potential to take such problems mainstream and, in the process, do great damage to trust in reality itself. In Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, Chesney and Citron explain a range of possible uses for “deep fakes,” a term that originated from imposing celebrities’ faces into porn videos, but that they use to describe “the full range of hyper-realistic digital falsification of images, video, and audio.” (P. 4.)
After explaining the technology that enables the creation of deep fakes, Chesney and Citron spin out a parade of (plausible) horribles resulting from deep fakes. Individual harms could include exploitation and sabotage, such as a fake compromising video of a top draft pick just before a draft. (P. 19.) The equally, if not more, worrisome societal harms from deep fakes include manipulating elections through timely release of damaging videos of a candidate, eroding trust in institutions though compromising videos of their leaders, exacerbating social divisions by releasing videos of police using racial slurs, spurring a public panic with recordings of government officials discussing non-existent disease outbreaks, and jeopardizing national security through videos of U.S. troops perpetrating atrocities. (Pp. 22-27.)
So what can be done? The short answer appears to be not much. The authors conclude that technology for detecting deep fakes won’t save us, or at least won’t save us fast enough. Instead, they “predict,” but don’t necessarily endorse, “the development of a profitable new service: immutable life logs or authentication trails that make it possible for the victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted.” (P. 54.) This possible “fix” to the problem of deep fakes bears more than a passing resemblance to the idea of “going clear” spun out in Dave Eggers’ book The Circle. (Pp. 239-42.) In the novel, politicians begin wearing 24-hour electronic monitoring and streaming devices to build the public’s trust—and then others are pressured to do the same because, as Eggers puts it, “If you aren’t transparent, what are you hiding?” (P. 241.) When the “cure” for our problems comes from dystopian fiction, one has to wonder whether it’s worse than the disease. Moreover, companies offering total life logs would themselves become ripe targets for hacking (including attacks on confidentiality and integrity) given the tremendous value of the totalizing information they would store.
If tech isn’t the answer, what about law? Chesney and Citron are not optimistic about most legal remedies either. They are pessimistic about the ability of federal agencies, like the Federal Trade Commission or Federal Communications Commission, to regulate our way out of the problem. They do identify ways that criminal and civil remedies may be of some help. Victims could sue deep fake creators for torts like defamation and intentional infliction of emotional distress, and deep fake creators might be criminally prosecuted for things like cyberstalking (18 U.S.C. § 2261A) or impersonation crimes under state law. But, as the authors note, legal redress even under such statutes may be hampered by, for example, the inability to identify deep fake creators, or to gain jurisdiction over them. These statutes also do little to redress the societal, as opposed to individualized, harms from deep fakes.
For deep fakes perpetrated by foreign states or other hostile actors, Chesney and Citron are somewhat more optimistic, highlighting the possibility of military and covert actions, for example, to degrade or destroy the capacity of such actors to produce deep fakes. (Pp. 49-50.) They also suggest a way to ensure that economic sanctions are available for “attempts by foreign entities to inject false information into America’s political dialogue,” including attempts using deep fakes. (P. 53.) These tactics might have some benefit in the short term, but sanctions have not yet stemmed efforts at foreign interference in elections. And efforts to disrupt Islamic State propaganda have shown that attempts at digital disruption of adversaries’ capacities may often prompt a long-running battle of digital whack-a-mole.
One of the paper’s most interesting points is its discussion of another tactic that one might think would help address the deep fake problem, namely, public education. Public education is often understood to help inoculate against cybersecurity problems. For example, teaching people to use complex passwords and not to click on suspicious email attachments bolsters cybersecurity. But Chesney and Citron point out a perverse consequence of educating the public about deep fakes. They call it the “liar’s dividend”: “a skeptical public will be primed to doubt the authenticity of real audio and video evidence,” so those caught engaging in bad acts in authentic audio and video recordings will exploit this skepticism to “try to escape accountability for their actions by denouncing authentic video and audio as deep fakes.” (P. 28.)
Although the paper is mostly profoundly disturbing, Chesney and Citron try to end on a positive note by focusing on the content screening and removal policies of platforms like Facebook. They argue that the companies’ terms of service agreements “will be primary battlegrounds in the fight to minimize the harms that deep fakes may cause,” (P. 56) and urge the platforms to practice “technological due process.” (P. 57.) Facebook, they note, “has stated that it will begin tracking fake videos.” (P. 58.) The ending note of optimism is welcome, but rather underexplored in the current draft, leaving readers hoping for more details on what, when, and how much the platforms might be able and willing to do to prevent the many problems the authors highlight. It also raises fundamental questions about the role of private companies in playing at least arguably public functions. Why should this be the companies’ problem to fix? And if the answer is because they’re the only ones who can, then more basically, how did we come to the point where that is the case, and is that an acceptable place to be?
In writing the first extended legal treatment of deep fakes, Chesney and Citron understandably don’t purport to solve every problem they identify. But in a world plagued by failures of imagination that leave the United States reeling from unexpected attacks—Russian election interference being the most salient—there is tremendous benefit to thoughtful diagnosis of the problems deep fakes will cause. Deep fakes are, as Chesney and Citron’s title suggests, a “looming challenge” in search of solutions.
Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For
, 16 Duke L. & Tech. Rev.
18 (2017), available at SSRN
Scholarship on whether and how to regulate algorithmic decision-making has been proliferating. It addresses how to prevent, or at least mitigate, error, bias and discrimination, and unfairness in algorithmic decisions with significant impacts on individuals. In the United States, this conversation largely takes place in a policy vacuum. There is no federal agency for algorithms. There is no algorithmic due process—no notice and opportunity to be heard—not for government decisions, nor for private companies’. There are—as of yet—no required algorithmic impact assessments (though there are some transparency requirements for government use). All we have is a tentative piece of proposed legislation, the FUTURE of AI Act, that would—gasp!—establish a committee to write a report to the Secretary of Commerce.
Europe, however, is a different story. The General Data Protection Regulation (GDPR) went into direct effect on EU Member States on May 25, 2018. It contains a hotly debated provision, Article 22, that may impose a version of due process on algorithmic decisions that have significant effects on individuals. For those looking to understand how the GDPR impacts algorithms, I recommend Lilian Edwards and Michael Veale’s Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For. Edwards and Veale have written the near-comprehensive guide to how EU data protection law might affect algorithmic quality and accountability, beyond individualized due process. For U.S. scholars writing in this area, this article is a must-read.
Discussions of algorithmic accountability in the GDPR have, apart from this piece, largely been limited to the debate over whether there is an individual “right to an explanation” of an algorithmic decision. Article 22 of the GDPR places restrictions on companies that employ algorithms without human intervention to make decisions with significant effects on individuals. Companies can deploy such algorithmic decision-making only under certain circumstances (when necessary for contract or subject to explicit consent), and even then only if they adopt “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.” These “suitable measures” include “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” They also arguably include a right to obtain an explanation of a particular individualized decision. The debate over this right to an explanation centers on the fact that it appears in a Recital (which, in brief, serves as interpretative guidance), and not in the GDPR’s actual text. The latest interpretative document on the GDPR appears to agree with scholars who argue that a right to an explanation does exist, because it is necessary for individuals to contest algorithmic decisions. This suggests that the right to an explanation will be oriented toward individuals, and toward making algorithmic decisions understandable by (or legible to) an individual person.
Edwards and Veale move beyond all of this. They do engage with the debate about the right to an explanation, pointing out both potential loopholes and the limitations of individualized transparency. They helpfully add to the conversation about the kinds of explanations that could be provided: (A) model-centric explanations that disclose, for example, the family of model, input data, performance metrics, and how the model was tested; and (B) subject-centric explanations that disclose, for example, not just counterfactuals (what would I have to do differently to change the decision?) but the characteristics of others similarly classified, and the confidence the system has in a particular individual outcome. But they worry that an individualized right to an explanation would in practice prove to be a “transparency fallacy”—giving a false sense of individual control over complex and far-reaching systems. More valuably, they show that the GDPR contains a far broader toolkit for getting at many of the potential problems with algorithmic decision-making. Edwards and Veale observe that the tools of omnibus data protection law—which the U.S. lacks—can also work in practice to govern algorithms.
First, they point out that the GDPR consists of far more than Article 22 and related transparency rights. This is an important point to make to a U.S. audience, which might otherwise come away from the right to explanation debate believing that in the absence of a right to an explanation, algorithmic decision-making won’t be governed by the GDPR. That conclusion would be wrong. Edwards and Veale note that the GDPR contains other individual rights—such as the right to erasure, and the right to data portability—that will affect data quality and allow individuals to contest their inclusion in profiling systems, including ones that give rise to algorithmic decision-making. (I was surprised, given concerns over algorithmic error, that they did not also discuss the GDPR’s related right to rectification—the right to correct data held on an individual—which has been included in calls for algorithmic due process by U.S. scholars such as Citron & Pasquale and Crawford & Schultz.) These individual rights potentially give individuals control over their data, and provide transparency into profiling systems beyond an overview of how a particular decision was reached. But there remains the question of whether individuals will invoke these rights.
Edwards and Veale show that the GDPR goes beyond individual rights to “provide a societal framework for better privacy practices and design.” For example, the GDPR mandates something like privacy by design (data protection by design and by default), requiring companies to build data protection principles, such as data minimization and purpose specification, into developing technologies. For high-risk processing, including algorithmic decision-making, the GDPR requires companies to perform (non-public) impact assessments. And the GDPR includes a system for formal co-regulation, nudging companies towards codes of conduct and certification mechanisms. All of these provisions will potentially influence design and best practices in algorithmic decision-making. Edwards and Veale argue that these provisions—aimed at building better systems at the outset, and providing ongoing oversight over systems once deployed—are better suited to governing algorithms than a system of individual rights.
Edwards and Veale are not GDPR apologists. They recognize significant limitations in the law, including the lack of a true class-action mechanism, even where the GDPR contemplates third-party actions by NGOs. They acknowledge that data-protection authorities are often woefully underfunded and understaffed. And, like others, they point out mismatches between the GDPR’s language and current technological and social practices—asking, for example, whether behavioral advertising constitutes an algorithmic “decision.” But they helpfully move the conversation about algorithmic accountability away from the “right to an explanation” and towards the broader regulatory toolkit of the GDPR.
Where the piece falters most is in its almost offhand dismissal of individualized transparency. Some form of transparency will be necessary for the regulatory system that they describe to work—a complex co-regulatory system involving impact assessments, codes of conduct, and self-certification. Without public oversight of some kind, that system may be subject to capture, or at least devoid of important feedback from both civil society and public experts. And, as the ongoing conversation about justifiability shows, neither the legitimizing nor the dignitary value of individualized decisional transparency can be dismissed so lightly.
I wish this piece had a different title. In dismissing the value of an individual right to explanation, the title obscures the valuable work Edwards and Veale do in charting other regulatory approaches in the GDPR. However the right to an explanation debate plays out, they show that unlike in the United States, algorithmic decision-making is in the regulatory crosshairs in the EU.
Cite as: Margot Kaminski, The GDPR’s Version of Algorithmic Accountability
(August 16, 2018) (reviewing Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For
, 16 Duke L. & Tech. Rev.
18 (2017), available at SSRN), https://cyber.jotwell.com/the-gdprs-version-of-algorithmic-accountability/
Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).
We have a problem with poverty, which we have converted into a problem with poor people. Policymakers tout technology as a way to make social programs more efficient, but these tools end up encoding the very social problems they were designed to solve, entrenching poverty and the over-policing of the poor. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks uses three core examples—welfare reform software in Indiana, homelessness service unification in Los Angeles, and child abuse prediction in Pennsylvania—and shows that while they vary in how screwed up they are (Indiana terribly, Los Angeles a bit, and Pennsylvania very hard to tell), they all rely on assumptions that leave poor people more exposed to coercive state control. That state control both results from and contributes to the assumption that poor people’s problems are their own fault. The book is a compelling read and a distressing work, mainly because I have little faith that the problems Eubanks so persuasively identifies can be corrected.
Across the country, poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result. Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect their most personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.
As Eubanks points out, the poor are test subjects because they offer “‘low rights environments’ where there are few expectations of political accountability and transparency.” Even those who do not care about poverty should be paying attention, however, because “systems first designed for the poor will eventually be used on everyone.”
Eubanks’ recommendation, even as more punitive measures are being enacted, is for more resources and fewer requirements. Homelessness isn’t a data problem, it’s a carpentry problem, and a universal basic income or universal health insurance would allocate care far better than a gauntlet of automated forms. Eubanks points out that automation, despite its promised efficiencies, has coincided with kicking people off of assistance programs. In 1973, nearly half of people under the poverty line received AFDC (Aid to Families with Dependent Children), but a decade later that was 30 percent (coinciding with the introduction of the computerized Welfare Management System) and now it’s less than 10 percent. Automated management is a tool of plausible deniability, allowing elites to believe that the most worthy of the poor are being taken care of and that the unworthy don’t deserve care, as evidenced by their failure to comply with various requirements to submit information and subject themselves to surveillance.
Eubanks begins with the most obvious disaster: Indiana’s expensive contract with IBM to get rid of most caseworkers and automate medical coverage. Thousands of people were wrongly denied coverage, creating trauma for medically vulnerable people even when the denials were ultimately reversed. Indiana’s failure to create a working centralized system led to some backlash. Eubanks quotes people who suggest that the backlash produced a hybrid human-computer system, which restored almost enough caseworkers to deal with the people who make noise, but not enough for those who can’t. Of course, human caseworkers have their own problems—accounts of implicit and even explicit racial bias abound—but discrimination is easily ported to statistical models, such that states with higher African-American populations have “tougher rules, more stringent work requirements, and higher sanction rates.” And Indiana’s automated experiment disproportionately drove African Americans off the TANF (Temporary Assistance for Needy Families) rolls, perhaps in part because the system treated any error (including those made by the system itself) as deliberate noncompliance, and many people simply gave up.
The Los Angeles homelessness story is different, but not different enough. It provides a useful contrast: a “progressive” use of data and computerization. The idea was to create “coordinated entry,” so that homeless people who contacted any service provider would be connected with the right resources, sorting between the short-term and long-term homeless, who need different services, some of which can be less than helpful if given to the wrong groups. There’s a lot of good there, including the idea of “housing first”: rather than limiting housing only to those who are sober, employed, etc., the aim is to get people housed because of how hard all those other things are without housing. Eubanks profiles a woman for whom coordinated entry was a godsend.
But Eubanks also identifies two core problems: (1) The system itself is under-resourced; all the coordination in the world won’t help when there are only 10 beds for every 100 people in need of them. (2) The information collected is invasive and contributes to the criminalization and pathologization of poor people. The data are kept with minimal security and no protection against police scrutiny, which is particularly significant because, as Eubanks rephrases Anatole France, “so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes.” Homeless people can rarely pay tickets, and so the unpaid fines turn into warrants (turning into days in jail when they can’t afford bail, even though these kinds of nuisance charges are usually dismissed once in front of a judge). People in the database turn into fugitives.
These two problems reinforce each other. Given the low chance of getting help, people are less willing to explain their circumstances, often stories of escalating misfortune and humiliation, to the representative of the state’s computer. The resource crunch also contributes to workers’ felt imperative to find the most deserving and thus to scrutinize every applicant for appropriate levels of dysfunctionality. Too little trauma, and services might be deemed unnecessary. But too much dysfunctionality can also be disqualifying—the housing authority might determine that a client is incapable of living independently. One group of caseworkers Eubanks discusses “counsel their clients to treat the interview at the housing authority like a court proceeding.” They also see vulnerable clients rejected by landlords; Section 8 vouchers to pay for housing are nice, but still require a willing landlord, and the vouchers expire after six months, meaning that a lot of clients just give up. Meanwhile, “[s]ince 1950, more than 13,000 units of low-income housing have been removed from Skid Row, enough for them all.” It’s also worth noting how much discretion remains with humans, despite the appearance of Olympian objectivity in a housing need score: clients are assessed based on self-reports, and they won’t always tell people they haven’t grown to trust about circumstances bearing on their needs, including trauma.
What really mattered to getting resources devoted to addressing homelessness in Los Angeles, Eubanks argues, was rights, not data. Court rulings found that routine police practices—barring sleeping in public and confiscating and destroying the property of homeless people found in areas where they were considered undesirable—were unconstitutional. Once that happened, tent cities sprang up in places visible to people with money and power. Better data helped in identifying what resources were needed where, but tent cities were the driver of reform.
Finally, the experience of child welfare prediction software in Allegheny County, Pennsylvania, has continuities with and divergences from the other two stories. The software is, for now, used only to back up individual caseworkers’ determinations of whether to further investigate child abuse based on a call to the child welfare hotline, though Eubanks already saw caseworkers tweaking their own estimates of risk to match the model’s, an instance of automation bias that ought to alarm us. Some of the problems were statistical: the number of child deaths and near-deaths in the county is thankfully very low, and you can’t build a good model with a handful of cases a year for a population of 1.23 million.
Setting the base-rate problem aside, you can’t actually measure levels of child abuse. You can measure proxies, such as how many calls to CPS (Child Protective Services) are made and how many children CPS removes from a home. As a result, the automated system ends up predicting “decisions made by the community (which families will be reported to the hotline) and by the agency and the family courts (which children will be removed from their families), not which children will be harmed.” Unfortunately, those proxies are precisely the ones we know are infected with persistent racial and class bias, so that bias is baked into the predictions. This is the same problem explained so well in Cathy O’Neil’s Weapons of Math Destruction, a good book to read along with this one.
In Allegheny County itself, “the great majority of [racial] disproportionality in the county’s child welfare services arises from referral bias, not screening bias.” Sometimes this arises from perceptions of neighborhoods being bad, so the threshold for reporting someone from those neighborhoods is lower—which in the US means minority neighborhoods. But the prediction system “focuses all its predictive power and computational might on call screening, the step it can experimentally control, rather than concentrating on referral, the step where racial disproportionality is actually entering the system.” And it gets worse: the model is evaluated for whether it predicts future referrals. “[T]he activity that introduces the most racial bias into the system is the very way the model defines maltreatment.”
In rural or suburban areas, where witnesses are rarer, no one may call the hotline. Families with enough resources use private services for mental health or addiction treatment and thus don’t create a record available to the state (if they don’t directly talk about child abuse in a way that triggers mandatory reporting). Either way, those disproportionately whiter and wealthier families stay out of the system for conduct that would, if they were visible to the system, increase their risk score. The system can provide very useful services, but those services then become part of the public record, helping define a family as at-risk. A child whose parents were investigated by CPS now has a record of interaction with the system that, when she becomes a mother, will increase her risk score if someone reports her. Likewise, use of public services is coded as a risk factor. A quarter of the predictive variables in the model are “direct measures of poverty”—TANF, SSI (Supplemental Security Income), SNAP (Supplemental Nutrition Assistance Program), and county medical assistance. Another quarter of the predictive variables measure “interaction with juvenile probation” and the child welfare agency itself, when “professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting” than poorer families. Nuisance calls by people with grudges are also a real problem.
Even if that didn’t bother you, consider this: of 15,000 abuse reports in 2016, at its current rate of (proxy-defined) accuracy, the system would produce 3,600 incorrect predictions. And the planned model is supposed to be “run on a daily or weekly basis on all babies born in Allegheny County.” This is a significant escalation, not just in extending the tech to everyone, but also in the commitment to prediction. Prediction is about guessing how poor people might behave in the future based on data from their networks, not just about judging their past individual behavior, and thus it can infect entire communities and generations. At the same time, “digital poorhouses,” as Eubanks calls the networks into which data about poor people are fed, are hard to see and hard to understand, making them harder to organize against.
Eubanks also points out that parents can naturally resent outside scrutiny and often feel that once the child welfare system is involved the standards keep getting raised on them, no matter what they try to do. And caseworkers interpret resistance and resentment as danger signs. While these reactions aren’t directly dependent on the technology, they are human behaviors that change what the technology does in the world.
In theory, big data could increase transparency and decrease discrimination where that comes from the humans in the system. Unfortunately, that doesn’t seem to be what’s happening. Among other things, the purported “transparency” of algorithms, even putting trade secrets aside, is very much a transparency for the elite who can figure the code out, not for ordinary participants in democratic governance, who basically have to take experts’ explanations on faith.
In addition, Eubanks finds:
the philosophy that sees human beings as unknowable black boxes and machines as transparent…deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision-making is opaque and inaccessible is an admission that we have abandoned a social commitment to try to understand each other. Poor and working-class people in Allegheny County want and deserve more: a recognition of their humanity, an understanding of their context, and the potential for connection and community.
This sounds great, but I wonder if it is fully convincing, in the fallen world in which we live. On the other hand, given that there are other interventions that wouldn’t sort the “worthy” from the “unworthy” in the ways that current underfunded services are forced to do, it is certainly persuasive to argue that we shouldn’t try to move from biased caseworkers to biased algorithms.
Along with non-technical solutions, Eubanks offers some ethics for designers, focusing on whether the tools they make increase the self-determination and agency of the poor, and whether they’d be tolerated if targeted at the non-poor. I think she’s overly optimistic about the latter criterion, at least as applied to private corporate targeting, which we barely resist. The example of TSA airport screening is also depressing. Perhaps I’d suggest the modification that, if we expect wealthier people to buy their way out of the system, as they can with TSA PreCheck, CLEAR, and Global Entry (at least if they’re not Muslim), then there is a problem with the system. Informed consent and designing with histories of oppression in mind, rather than assuming that equity and good intentions are the default baselines, are central to her vision of good technological design.
Like the far more caustic Evgeny Morozov, Eubanks contends that we have turned to technology to solve human problems in ways that are both corrupting and self-defeating. And Eubanks doesn’t focus the blame on Silicon Valley. The call for automation is coming from inside the polity. In fact, while IBM comes in for substantial criticism for overpromising in the Indiana example, the real drivers in Eubanks’ story are the policy wonks who are either trying to shrink the system until it can be drowned in the bathtub (Indiana), or sincerely trying to build something helpful while resources are continually being drained from the system (Los Angeles and Pennsylvania).
Ultimately, Eubanks argues, the problem is that we’re in denial about poverty, an experience that will happen to the majority of Americans for at least a year between the ages of 20 and 65, while two-thirds of us will use a means-tested public benefit such as TANF, SNAP, Medicaid, or SSI. But we persist in pretending that poverty is “a puzzling aberration that happens only to a tiny minority of pathological people.” We pass a suffering man on the street and fail to ask him if he needs help. We don’t keep our tormented child in an isolated place, as they do in Omelas. Instead of walking away, we walk by—but we don’t meet each other’s eyes as we do so. This denial is expensive in so many ways—morally, monetarily, and even physically, as we build entire highways, suburbs, private schools, and prisons so that richer people don’t have to share in the lives of poorer people. It rots politics: “people who cannot meet each other’s eyes will find it very difficult to collectively govern.” Eubanks asks us to admit that, as Dan Kahan and his colleagues have repeatedly demonstrated in work on cultural cognition, our ideological problems won’t be solved with data, no matter how well formed the algorithm.