Content moderation is a high-stakes, high-volume game of tradeoffs. Platforms face difficult choices about how aggressively to enforce their policies. Too light a touch and they provide a home for pornographers, terrorists, harassers, infringers, and insurrectionists. Too heavy a hand and they stifle political discussion and give innocent users the boot. Little wonder that platforms have sometimes been eager to take any help they can get, even from their competitors.
evelyn douek’s The Rise of Content Cartels is a careful and thoughtful exploration of a difficult tradeoff in content-moderation policy: centralized versus distributed moderation. The major platforms have been quietly collaborating on a variety of moderation initiatives to develop consistent policies, coordinated responses, and shared databases of prohibited content. Sometimes they connect through nonprofit facilitators and clearinghouses, but increasingly they work directly with each other. douek’s essay offers an accessible description of the trend and an even-handed evaluation of both its promise and its perils.
Take the problem of online distribution of child sexual abuse materials (CSAM). There is a broad consensus behind the laws criminalizing the distribution of CSAM images, such images have no redeeming societal value, and image-hashing technology is quite good at flagging only uploads that are close matches for ones in a reference database. Under these circumstances, it would be wasteful for each service to maintain its own database of CSAM hashes. Instead, the National Center for Missing and Exploited Children (NCMEC) maintains a shared database, which is widely used by content platforms to check uploads.
douek then traces the spread of the NCMEC model to other types of content. The next domino to fall was “terrorist” speech: not always so clearly illegal and not always so obviously low-value. The Global Internet Forum to Counter Terrorism helps the platforms keep beheading videos from being uploaded. There have been similar initiatives around election interference, foreign influence campaigns, and more. I would add that technology companies have long collaborated with each other on security and anti-spam responses (often with law enforcement in the room as well) in ways that effectively amount to a joint decision on what content can and cannot transit their systems.
When there are so few platforms, however, content collaboration can become content cartelization. The benefits of cartelization on content moderation are many. Where there is an existing consensus on which content is acceptable, policy enforcement is more effective because platforms can pool their work. Even where there is not, platforms can learn from each other by sharing best practices. Some coordinated malicious activity is hard to detect when each platform holds only one piece of the puzzle; botnet takedowns now involve industry partners in dozens of countries. And to be effective, bans on truly bad actors need to be enforced everywhere, or they will simply migrate to the most permissive platform.
But douek smartly explains why content cartels are also so unsettling. They make it even harder to assess responsibility for any given moderation decision, both by obscuring who actually made it and by slathering the whole thing in a “false patina of legitimacy.” They amplify the existing “power of the powerful” by removing one of the classic safety valves for private platform speech restrictions: alternative avenues for the speaker’s messages. And, much like economic cartels, they present decisions made in smoky back rooms as though they were the “natural” outcomes of “market” forces.
Particularly incisive is douek’s explanation of how coordinated content moderation stands in sharp contrast to the rhetoric of competition these companies normally adopt. Even the name itself – content cartels – points out the way in which this coordinated behavior raises questions of antitrust law and policy. To this list might be added the danger that content-moderation creep will turn into surveillance creep as platforms decide that, to make decisions about their own users’ posts, they need access to information about those users’ activities across the Internet.
The Rise of Content Cartels resists the temptation to cram platform content moderation into a strictly “private” or strictly “public” box. Like douek’s forthcoming Governing Online Speech: From ‘Posts-As-Trumps’ to Proportionality and Probability, it is thoughtful about the relationship between power and legitimacy, and broad-minded about developing new hybrid models to account for the distinctive character of our new speech and governance institutions.
It is an exciting time for content-moderation scholarship. Articles from just five years ago read as dated and janky compared with the outstanding descriptive and normative work now being published. douek joins scholars like Chinmayi Arun, Hannah Bloch-Wehba, Joan Donovan, Casey Fiesler, Daphne Keller, Kate Klonick, Renee DiResta, Sarah T. Roberts, and Jillian C. York in doing important work in this urgently important field. To borrow a phrase, make sure to like and subscribe.
In her General Principles of the European Convention on Human Rights, Janneke Gerards demonstrates how one of Europe’s two highest courts offers ‘practical and effective’ protection to a number of human rights. These rights are at stake when governments or other big players use data-driven measures to fight, for example, international terrorism, a global pandemic, or social security fraud. For those who wish to understand how the General Data Protection Regulation (GDPR) is grounded in European constitutional law, this book is an excellent point of departure, because the GDPR explicitly aims to protect the fundamental rights and freedoms of natural persons. Far from ‘merely’ protecting the right to privacy of data subjects, the GDPR does not mention privacy at all; it is pertinent to all human rights, including non-discrimination, fair trial, the presumption of innocence, privacy, and freedom of expression.
Those not versed in European law may frown upon calling the European Convention on Human Rights (ECHR, “the Convention”) European constitutional law, as they may conflate ‘Europe’ with the European Union (EU). The EU has 27 Member States, all of which are Contracting Parties to the Convention, and at the constitutional level the EU is grounded in the various Treaties of the EU and in the Charter of Fundamental Rights of the EU (CFREU, “the Charter”). The Convention is part of a larger European jurisdiction, namely that of the Council of Europe (CoE), which has 47 Contracting Parties. The CoE is an international organisation, whereas the EU is a supranational organisation (though not a federal state). To properly understand both the GDPR and the Charter, however, one must first immerse oneself in the ‘logic’ of the Convention, because the Charter stipulates that the meaning and scope of Charter rights that overlap with Convention rights are at least the same as those of Convention rights. The reader who finds all this complex and cumbersome may want to consider that the overlap often enhances the protection of fundamental rights and freedoms, similar to how the interrelated systems of federal and state jurisdiction in the US may increase access to justice. It is for good reason that Montesquieu observed that the complexity of the law actually protects against arbitrary rule, providing an important countervailing power against the unilateral power of a smooth, efficient and streamlined administration of ‘justice’ (The Spirit of the Laws, VI, 2).
(For those interested in exploring the complexities of the two European jurisdictions to better understand the ‘constitutional pluralism’ that defines European law, I recommend Steven Greer, Janneke Gerards, and Rose Slowe’s Human Rights in the Council of Europe and the European Union: Achievements, Trends and Challenges (New York: Cambridge University Press, 2018).)
On 8 April 2014, the Court of Justice of the European Union (CJEU) invalidated the 2006 EU Data Retention Directive (DRD), which required Member States (MS) to impose an obligation on telecom providers to retain metadata and to enact legislation to allow access to such data by criminal justice authorities (Case C-293/12, Digital Rights Ireland). The CJEU’s invalidation of an entire legislative instrument highlights the significance of Janneke Gerards’ work on the Convention. Let me briefly explain: (1) the CJEU invalidated the DRD because it violated the fundamental rights to privacy and data protection of the Charter, (2) this violation was due to the fact that the DRD was deemed disproportionate in relation to its legitimate goal of fighting terrorism, (3) the reason being that the DRD enabled infringements of privacy and data protection that were not strictly necessary to achieve this goal and therefore not justified, and (4) this criterion of necessity, framed in terms of proportionality, builds on the case law of the European Court of Human Rights (ECtHR, ‘the Court’), which decides potential violations of the Convention.
The invalidation of the DRD obviously demonstrates that those who wish to situate the remit of the General Data Protection Regulation (GDPR) should study the EU’s Charter, because the fundamental right to data protection is one of the Charter rights. It also marks out that, where the right to data protection overlaps with the Convention’s right to privacy, the case law of the (other) Court must be taken into account. Thus, precisely because the fundamental right to data protection is part of European constitutional law, those interested in legal protection against data-driven systems should probe the salience of the legal framework for the constitutional protection of human rights in Europe.
In General Principles, Gerards explains in simple and lucid prose how the Convention operates, while nevertheless respecting the complexity of an institutional system that provides human rights protection in 47 national jurisdictions, including Russia and Turkey. She introduces the Convention as ‘a living instrument’ (see section 3.3), which flies in the face of the cumbersome discussions in the US on ‘plain text’ meaning, ‘Framers’ intention,’ and ‘Originalism’. Its meaning is decided by the Court in Strasbourg on a case-by-case basis. The Court squarely faces the need for interpretation that is inherent in text-based law (chapter 4), while taking into account that deciding the meaning of the text decides the level of protection across all 47 Contracting States. The meaning of the Convention is not immutable but adaptive. That is why it is capable of offering what the Court calls ‘practical and effective protection’ (chapter 1). Unlike what some blockchain aficionados seem to believe, immutability does not necessarily offer better protection, especially not in real life.
Gerards discusses the constitutional nature of the Convention and the Court’s emphasis on an interpretation of Convention rights as rights that should be both ‘practical and effective’, while taking into account that the role of the Court is subsidiary to that of the national courts, which are the primary caretakers. This results in the double role of the Court: (1) supervising compliance by the Contracting States on a case-by-case basis, including redress in case of a violation, and (2) providing an interpretation of Convention rights that clarifies the minimum level of protection in all Contracting States.
To mediate these twin objectives the Court has developed an approach that incorporates three steps: (1) the Court decides whether the case falls within the scope of the allegedly violated right, (2) the Court decides whether the right has been infringed, and (3) the Court decides whether the infringement was justified. Though infringements can be justified if specific explicit or implied conditions are fulfilled, some rights are absolute in the sense that if the right is infringed it is necessarily violated, meaning that no justification is possible (notably in the case of torture and degrading or inhuman treatment). Gerards explains how the first and the second step interact as the facts of the case are qualified in light of the applicable Convention text while, in turn, the applicability and the meaning of the Convention text are decided in light of the facts of the case at hand. She understands this as a ‘reflective equilibrium’ where facts and norms, the concrete and the abstract, are – in my own words – mutually constitutive.
General Principles proceeds to a detailed discussion of the principles that determine the Court’s ‘evolutive interpretation’ (chapter 3), which takes into account, on the one hand, the changing understanding of the meaning of Convention rights (the first step mentioned above) and, on the other hand, the confrontation with new cases that cannot be reduced to prior cases (highlighting the second step). Note that Gerards’ structured conceptual approach is firmly anchored in the case law of the Court, providing concrete examples of the reasoning of the Court based on succinct and lucid accounts of what is at stake in the relevant case law. This is also how she discusses arduous issues such as positive and negative obligations for states (chapter 5) as well as the difference between vertical and horizontal effect (both direct and indirect) (chapter 6), explaining convoluted legal framings without ignoring their complexity.
Finally, Gerards explains in rich detail the third step indicated above, that of justification, anchored in an in-depth and crystal-clear analysis of the Court’s case law. Justification of a restriction of human rights is only possible if three cumulative conditions are fulfilled: the infringing measures are lawful (chapter 8), have a legitimate aim (chapter 9) and are necessary in a democratic society (chapter 10). Lawfulness is interpreted by the Court as legality, not as legalism; it not only requires a basis in written or unwritten law, but also demands both accessibility and foreseeability, while to qualify as lawful the legal basis must incorporate sufficient safeguards to mitigate the impact on relevant human rights (including procedural due care). As to necessity, the Court checks the proportionality between measures and legitimate aim, performing a fair balancing test, taking into account the scope and severity of the infringements in relation to the importance of the aim at stake.
This is the necessity criterion that also plays a crucial role in infringements of the fundamental right to data protection. The Charter requires necessity in a way similar to the Convention, and even though ‘necessity’ figures in the GDPR’s own principles and its requirement of a legal basis, infringements are often tested against the necessity principle of European constitutional law itself. When the CJEU invalidated the DRD, it explicitly invoked the meaning of ‘necessity’ in this sense.
This book is not only relevant as a textbook for students of human rights in Europe. It also offers a detailed account of why and how individual rights and freedoms matter, what difference they can make, and which complex balancing acts must be performed to ensure legal certainty as well as justice. For those seeking protection against algorithmic decision-making and data-driven surveillance General Principles is a key resource. The clarity of explanation highlights the difficult dynamics between public and individual interests, between national and supranational jurisdictions and between the freedom of states to act in the general interest and the freedom from unlawful interference for individual citizens, acknowledging that such individual freedom is also a public good. Whereas human rights can be used to protect the interests of those already in power by ignoring the rights and freedoms of marginalised communities, the Court’s requirement that rights are ‘practical and effective’ rather than formal or efficient gives clear direction to an interpretation strategy that is firmly grounded in a substantive and procedural conception of the rule of law. I guess this comes closest to Jeremy Waldron’s ‘The Rule of Law and the Importance of Procedure’, 50 Nomos 2011, 3-31, underlining the need for institutional checks and balances without which rule of law checklists offer little to no protection when push comes to shove.
Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available on SSRN
Between 2018 and 2020, nine proposals (or discussion drafts) for comprehensive data privacy legislation were introduced in the U.S. Congress. 28 states introduced 42 comprehensive privacy bills during that time. This is on top of the European Union’s General Data Protection Regulation, which took effect in 2018, and the California Consumer Privacy Act, which took effect in 2020. Clearly, U.S. policymakers are eager to be active on privacy.
Are these privacy laws any good? Put differently, are policymakers drafting, debating, and enacting the kind of privacy laws we need to address the problems of informational capitalism? In Democratic Data: A Relational Theory for Data Governance, Salome Viljoen suggests that the answer is no.
Viljoen’s argument is simple. The information industry’s data collection practices are “primarily aimed at deriving population-level insights from data subjects” that are then applied to individuals who share those characteristics through design nudges, behavioral advertising, and political microtargeting, among other uses. (P. 3.) But privacy laws, both in their traditional form and in these recent proposals, “attempt to reduce legal interests in information to individualist claims subject to individualistic remedies that are structurally incapable of representing this fundamental population-level purpose of data protection.” (P. 3.)
Viljoen could not be more right, both in her diagnosis of current proposals and in her account of their structural mismatch with the privacy, justice, and dignitary interests undermined by data-driven business models that traffic in the commodification of human experience.
Viljoen first notes that privacy has traditionally been legally conceptualized as an individual right. The Fair Information Practice Principles (FIPPs) and a long series of federal sectoral privacy laws and state statutes grant privacy rights to consumers qua individuals. This new crop of privacy laws is no different. They guarantee rights of access, correction, deletion, and portability, among others. But all of these rights are for the individual consumer. Notice-and-choice, the framework for much of U.S. privacy law, operated the same way: Its consent paradigm centered the right to choose or consent in the individual internet user.
This also tracks the scholarly literature on privacy since 1890. Privacy has long been understood as either a negative right (freedom from) or a positive right (freedom to), but almost always a right located in the individual. Modern privacy scholarship has moved away from this model, recognizing privacy’s social value, its importance in social interaction and image management, and the connection between privacy and social trust. That terrain is well worn; its inclusion here speaks both to Viljoen’s in-depth knowledge of the literature in her field and to law review editors’ adherence to a model of overlong “background” sections.
Viljoen’s contribution is not so much her descriptive claim that privacy law has traditionally conceptualized privacy in individualistic terms as where she goes from there.
Her notion of “data governance’s sociality problem” is compelling. (P. 23.) Viljoen argues that the relationships between individuals and the information industry can be mapped along two axes: vertical and horizontal. (Pp. 25-27.) The vertical axis is the relationship between us and data collectors. When we agree to Instagram’s terms and conditions and upload a photo of our new dog, we are creating a vertical relationship with Instagram and its parent company, Facebook. The terms of that relationship “structure the process whereby data subjects exchange data about themselves for the digital services the data collector provides.”
“Horizontal data relations” are those relations between and among us, data subjects all, who share relevant characteristics. Those who “match” on OKCupid are in a horizontal data relationship with each other. A gay man who “likes” pictures of Corgis is in a horizontal data relationship with those targeted for advertisements based on those latent characteristics. As is a person arrested because a facial recognition tool identified him as a suspect socially connected with the person whose voluntarily uploaded picture of the same tattoo was used to train the facial recognition AI. (P. 26.)
Viljoen’s second important contribution flows from the first. She offers a normative diagnosis for why horizontal relationships matter for data governance law. That is, data extraction’s harms stem not only from concerns over my privacy or our visceral reaction to creepy, ubiquitous surveillance. By merely using technologies that track and extract data from us, we become unwitting accomplices in the process through which industry translates our behavior into designs, technologies, and patterns that shape and manipulate everyone else. Abetting this system is a precondition of participation in the information age.
For Viljoen, then, the information economy’s core evil is that it conscripts us all in a project of mass subordination that is (not so incidentally) making a few people very, very rich.
This may be Viljoen’s central contribution, and it has already changed my understanding of privacy. Focusing on the individual elides the population-level harms Viljoen highlights. Data flows classify and categorize. Data helps industry develop models to predict and change behavior. And it is precisely this connection between data and the identification of relationships between groups of people that creates economic value. We are deeply enmeshed in perpetuating a vicious cycle that subordinates data subjects while enriching Big Tech. There is no way an individual rights-based regime that gives one person some measure of control over their data can ever address this problem.
And that is, at least in part, where current proposals for comprehensive privacy laws go awry. Although there are some differences at the margins, most proposals are binary: they guarantee individual rights of control and rely on internal compliance structures to manage data collection and use. The rights model, Viljoen shows, inadequately addresses the privacy harms of informational capitalism. So, for that matter, does the compliance model. But that conversation is for another day.
Rebecca Crootof & BJ Ard, Structuring Techlaw, __ Harv. J.L. & Tech. __ (forthcoming 2020), available at SSRN
A decade ago, I mused about the implications and limits of what was then called “cyberlaw.” By that time, scholars had spent roughly 15 years experiencing the internet and speculating that a new jurisprudential era had dawned in its wake. The dialogue between the speculators and their critics was famously encapsulated in a pair of journal articles. Lawrence Lessig celebrated the transformative potential of what we used to call “cyberspace” for law. Judge Frank Easterbrook insisted on the continuing utility of existing law in solving cyber-problems. The latter’s pejorative characterization of cyberlaw as “law of the horse” has endured as a metonym for the idea that law ought not to be tailored too specifically to social problems prompted by some exotic new device.
It turns out, I argued then, that Lessig and Easterbrook and others in their respective camps were arguing on the wrong ground. Cyberspace and cyberlaw pointed the way to an integrative jurisprudential project, in which novel technologies and their uses motivate a larger rethinking of the roles and purposes of law, rather than a jurisprudence of exception (Lessig) or a jurisprudence of tradition (Easterbrook). But it has taken some time for elements of an integrative project to emerge. Rebecca Crootof and BJ Ard, in Structuring Techlaw, are among those who are now building in that direction and away from scholars’ efforts to justify legal exceptionalism in response to various metaphorical horses – among them algorithmic decision making, data analytics, robotics, autonomous vehicles, 3D printing, recombinant DNA, genome editing, and synthetic biology. Their story is not, however, primarily one of power, ideology, markets, social norms, or technological affordances. Julie Cohen, among others, has taken that approach. Structuring Techlaw is resolutely and therefore usefully positivist. The law and legal methods still matter, as such. The law itself can be adapted, reformed, and perhaps transformed.
In that spirit, Structuring Techlaw offers a framework for organizing legal analysis (Pp. 8-9) rather than a solution, so it is (admirably, in my opinion) primarily descriptive rather than normative. Like Leo Marx’s classic The Machine in the Garden, which explores the industrial interruption of the pastoral in American literature, it clarifies the situation. The article is a field guide to problems in technology and law, rather than a theory or a jurisprudential intervention. As a field guide, few of its details will be new to scholars, lawyers, or even students familiar with the technology policy debates of the last 25 years. But the paper collects and organizes those details in a thoughtful, clear way, with priority given to traditional legal forms and to illustrations drawn from a wide variety of technology-animated social problems. Historical problems get attention, including those that long pre-dated the internet, along with contemporary challenges. The resulting framework is for use by scholars, policy makers, and other decision makers confronted with what Crootof and Ard characterize as a critical problem common to all types of new technology: legal uncertainty in the application and design of relevant rules.
Their broad view requires a broad beginning. “Technology” means devices that extend human capabilities. (P. 3 n.1.) Structuring Techlaw offers the neologism “techlaw” to distinguish solutions to larger-scale social problems created by technology in society from technology-enabled solutions to specific problems in the provision of professional services, or so-called legaltech or lawtech. (Id.)
Techlaw exposes legal uncertainties of three types. The framework consists of those three types, in layers, with some nuances, details, and illustrations added for good measure, together with likely strategies for dealing with each one. Each type of uncertainty is described in terms of familiar debates. Some of those concern the welfare effects of precautionary and permissive regulatory approaches. Some concern choices among updating existing law, imagining new law, and reconceptualizing the legal regime in the context of institutional choices. The full framework is laid out in a single graphic. (P. 11.)
Layer one consists of application uncertainties, in which existing legal rules are deemed to be either too narrow (gaps) or too broad (overlaps) as responses to technology-fostered social problems. Regular or traditional tools of legal interpretation may be used effectively here.
Layer two consists of normative uncertainties, in which technology-fostered problems expose larger concerns about the purposes and functions of the laws in question. Existing law may be revealed to be underinclusive or overinclusive relative to its original aims. This is the space for normative realignment of the law.
Layer three consists of institutional uncertainties, in which the roles and responsibilities of different legal actors are called into question based on concerns about legitimacy, authority, and competence. Are technology-fostered problems best solved by updates supplied by legislatures? By administrative agencies? By courts?
This is not so much a functioning method for reaching a judgment in a particular instance as a tool for understanding. Crootof and Ard round out their description with examples at multiple points along the way, but they don’t seek to apply the framework fully either to a real historical case or to an imaginary new one. Instead, the framework is best understood as they describe it (P. 47 n.187): as an idealized template by which observers and participants alike can begin to discern and respond to common patterns in law-making, rather than deal with each technology as a shiny new object or, worse, as a distracting but entertaining squirrel. The framework may produce an integrated jurisprudence of technology and law as it is used over time, across multiple applications.
Will it? If the challenge of resolving uncertainties in legal meaning evokes H.L.A. Hart’s famous “No Vehicles in the Park” illustration of interpretive flexibilities in the law – a positivist pole star – that is no accident. Structuring Techlaw is replete with references to Hart (P. 16 n.35) and to Hartian interpretations and extensions. (Pp. 69-70.) But one needs a way to get from what this rule means (per Hart) to how this rule is part of a pattern of multiple rules, some for equivalent instances and some for different ones. Crootof and Ard manage the transition to a pattern of multiple rules via an overview of the critical role of analogical reasoning and framing effects in legal interpretation. (Pp. 52-62.) That move is surely the right one; analogies help us scale from case to case, from case to rule, and from rule to system. But its success depends on any number of empirical claims about how legal reasoning actually works in practice, such as those summarized by Dan Hunter, that are beyond the scope of this work.
Moreover, as Crootof and Ard acknowledge, fully specifying the framework and building the resulting field of law requires exploring a standard set of questions regarding comparative institutional advantage. They don’t do that in Structuring Techlaw. Tantalizingly, they promise that exploration in an additional paper. (P.9 n.19.)
Even more tantalizing are glimpses of jurisprudence yet to come. I wondered a bit about Structuring Techlaw’s emphasis on legal uncertainty. The return to positivism is an important one, but some scholars today place significant normative weight on humans and humanity in legal systems, precisely because of the lack of predictability, certainty, and consistency that human imaginations entail in practice. Some scholars argue that contestability of legal meaning, an attribute akin to uncertainty, is both essential to the rule of law and threatened by some novel technologies. Crootof and Ard hint that there is more in store on this point. Understanding humans in technological systems, or “loops,” is the promised subject matter of an “aspirational” manuscript. (P. 12 n.26.)
I can’t wait.
Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming 2020), available at SSRN
In recent years, states have begun accusing other states of cyberattacks with some frequency. Just in the past few months, Canada, the United Kingdom, and the United States have warned of Russian intelligence services targeting COVID-19 vaccine development, the United States issued an alert about North Korea robbing banks via remote access, and U.S. prosecutors indicted hackers linked to China’s Ministry of State Security for stealing intellectual property.
The flurry of cyberattack attributions raises questions about what effects (if any) they have and what effects the attributors intend them to have. In their forthcoming article “Beyond Naming and Shaming: Accusations and International Law in Cybersecurity,” Martha Finnemore and Duncan Hollis offer a nuanced set of answers focused, as the title suggests, on moving beyond the idea that the attributions are just intended to name and shame states.
Government officials have repeatedly said that public attributions of cyberattacks to other states are intended to name and shame the perpetrator states and to cause them to change their behavior. The problem is that this strategy hasn’t seemed to work very well, prompting criticism from academics. Finnemore and Hollis helpfully offer an explanation for why naming and shaming is more difficult in the cybersecurity sphere than in other areas of international law and international relations. They argue that existing literature on naming and shaming includes an implicit premise: that there is a preexisting norm against which compliance and deviation can be measured. (P. 27.) When there are existing norms or legal prohibitions, like the prohibitions on torture and genocide, accused states “do not contest [the] norms,” but “[i]nstead, . . . deny what the [accuser] says happened or offer a different interpretation or application of the norm than that proffered by the accuser.” (P. 27.) But in the cybersecurity realm, “the norms (and international law) governing online behavior are not always clear and well-entrenched,” particularly across different blocs of countries, and so enforcing norms via accusations is “tricky.” (P. 27.)
But that doesn’t mean cyberattack attributions lack value. Finnemore and Hollis contribute to a growing academic literature about other functions public attributions can serve. The most interesting of these is attributions’ potential constitutive role in international norms and international law. Finnemore and Hollis argue that accusations of state responsibility for a cyberattack can
serve as an opening bid, aimed at a particular community, indicating not just the accuser’s disapproval of the cited operation, but often, too, its proposal (perhaps implicit) that all such conduct should be barred, i.e., that there should be a norm against such conduct. Accusations may thus lay out the contours of ‘bad behavior’ along with an argument about why, exactly, the behavior is undesirable. Other actors may then respond to the accusation. They may accept some of it; they may accept all of it; they may accept it in some situations but not others; or, they may reject it entirely. It is these interactions between the accuser, the accused, and third party audiences that—over time—may result in the creation of a new norm (or its failure). (Pp. 14-15 (footnote omitted).)
The role of cyberattack attributions in setting the rules of the road in cyberspace need not stop with international norms. Rather, public attributions can also contribute to establishing international law. Finnemore and Hollis argue, “Today’s accusations may serve as early evidence of a ‘usage’—that is, a habitual practice followed without any sense of legal obligation,” but “[i]f such accusations persist and spread over time, states may come to assume that these accusations are evidence of opinio juris, delineating which acts are either appropriate or wrongful as a matter of international law.” (Pp. 16-17.)
Once one accepts the argument that public attributions play a role in creating international norms and law to govern state actions in cyberspace, important questions follow, including how such attributions should be made. I have argued that states should establish an international law rule requiring governments that engage in public attributions of cyberattacks to other states to provide sufficient evidence to enable crosschecking or corroboration of their attributions. Such a rule would help to ensure that attributions are accurate and credible and would thereby insulate the process of setting rules of the road for cyberspace from being skewed or tainted by accidentally or willfully false attributions that give an inaccurate picture of state practice and opinio juris. Other ongoing scholarly and policy debates center on determining the appropriate roles that governments, private companies, international entities, and academic and other experts should play in accusations against states.
One could quibble with parts of Finnemore and Hollis’s article, perhaps especially their argument for changing terminology. The authors acknowledge that “[s]tates and scholars” generally call the process of assigning responsibility for a cyberattack “attribution” (P. 8), but they argue instead for using “accusation” (P. 7), reducing “attribution” to a component of an accusation and limiting it to “the process of associating what happened with a particular actor or territory.” (P. 6.) Although it’s true that “attribution” can have different meanings (P. 8), Finnemore and Hollis are fighting an uphill battle given the entrenched use of “attribution” and a working practice of specifying which kind or aspect of attribution is at issue in a particular context. Finnemore and Hollis’s term “accusation” also presents its own difficulties. For example, they argue, “Accusations can occur without attribution (i.e., when accusers say ‘we do not know who did this, but it happened, and it was bad.’)” (P. 8.) But in common parlance, accusations require an object—who is accused? An “accusation” without an object doesn’t really accuse anyone or anything.
Whatever one terms the phenomenon of states assigning responsibility for carrying out cyberattacks, Finnemore and Hollis rightly flag its importance to establishing the international rules governing state behavior in cyberspace. Moving toward a more sophisticated understanding of the roles that accusations or attributions of cyberattacks can play is a welcome contribution to an emerging academic field and important area of international relations.
Cite as: Kristen Eichensehr, Cyberattacks, Accusations, and the Making of International Law (December 2, 2020) (reviewing Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming, 2020), available at SSRN), https://cyber.jotwell.com/cyberattacks-accusations-and-the-making-of-international-law/
What distinguishes data protection (that is, legitimate privacy law) from data protectionism (arguably a barrier to trade)? Whether a country can use its domestic privacy laws to either de jure or de facto require a company to keep citizens’ personal data within that country’s borders is a significant point of international contention right now, especially between the United States and the European Union. In July, the Court of Justice of the EU invalidated (again) the sui generis mechanism for cross-border personal data transfers between the European Union and the United States (the “Privacy Shield”). The Court’s “Schrems II” decision makes it all the more likely that the United States will attempt to revisit the matter through strategic free trade agreement negotiations—and makes Svetlana Yakovleva’s Privacy Protection(ism): The Latest Wave of Trade Constraints on Regulatory Autonomy all the more timely and important.
Yakovleva observes that in recent free trade agreement negotiations, including at the World Trade Organization (WTO), the United States has pushed to characterize restraints on cross-border data flows as a protectionist trade measure, while the European Union, by contrast, has largely advocated for national regulatory autonomy. The outcome of this conflict over purported “digital protectionism” will have practical ramifications for transnational companies that regularly deal in cross-border data flows. It will also have serious theoretical consequences for ongoing and familiar discussions of how transnational law might bridge—or override—deep domestic regulatory divides. Yakovleva nimbly weaves together a history of the term “protectionism,” Foucauldian discourse theory, and the minute details of recent free trade agreement negotiations to provide an authoritative account of what exactly is at stake. Her big contribution is to tell us all to watch our language: one person’s “digital protectionism” can be another’s “fundamental right.”
Yakovleva opens with a broad discussion of the history of the term “protectionism” as it has been used in free trade policy and law, noting the term’s changing meanings at different times and in different institutions. She starts here in order to make the central point that meanings are not static; they’re very much constructed, contested, and chosen. The notion of “free trade” was first developed in direct contrast to the once-dominant theory of mercantilism, a strict form of protectionism which counseled “restricting imports, promoting domestic industries, and maintaining self-sufficiency from other countries.” (P. 436.) By contrast, neoclassical free trade theory rested on the concept of comparative advantage: that barriers to trade inefficiently prevent countries from increasing domestic welfare by exchanging goods they can each more efficiently produce.
This history would appear to place protectionism strongly in opposition to fundamental principles of free trade. However, early understandings of protectionism were narrow, focusing on tariffs or quotas on imports, and closely associated with political nationalism. Yakovleva explains that when the General Agreement on Tariffs and Trade (GATT 1947) was signed in 1947, “protectionism” was already a contested term, with the United States blaming trade distortions for the Great Depression and Second World War, and the United Kingdom instead emphasizing “the boundaries that the international trade regime should not cross in relation to domestic policies affecting trade.” (P. 439.) The compromise was GATT 1947’s “embedded liberalism,” which according to Yakovleva made liberalization not a “goal in itself” but “a component of a broader societal goal of maintaining economic stability.” (P. 441.) Practically, this meant that only intentional protectionism qualified as protectionism under the GATT 1947 regime, and domestic regulations with a de facto impact on trade, but not motivated by protectionist intent, largely went unchallenged.
Starting, however, in the 1970s, “new protectionism” was understood to encompass a variety of non-tariff barriers to trade, including domestic policies aimed at quelling growing unemployment. Yakovleva explains that these were precisely the domestic policies that had been deemed legitimate under “embedded liberalism.” At the same time, developed countries, including the United States, began advancing a counter-narrative of “fair trade,” working towards a goal of using international trade law to harmonize a number of domestic regulatory frameworks and thus eliminate “unfair” advantages held by less-regulated developing countries.
By the time the WTO was established in 1994, neoliberal norms had largely (though not exclusively) prevailed. Yakovleva writes that “[t]he main goal of the international trading system… was no longer ‘embedded liberalism,’ but the continued, gradual liberalization of trade.” (P. 457.) The WTO dispute settlement system was increasingly used to evaluate domestic regulations (say, on health or the environment) that caused de facto discrimination against foreign goods. Instead of looking to the regulatory intent of a country, WTO adjudicators looked at the economic impact of a domestic regulation. They did so, too, through the neoliberal lens of the free-trade system, largely without looking to relevant human rights instruments or principles. Practically, Yakovleva claims, this broadened the scope of the term “protectionism,” and thus put all the analytical pressure on the GATT and GATS exceptions, in which the burden of proof that a regulation was not protectionist fell on the country whose regulations were challenged.
What, then, should we make of the more recent notion of “digital protectionism,” or its subset “data protectionism?” “Discourse matters and the discourse is changing,” Yakovleva writes. (P. 473.) Digital protectionism is now part of the vocabulary of free trade, used by lobbyists, negotiators, and academics. (Even though, as Chris Kuner has pointed out, some of the policies now being called protectionist have been in place since the 1970s.) The European Union and the United States in fact both use the terms “digital trade” and “digital protectionism” in policy documents and negotiations. But as Yakovleva convincingly argues, the understanding of and values behind these terms differ vastly, as do the provisions on cross-border data flows advanced by each party in free trade negotiations. “Data protectionism” is not a stable term, but hotly contested.
Contrasts between the U.S. and EU approaches to data privacy abound. What Yakovleva does here is clearly link the relevant distinctions to current trade discourse. She explains that one way of framing the regulation of personal data is to look at such data as an economic asset, where any legal “protection is a precondition of data-intensive trade.” (P. 510.) The alternative is what Yakovleva calls the “moral value approach,” in which data protection law is directed at protecting fundamental human rights. (P. 510.) The EU has in fact historically embraced both frameworks, with an explicit goal of its EU-wide data protection instruments being to free up digital trade between Member States. However, Yakovleva notes that in the EU, the moral value approach will “always prevail” when the two conceptions are in conflict, because of the role the CJEU plays in interpreting EU law in light of the rights to privacy and data protection established in the EU Charter of Fundamental Rights. (P. 506.) The United States, by contrast, emphasizes only the former in trade negotiations, ignoring the possibility that privacy law might not just be economically efficient but can also implicate human rights and flourishing.
This disagreement in discourse has consequences for trade policy. Yakovleva identifies important differences in the current policy approaches to “data protectionism” taken by the U.S. and the EU in trade negotiations—differences every privacy law scholar or policy wonk should learn, if they haven’t already. (For more, see Mira Burri’s recent work.)
U.S. proposals in recent bilateral free trade agreements and at the WTO create a default that cross-border restrictions on the flow of personal data will not be allowed unless they are deemed objectively necessary—a test that Yakovleva points out in the GATS context is often failed. By contrast, the EU enumerates specific instances of inappropriate cross-border restrictions—conveniently, none of them restrictions that the EU itself places on data flows. In its proposed exception language, the EU takes an approach more similar to the national security exception in WTO agreements, deferring to a country’s own subjective assessment of what is necessary. (P. 496.) U.S. proposals characterize data privacy laws as being an aspect of economic regulation, needed in order to encourage consumers to disclose more data. EU proposals, by contrast, explicitly refer to human rights.
If there is anything surprising about this, it is that there is some agreement that at least some privacy protection is necessary for trade, rather than inherently protectionist. The key question, as Yakovleva notes, is not whether there should be domestic data privacy law, but what level of protection is legitimate. (P. 515.) She concludes by calling for “a new multidisciplinary discourse… in order to allow each trading party to strike the right balance between globalization… democratic politics, and domestic autonomy to pursue domestic values such as fundamental rights to privacy and data protection.” (P. 513.)
This is an extraordinarily ambitious—and long—article. I remain impressed by its intellectual heft, and the ease with which Yakovleva moves up into discourse theory and then back into the weeds of free trade agreement provisions. Potential readers should also know that although the article clocks in at 104 pages, much of the length comes from footnotes, evidencing Yakovleva’s impressively thorough research. I do wish there had been more engagement with related, parallel conversations about the role of trade in international intellectual property law, and the relationship there between human rights and the trade regime—but for that to have been included, this would have had to become a book.
Yakovleva’s masterful article will strike familiar notes for technology law scholars. It resembles recurring conversations about the internet and jurisdiction, differing free speech norms around the world, and the globalization of intellectual property law, including digital copyright law. How does one address gaps between different domestic regulatory goals and regimes, given that the internet (and its users’ data) can be everywhere instantaneously? While the notion of addressing transatlantic divides in privacy laws through international trade law is not new (the late, wonderful Joel Reidenberg called for an international privacy treaty housed at the WTO back in 1999), Yakovleva brings clear policy expertise and critical insights to the current conversation. These insights will inform not just privacy law scholars, but those tracking international negotiating strategies and framing games in multiple areas of technology law.
Which Western institutions aid and abet Chinese censorship? Major Internet companies probably come immediately to mind. In Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, Glenn Tiffert highlights an unexpected set of additional accomplices: scholarly archival platforms.
Tiffert shows that digitization makes it possible for censorship to disappear into the apparently limitless, but silently curated, torrents of information now available—adding a valuable example to Zeynep Tufekci’s catalog of ways that information is distorted online. He explains how “the crude artisanal and industrial forms of publication and censorship familiar to us from centuries past” may shortly give way to “an individuated, dynamic model of information control powered by adaptive algorithms that operate in ways even their creators struggle to understand.”
In 2017, Cambridge University Press “quietly removed 315 articles and book reviews from the online edition of the respected British academic journal The China Quarterly, without consulting the journal’s editors or the affected authors,” making them inaccessible to subscribers in China. While the press ultimately reversed itself, “Springer Nature, which bills itself as the largest academic publisher in the world, capitulated to Chinese requests, effectively arguing that its censorship of over 1,000 of its own publications was a cost of doing business.”
It is possible to alter the archive in even less visible and more global ways. Punishing resource constraints and a turn to digitization have led many libraries to deemphasize physical collections. Unlike the difficult maneuvers required to rewrite history in Orwell’s 1984, the centralization of digital collections makes it relatively simple to tweak censorship so that it reflects whatever past is most useful to the present. Tiffert analyzes how Chinese censors removed most of one side in a debate in “the two dominant academic law journals published in the PRC during the 1950s,” whose print editions “document the construction of China’s post-1949 socialist legal system and the often savage debates that seized it.” These law journals are particularly useful targets for censorship because there are few complete print runs outside the PRC, and the print volumes are fragile and often stored off-site, so digital versions are the only way most people can encounter them. (It is striking that the PRC devoted resources to this obscure corner of legal history, rather than simply trying to shape contemporary accounts of that history.)
These selective edits to online editions “materially distort the historical record but are invisible to the end user,” potentially deceiving good-faith researchers. Tiffert explains that the original issues from 1956 through 1958 “chronicle how budding debates over matters such as judicial independence, the transcendence of law over politics and class, the presumption of innocence, and the heritability of law abruptly gave way to vituperative denunciations of those ideas and their sympathizers.” The online databases, however, have removed 63 articles, constituting more than 8% of the articles and 11% of the total page count during this critical three-year period.
The missing articles are often lead articles—that is, articles the editors presumably thought were especially important. The deletions are often invisible. The online tables of contents show no omissions, and while one of the two authorized platforms on which the censored versions appear would allow counting of page numbers to reveal omitted sequences, the other simply omits page numbers. Tiffert argues that the suppressed authors “promoted values associated with the rule of law and greater separation between party and state,” making it embarrassing for the PRC to preserve “the record of their arguments and the persecutions they endured,” given the unitary version of Chinese history the government prefers.
Tiffert focuses on two publications, but points out that People’s Judicature (the official publication of the courts) and a leading social science journal are missing entire issues. And censorship of more current topics is even more pervasive, including the disappearance of President Xi Jinping’s 2001 doctoral dissertation from databases. A user who searches the online archives of the official party newspaper for sensitive terms that appeared in print can lose access, or get different results “depending on whether the vendors supplying access to the archive host their servers in China or outside of it.” As Tiffert shows by developing his own algorithm, which does a pretty good job of targeting the disfavored articles (he reports a 95% success rate), much of this censorship can be automated.
Copyright law shows up as an additional problem. The U.S. restoration of copyright in foreign works prolongs copyright for 95 years from publication, allowing the Chinese government to assert exclusive U.S. rights in the journals for decades to come (either by claiming copyright ownership directly or pressuring whatever Chinese entity claims copyright to enforce its rights—it is not clear who the owners are under Chinese law, though obviously the current commercial database providers are confident that they have permission from the owners). Though Tiffert notes the §108 limitation for libraries allowing them to make limited copies in the last 20 years of the extended term, he unfortunately does not discuss the strong case for fair use for any article censored by the Chinese government. Today’s fair use jurisprudence provides (1) clear protection for creating a database of all articles, including censored ones, and providing relevant snippets in response to user search, and (2) strong reason to think that providing full access to censored articles would be fair. But it is not surprising that fear, uncertainty and doubt surrounding copyright would deter scholarly archives that might otherwise be willing to preserve and protect this history, especially if they are associated with colleges or universities hoping for a lucrative flow of students from China.
Fair use could be an important addition to Tiffert’s recommendations, including “[d]emanding that providers make unredacted collections available on alternate servers beyond the reach of interested censors.” He also suggests “industry-wide best practices to uphold the integrity of our digital collections,” which would include “transparently disclos[ing] omissions and modiﬁcations.” But his larger appeal is ethical: principles that would prevent institutions in democratic societies from accepting this kind of censorship of the past.
Cite as: Rebecca Tushnet, Invisible Holes in History (October 1, 2020) (reviewing Glenn D. Tiffert, Peering down the Memory Hole: Censorship, Digitization, and the Fragility of Our Knowledge Base, 124 Am. Hist. Rev. 550 (2019), available in draft at The Washington Post), https://cyber.jotwell.com/invisible-holes-in-history/
Laurence Diver, Digisprudence: the design of legitimate code, 13 Law, Innovation & Technology __ (forthcoming, 2020), available at LawArXiv.
We often say that code is law, but what kind of law is it? Laurence Diver’s new article, Digisprudence: the design of legitimate code, introduces his ‘digisprudence’ theory, associating himself with the welcome emphasis upon design that is seen in particular in current work on privacy (e.g. Woodrow Hartzog’s Privacy’s Blueprint) and in Ian Kerr’s attention to the power of defaults, and doing so in light of a rich body of scholarship, from well beyond technology law, on law and legitimacy.
Code is not law, Diver says, with tongue slightly in cheek. It is more than law, constituting and regulating at the same time, rather than needing interpretation by addressees as law does. Yet it is also less than law, in the absence of, for instance, the possibility of disobedience. Drawing from ideas in the jurisprudential canon, including the morality of law and the more recent ‘legisprudence’ ideas of Luc Wintgens (on core principles for limiting subjective notions of freedom), Diver asks us to think of how ‘constitutional’ ideas such as legitimacy ought to be embedded in the software ‘legislature’, i.e. the contexts and environments for, and methodologies of, the production of software. He is rightly adamant that we must focus on production, arguing that code must be legitimate from the outset rather than often futilely retrofitted once it is in the wild.
This article summarises the findings of Diver’s doctoral research at the University of Edinburgh, and points to themes of his current work at COHUBICOL (Counting as a Human Being in the Era of Computational Law). (Indeed, digisprudence as a theory is clearly influenced by Edinburgh legal theorists past and present, including Neil MacCormick, Zenon Bankowski, and Diver’s doctoral supervisor Burkhard Schafer). From this work, Diver identifies the centrality of explanation and legitimacy to the acceptability of legal orders, drawing a firm distinction between law and legalism. He finds that code-as-law suffers from the worst excesses of legalism—narrow governance rather than principles, an inability to view and contest decisionmaking—and is, by its nature, resistant to the countervailing forces, such as requirements for certainty, or constraints upon sovereign power, that make law acceptable. (For a related argument, emphasizing the resulting need for new countermovements, see the Jotwell commentary on Julie Cohen’s book Between Truth and Power by Mireille Hildebrandt, who leads the COHUBICOL project.)
This article is full of thoughtful insights, which support the development of the theory of digisprudence and are also capable of application on their own terms. I highlight two of them here. First, Diver discusses the affordances of software, a science and technology studies concept, increasingly invoked in writing on law and technology, that focuses on how design shapes use and behaviour, alongside the less familiar concept of disaffordances: the restrictions a design imposes upon users. Brilliantly, Diver takes note of Lessig’s idea of ‘architectures of control’ but then draws our attention to the choices designers make to embed such disaffordances in objects and systems, engaging with work including that of Dan Lockton (founder of the Imaginaries Lab) and Peter-Paul Verbeek (co-director of the Design Lab in Twente). Second, Diver makes the powerful point that we should not be led by whether code authors position themselves as regulators, or as having the authority to regulate; instead, we should look at what the code does and how it affects users. This is particularly important in a world where much of the production happens in the private sector and without some of the more obvious public law mechanisms of accountability and oversight.
In what is largely a conceptual article, Diver nonetheless applies emerging arguments to current circumstances. He chooses blockchain applications for this purpose, though his approach is less about how blockchain disrupts “insert legal area of choice” and more about how the desire for smart contracts and the like challenges how we think about rules. Tellingly, Diver mentions DRM at the outset of the section on blockchain; as with critiques of DRM, Diver asks the reader to reflect on the implications for governance and legitimacy of a widespread shift from more familiar legal approaches towards an apparently promising technological solution.
Digisprudence itself is explained in a table, where the characteristics of computational legalism are matched to Fullerian (morality of law) and legisprudential principles, resulting in a short and clear set of design-focused affordances, of which contestability is the core, because it allows both individuals and institutions to be empowered. If these concepts are considered at the right stage in the process (i.e. at the time of design), a form of legitimacy, recognisable as constitutional in nature, is possible. Quite properly, Diver points to areas that are ripe for digisprudential analysis, including machine learning and robotics.
As in many parts of the world a new and quite unusual academic year approaches, there are also some great opportunities to use Diver’s digisprudence theory in teaching law and technology, even for revisiting earlier stages of technological development, such as the rise in influence of commercial social media platforms, or the debates, which now cross the decades, on regulating search. Though studying the way in which code regulates behaviour has rightly become an established feature of technology law, Diver’s contribution calls on us to look to the design process (and research on design) and to the limits of legalism, if we really want to understand and promote the legitimacy of such regulation.
No, law does not necessarily lag behind technological development. No, smart technologies are not destined to lead the road to either freedom or surveillance. Determinisms of any kind are not what make Julie E. Cohen’s Between Truth and Power: The Legal Constructions of Informational Capitalism a great sensitizer to the mutual transformations that law, economy, power and technology effect.
Instead, the underlying thesis of the book is that to come to terms with the systemic harms of informational capitalism, we need to develop a keen eye for the precise way that legal rights, duties, immunities and powers are deployed and reconfigured to enable the move from a market to a platform economy —while also detecting the emergence of novel entitlements and disentitlements outside Hohfeld’s framework. Steering clear of both technological and economic determinism, Cohen argues that the instrumentalization of legal institutions by powerful economic actors requires new types of Polanyian countermovements, to address and redress outrageous accumulation of economic power.
In my own terms, Cohen asserts that Montesquieu’s countervailing powers require reinvention in the face of the radical reconfiguration of the political economic landscape wrought by the shift from neo-liberal economic markets to monopolistic multi-sided vertically integrated platform economies. This will require what political economist Karl Polanyi called ‘countermovements’ in his seminal 1944 work, The Great Transformation. Economic markets do not grow like grass (they are not ‘natural’) but are the result of legal entitlements and legal constraints. This implies that markets can be ‘made’ in different ways, thus creating different economic incentives and different outcomes (as to equality and freedom). It also implies that the hold of market fundamentalism on other contexts (politics, health, education) is not ‘given’ and can be pushed back. (See a similar but more condensed discussion in Jedediah Britton-Purdy et al., Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784 (2020).)
As the subtitle indicates, this work explains how law contributes to the construction of informational capitalism. The latter refers to a regime where ‘market actors use knowledge, culture, and networked information technologies as means of extracting and appropriating surplus value, including consumer surplus’ (P. 6). It is refreshing though disturbing to be guided through the motions by which some of law’s pathways have been instrumentalised to safeguard privileged private interests where public goods are at stake and both fairness and freedom are trampled upon. Such instrumentalization needs to be detailed, called out, and countered.
Cohen weaves a textured narrative with detailed attention to the developments that shaped and reshaped our legal institutions, which in turn shaped and reshaped the pathways of our political economy. Often, she describes opposing accounts of what is at stake, followed by new insights that can only be mined when looking awry—away from conventional oppositions that distract attention from underlying reconstructions. Let me give one example. Discussions of IP law often contrast incentives for individual creation with control over such creation, or reward of original invention with reward of capital investment and corporate risk taking. Cohen uncovers how such discourse remains within the confines of Chicago School economics, with its emphasis on atomistic methodological individualism, consent as a commodity (termed ‘consumer preference’), and a blind eye to power relationships. Instead of staying within the limits of this discourse, she tracks the legislative as well as judicial transformations that enabled the growth of patent portfolios meant to bolster bargaining positions rather than to reward either individual creativity or innovative risk taking. In doing so, Cohen avoids the usual ideological trenches, keeping her eye on the ball: the erosion of traditional countervailing powers, which allows big players to work around, co-opt or redefine the legal institutions that stand in the way of monopolistic control over newly emerging informational sources.
Instead of arguing for a return to liberal markets that supposedly ensured an ideal setting for liberal democracies, Cohen digs deeper into what Polanyi called the ‘double movement’ of 19th and 20th century capitalism. She traces the rise of liberal markets as part of the industrial revolution that was built on the commodification of land, labour and money (the first movement), explaining how the perverse implications of unbridled capital accumulation gave rise to ‘countermovements’ that resulted in market reforms and a strong state to protect against monopolistic power and inequity, thus instigating what in Europe we call social democracies (the second movement). Cohen then demonstrates how the influence of the Chicago School gave rise to a neo-liberal governmentality that makes the idea of an unfettered free market the default setting for pursuing both public and private interests, entangled with an ideology of managerialism. Co-opting the rise of new socio-technical infrastructures that afford rent seeking from the accumulation of (access to) knowledge and information, industrial capitalism has transmuted into informational capitalism, culminating in the platform economy. This, Cohen convincingly argues, requires a new agenda for institutional innovation (new countermovements) that cannot be taken for granted or derived from previous reforms.
Cohen ends her book by observing that a ‘new window of opportunity now stands open’, calling for the active engagement of lawyers willing to resist and reform the unprecedented economic power generated by newly shaped neoliberal playing fields. I would agree with Benkler in his 2018 Law and Political Economy blog posts on the ‘Political Economy of Technology’, in which he insists that we should not make the mistake of buying into the mainstream narrative that naturalises both economic markets and technological change, nor reduce the solution space to institutional rearrangement. Instead we should actively collaborate to design and redesign the technological infrastructures that afford informational capitalism.
I believe that Cohen’s analysis of networked socio-technical infrastructures in her Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, Yale University Press (2012), together with the institutional investigations of Between Truth and Power, offer a way to both distinguish and combine institutional and technical redesign as part of the countermovement she calls for. An example would be the legal obligation imposed by the EU General Data Protection Regulation to implement data protection by design. This obligation requires those who deploy data-driven solutions to build protection into their computing systems at the level of their architecture, thus redressing potential power imbalances based on unlimited extraction of personal data at the technical level. Simultaneously, by making this a legal obligation instead of an ethical duty, such redress is institutionalised and becomes enforceable instead of depending upon the ethical inclinations of individual persons or companies.
For a lawyer dedicated to law and the rule of law, Cohen’s account of powerful actors successfully ‘playing’ legal institutions to serve private interests is painful reading. It reminds me that countervailing powers cannot be taken for granted and must be sustained and reinvented; they require new countermovements. This will take more than lawyers, because checks and balances will have to be built into the data- and code-driven architectures that form the backbone of our institutional environment. And those built-in affordances will determine the kind of informational capitalism we must live with.
The COVID crisis has starkly revealed the thin line between middle-class status and destitution in the United States. As a Greater Depression looms, vital assistance from the federal government may soon expire. At that point, the unemployed may need to seek loans for necessities, ranging from rent to food to health care. Advocates for a “public option” in finance have pressed ideas like postal banking or “quantitative easing for the people,” to enable direct government provision of lending for those the market is not serving. They have met a wall of opposition, particularly from libertarian advocates of cyber finance. The tech solutionist alternative is simple: instead of direct government lending, let new financial technology (fintech) companies accumulate more data, and then they can precisely calibrate optimal loan amounts and interest rates. Algorithmic lending, cryptocurrency, and smart contracts all have a place in this vision.
Christopher Odinet’s important article Consumer Bitcredit and Fintech Lending challenges this conventional wisdom, demonstrating that some fintech business models rely on deeply predatory and unfair treatment of borrowers. Through both qualitative and quantitative analysis of over 500 complaints from a Consumer Financial Protection Bureau (CFPB) dataset, Odinet paints a grim picture of fintech malfeasance. Cyberlenders may be a route to financial inclusion for many—but they also pose risks that are poorly understood, and nearly impossible to protect against.
Odinet painstakingly documents and classifies actual consumer complaints, adding an invaluable empirical foundation to widespread worries about the potential for predatory financial inclusion by new entrants in the consumer lending space. I wish I had Odinet’s article when I testified before the Senate Banking Committee on fintech in 2017. Key senators and Trump Administration officials clearly wanted to accelerate deregulation; Odinet shows the importance of an enduring role for both federal and state regulators in this space.
Here are just a few of the narratives Odinet unearths in consumer complaints:
From a borrower trying to auto-pay a loan: “They are outrageous with regard to how many problems they create to prevent you from paying your monthly installment. Clearly, they are trying to get consumers to default, so they can jab you with excessive late (and other) fees.”
From a borrower who paid off her loan in full, only to continue being debited: They “debited my account for bill and grocery money that i [sic] needed to take care of my family.”
From a borrower surprised by a large “origination fee”: “The loan documentation was not available until the loan was funded and there is nothing in the documentation that indicates the origination fee that would be charged.”
From a borrower behind on payments: “This company calls every hour on the hour.”
From a borrower stuck with a high interest rate: “I was told, that after 1 yr. I was going to be able to lower my interested [sic] rate on [my] debt consolidation loan. But, it turns out, that I have to reapply & pay another lending club processing fee. The rate is ridiculously high compare [sic] to current rates. I only took this loan in desperation.”
Other entities appear to be harvesting sensitive financial information from loan applicants, then disappearing without actually funding loans.
Odinet complements these narratives with pie charts classifying complaints. He finds that “the largest number of complaints (over half) relate to how the loan was managed. The next highest category deals with taking out a loan.” His empirical analysis deftly visualizes government data in an accessible manner. It also has immense policy relevance. Emboldened by fintech utopianism, many regulators have loosened the reins for new firms. But this is a misguided approach, since the use of AI in fintech has just as many problems as traditional underwriting—if not more.
Odinet’s work also helped me suss out a paradox in fintech valuation. Investors have justified pouring money into this sector based on the prospect of ever-improving AI finding more profit opportunities than older statistical methods could. However, I’ve also been to presentations by experts on finance algorithms convincingly demonstrating that past repayment history is powerfully predictive of future conduct, and that additional “fringe” or “nontraditional” data adds little to the predictive calculus. So how are fintechs supposed to make above-market returns if their “secret sauce” in reality adds so little to their predictive capacities? As expertly interpreted by Odinet, the CFPB complaints database suggests a ready route to profitability: hiding good old-fashioned cheating, sharp business practices, and dark patterns behind a shiny veneer of futuristic AI. Here, Odinet follows in the footsteps of many scholars who have exposed deep problems in an allegedly new digital economy (including platform capitalism and initial coin offerings). All too often, a narrative of technological advance masks old, disfavored, and illegal practices.
Of course, there will always be rival narratives about the value and dangers of algorithmic lending and fintech platforms. They do extend credit to some individuals who would find no conventional alternatives. Odinet offers important data here that will be of use to both advocates and critics of fintech. He complements his expert and compelling empirical findings with accessible explanations of why they matter. He grounds recommendations for regulatory responses on the empirical findings in this article, focusing on the need for relevant agencies to better understand fintechs’ business models, to detect and deter discrimination, and to ensure more effective disclosures. This is important work that will help governments around the world develop data-informed approaches to the regulation of fintech.