Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN.
Privacy law scholars have long contended with the retort, “what’s the harm?” In their seminal 1890 article The Right to Privacy, Samuel Warren and Louis Brandeis wrote: “That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection.” Other legal scholars have noted that the digital age brings added challenges to the work of defining which privacy harms should be cognizable under the law and should entitle the complainant to legal redress. In Privacy Harms, an article that is sure to become part of the canon of privacy law scholarship, Danielle Citron and Daniel Solove provide a much-needed and definitive update to the privacy harms debate. It is especially notable that the authors engage the full gamut of the debate, parsing both who has standing to bring a privacy suit and what damages should apply. This important update to the privacy law literature builds upon the two authors’ influential prior solo and joint work, such as Solove’s Taxonomy of Privacy, Citron’s Sexual Privacy, and their joint article Risk and Anxiety.
The article furnishes three major contributions to law and tech scholarship. First, it highlights the challenges deriving from the incoherent and piecemeal patchwork of privacy laws in the U.S., exacerbated by what other scholars have noted are the exceedingly high showings of harm demanded in privacy litigation relative to other types of litigation. Second, the authors construct a road map for understanding the different genres of privacy harms with a detailed typology. Third, Citron and Solove helpfully provide an in-depth discussion of when and how privacy regulations should be enforced. That exercise is predicated on their view that there is currently a misalignment between the goals of privacy law and the available legal remedies.
As Citron and Solove note, the heightened prerequisite of a showing of privacy harm serves as an unreasonable gatekeeper to legal remedies for privacy violations. Because such harm is difficult to define and proof of harm is elusive in some cases, this gatekeeping sends a dangerous signal to organizations, telling them that they need not heed their legal obligations for privacy so long as harm remains difficult to prove.
Citron and Solove then provide a comprehensive typology of privacy harms. This exhaustive typology, which the authors meticulously illustrate with factual vignettes drawn from caselaw, is an especially useful resource for legal scholars, practitioners, and judges attempting to make sense of the morass that is privacy law in the United States. Citron and Solove’s typology encompasses 14 types of privacy harms: 1) physical harms, 2) economic harms, 3) reputational harms, 4) emotional harms, 5) relationship harms, 6) chilling effect harms, 7) discrimination harms, 8) thwarted expectation harms, 9) control harms, 10) data quality harms, 11) informed choice harms, 12) vulnerability harms, 13) disturbance harms, and 14) autonomy harms. While some might quibble over whether certain of the harms delineated are truly distinct from one another, the typology is an accessible and deft heuristic for contextualizing privacy harms both in terms of their origin and their societal effects. Two features of this taxonomy are striking. First, in a departure from the authors’ previous solo and collective work, this taxonomy does not focus on the type of information breached and does not attempt to establish distinct privacy rights (see, for example, Citron’s Sexual Privacy, arguing for a novel privacy right regarding certain sexually abusive behaviors); rather, it is concerned with the harmful effects of the privacy violation. Second, the taxonomy goes beyond individual-level harms to introduce privacy harms that could also be seen as collective, such as chilling effect harms and vulnerability harms.
The Article’s final contribution is a discerning examination of when and how privacy harms should be recognized and regulated. This last discussion is important because, as the authors reveal, a focus on legally recognizing only those privacy harms that are easily provable, immediate, or handily quantifiable in monetary terms is detrimental to societal goals. The same can be said when the court’s focus is on a showing of what individual harm has resulted from a privacy violation.
As Citron and Solove remind us, and others have written, privacy harms are not merely individual harms; they are also societal wounds. Privacy as a human right allows for personhood, autonomy, and the free exercise of democracy. Thus, the authors underscore that an undue emphasis on compensation as a remedial goal for privacy violations neglects other important societal considerations.
They observe that privacy regulations do not just compensate for harm, but serve the useful purpose of deterrence. A requirement of measurable economic or physical harm is only truly necessary to decide on compensation. If we have the clear aim of preserving privacy, merely for the benefit of what privacy affords us, rather than the objective of compensating for the injury of privacy violations, a decisive query for cutting through the bog is: what amount of damages would be optimal for deterrence?
With this keen analysis, Citron and Solove provide a way forward for determining when and how to adjudicate privacy litigation. As they conclude, for tort cases launched to demand compensation, a showing of harm may be requisite, but for other types of cases, when monetary damages are not sought, a showing of measurable economic or physical harm may be unnecessary.
In conclusion, Citron and Solove have written a truly useful article that provides a vital guardrail for navigating the quagmire of privacy litigation. Yet their article is much more than a practitioner’s guide or judiciary touchstone. In plumbing the profundity of privacy harms, Citron and Solove have also started a cardinal socio-legal discourse on the human need for privacy and the societal ends that privacy ensures. This is a conversation that has become even more urgent in the digital era.
Human beings leave trails of genetic data wherever we go. We unavoidably leave genetic traces on the doorknobs we touch, the items we handle, the bottles and cups we drink from, and the detritus we throw away. We also leave a trail of genetic data with the physicians we visit, who may order genetic analysis to help treat a cancer or to assist a couple in assessing their pre-conception genetic risks. Our genetic data, often but not always shorn of obvious identifiers, may be repurposed for research use. If we seek to learn about our ancestry, we may send a DNA sample to a consumer genetics service, like 23andMe, or share the resulting data on a cross-service platform like GEDmatch. If we are arrested or convicted of a crime, we may be compelled to give a DNA sample for perpetual inclusion in an official law-enforcement database. Law enforcement might use each of these trails of genetic data to learn about or identify us—or our genetic relatives.
Should law enforcement be permitted to make use of each and every one of these forms of genetic data, consistent with the Fourth Amendment of the U.S. Constitution? That is the question that motivates James W. Hazel and Chris Slobogin’s recent article, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment. Hazel and Slobogin take an empirical approach to the Fourth Amendment inquiry, reporting results of a survey of more than 1500 respondents and probing which types of data access respondents deemed “intrusive” or treading upon an “expectation of privacy.” Their findings indicate that the public often perceives police access to genetic data sources as highly intrusive, even where traditional Fourth Amendment doctrine might not. As Hazel and Slobogin put it, “our subjects appeared to focus on the location of the information, not its provenance or content.” That is, intrusiveness turns more on who holds the data, rather than on how it was first collected or analyzed. Hazel and Slobogin conclude that their findings “support an argument in favor of judicial authorization both when police access nongovernmental genetic databases and when police collect DNA from individuals who have not yet been arrested.”
Hazel and Slobogin’s analysis is firmly rooted in existing doctrine. As they observe, much genetic data collection, analysis, and use has traditionally been beyond the scope of the Fourth Amendment. The Fourth Amendment extends its protections only to “searches” and “seizures,” and existing doctrine defines government intrusion as a search, in large measure, based on whether government action intrudes upon an “expectation of privacy” that society is prepared to recognize as “reasonable.” Under the so-called “third-party doctrine,” “if you share information, you do not have an expectation of privacy in it.” But in its recent Fourth Amendment decision in United States v. Carpenter, the Supreme Court suggested that the third-party doctrine is not categorical. As Hazel and Slobogin aptly summarize, “In the wake of Carpenter, considerable uncertainty exists about the applicability of the third-party doctrine to genetic information.” Indeed, Justice Gorsuch, dissenting in Carpenter, “used DNA access as an example” of information in which individuals typically expect privacy, despite having entrusted that information to third parties.
Hazel and Slobogin provide an empirical response to this uncertainty. They survey public attitudes regarding the privacy of certain sources of genetic data, and the intrusiveness of investigative access to that data. In assessing these attitudes, the authors also queried respondents about a range of non-genetic scenarios, including some both clearly within and beyond existing Fourth Amendment regulation, in order to better gauge relative findings of intrusiveness and privacy. The authors appropriately acknowledge that the platform they utilized to complete the survey—Amazon Mechanical Turk—and the population they recruited to participate may be imperfectly representative of the general public. They discuss countermeasures they took to minimize biases in their results, including excluding responses received in under five minutes (which “are indicative that the individual did not answer thoughtfully”).
The results indicate that law-enforcement access to many sources of genetic data ranked as highly intrusive and infringing upon an expectation of privacy. Among other findings, “police access to public genealogy, direct-to-consumer and research databases, as well as the creation of a universal DNA database, were … ranked among the most intrusive activities.” These government activities ranked similarly to searches of bedrooms and emails, and as both more intrusive and more infringing on a reasonable expectation of privacy than “cell location”—the data at issue in the Carpenter case itself. Yet many already-common police collections of genetic data, including surreptitious collection of “discarded” DNA, compelled DNA collection from arrested or convicted persons, and even familial searches in official law enforcement DNA databases ranked as among the least intrusive or privacy-offending activities.
Hazel and Slobogin suggest that Fourth Amendment doctrine should be attentive to societal views about privacy, such as the data uncovered in their survey, and that this should prompt closer scrutiny of the “situs of genetic information” in assessing expectations of privacy. The role of survey data in Fourth Amendment analysis is contested, but one need not subscribe to Hazel and Slobogin’s view of the importance of this data to Fourth Amendment analysis to appreciate their insights.
For one thing, Hazel and Slobogin’s data provide an antidote to claims of broad public support for law enforcement use of consumer genetics platforms to investigate crimes. According to Hazel and Slobogin, government access to consumer genetics data consistently ranked as highly intrusive and privacy-invasive. These findings also lend weight to Justice Gorsuch’s intuition in Carpenter that government access to genetic data from these sources ought to require a warrant or probable cause.
Beyond the Fourth Amendment, Hazel and Slobogin’s findings suggest that Congress or the Department of Health and Human Services ought to act to better protect medical data, especially genetic data in medical records. Survey respondents “ranked law enforcement access to genetic data from an individual’s doctor as the most intrusive of all scenarios, just above police access to other information in medical records.” Under existing law, these records are typically protected from nonconsensual disclosure under the HIPAA Privacy Rule, and physicians and their patients share a fiduciary relationship that is often privacy protective. But the HIPAA Privacy Rule codifies a gaping exception to nonconsensual disclosure for law enforcement purposes. As Hazel and Slobogin recognize, the Privacy Rule permits genetic information to be disclosed to law enforcement upon as little as an “administrative request.” That minimal standard runs contrary to the strongly held attitudes of privacy and intrusiveness that Hazel and Slobogin’s study reveals. These findings should provide impetus to act to better protect medical records from government access.
We ought not, however, overinterpret the authors’ results. Their findings indicate limited concern about the most well-known forms of genetic surveillance, through compelled DNA collection from individuals arrested or convicted of crimes or from surreptitiously collected items containing trace DNA that individuals cannot help but leave behind. Perhaps these results reflect a genuine lack of concern with these practices—or perhaps they merely reflect that individuals expect what they know the government is already doing. A one-way ratchet of public acceptance ought to give us pause about findings of non-intrusiveness for well-known police practices.
In sum, Hazel and Slobogin’s article yields important new data suggesting that government access to many sources of genetic data is indeed highly intrusive. That data may inform Fourth Amendment analysis. It also may inform discussions about the fitness of existing statutory and regulatory protections for genetic data, the need for new protections, and the credibility of existing claims of public support for certain uses of such data.
Cite as: Natalie Ram, Gauging Genetic Privacy (June 10, 2021) (reviewing James W. Hazel & Christopher Slobogin, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705 (2021)), https://cyber.jotwell.com/gauging-genetic-privacy/.
Sarah R. Wasserman Rajec and Andrew Gilden, Patenting Pleasure (Feb. 25, 2021), available at SSRN.
In Patenting Pleasure, Professors Sarah Rajec and Andrew Gilden highlight a surprising incongruity: while many areas of U.S. law are profoundly hostile to sexuality in general and the technology of sex in particular, the patent system is not. Instead, the U.S. Patent and Trademark Office (USPTO) has over the decades issued thousands of patents on sex toys—from vibrators to AI, and everything in between.
This incongruity is especially odd because patent law has long incorporated a doctrine that specifically tied patentability to the usefulness of the invention, and up until the end of the 20th century one strand of that doctrine held that inventions “injurious to society” failed the utility test. And until about that time—and in some states and localities, even today—the law was exceptionally clear that sex toys were immoral and illegal. Patents issued nonetheless. How did inventors show that their sex toys were useful, despite being barred from relying on their most obvious use? Gilden and Rajec examine hundreds of issued patents to weave an engrossing narrative about sex, patents, and the law.
Two very nice background sections are each worth the price of admission. “The Law of the Sex Toy” canvasses the many ways U.S. law has been historically hostile to sex toys, including U.S. Postal Inspector Anthony Comstock’s 19th century crusade against “articles for self-pollution.” (Comstock, of “Comstock laws” fame, seized over 60,000 “immoral” rubber articles.) Efforts to criminalize sex toys continued in the late 20th century as well; many of these laws are still on the books, including in Texas, Alabama, and Mississippi, and some, including Alabama’s, are still enforced. At the federal level, the 2020 CARES Act included over half a trillion dollars in small business loans as pandemic relief; those making or selling sex toys (as well as other sex-based businesses) were excluded.
What’s all this got to do with patent law? For one thing, patenting illegal sex toys seems a fruitless errand, since it makes little sense (most of the time) to patent things you can’t make or sell. This puzzle goes unaddressed by the authors. At a doctrinal level, patent law’s utility requirement long barred patents on inventions injurious to society like gambling machines; radar detectors; and, it would seem under the laws just mentioned, sex toys. So applicants for pleasure patents would need to assert some utility—while steering clear of beneficial utility’s immorality bar. Gilden and Rajec provide as background a clear and useful overview of the history of so-called “beneficial utility,” including its applicability to sex tech.
One way to thread the needle is to obfuscate. In the early 20th century, many vibrators were advertised for nonsexual purposes with an overt or implicit wink; personal massagers were for nothing but sore muscles. Such stratagems could, did, and do help innovators evade the laws that otherwise vex sex tech. But Rajec and Gilden intentionally step beyond the disguise gambit (though its success raises interesting questions about utility law doctrine in general). Instead, they focus on patented inventions that are obviously, explicitly, and clearly about sex. The USPTO classes inventions according to types of technology, and one classification, A61H19/00, is reserved for “Massage for the genitals; Devices for improving sexual intercourse.” Tough to obfuscate there. So how are inventors getting the hundreds of patents Gilden and Rajec find in this class?
This is the central tension that Patenting Pleasure addresses: because of the utility doctrine, patentees must say what their inventions are for—but because US law has been generally quite hostile to sex and sex tech, pleasure patents have to say they are for something other than, well, pleasure. In the heart of the piece, Rajec and Gilden carefully catalog these descriptions over time, revealing a changing picture of what sorts of purposes were considered acceptable sex tech—at least, in the eyes of the USPTO.
It turns out patents can tell us interesting things about sex norms. Gilden and Rajec identify several narratives about what sex tech was for, including saving marriages and treating women’s frigidity (both thankfully more historic than contemporary rationales), helping individuals who cannot find sexual partners, avoiding sexually transmitted infections, helping persons with disabilities, and facilitating sexual relations by LGBTQIA individuals.
In recent years (perhaps following the effective demise of beneficial utility in 1999 at the hands of the Federal Circuit in the coincidentally-but-aptly-captioned Juicy Whip v. Orange Bang), pleasure patents have finally copped to being actually about pleasure, telling a narrative of sexual empowerment. Many pleasure patents in this last vein are remarkably forthright pieces of sex ed, among their other functions. As Rajec and Gilden note, “Particularly compared with federally-supported abstinence-only education programs, or the Department of Education’s heavily-critiqued guidelines on student sexual conduct, the federal patent registry provides a pretty thorough education on the anatomies and psychologies of sexual pleasure.” There’s much to learn here of the fascinating rise and fall of different utility narratives and how the patent system reflects changing social norms.
There is much, too, to like in Gilden and Rajec’s sketched implications for patent law and for studies of law and sexuality. Pleasure patents provide an underexplored window onto the ways patent law shapes (or fails to shape) inventions to which other areas of law are deeply hostile. And for scholars of law and sexuality, who critique law’s overwhelming sex-negativity, the patent system is a surprising respite of sex positivity—if cloaked in a wide array of acceptability narratives.
The piece also cues up fascinating future work. In particular, patents are typically considered important because they provide incentives for innovation; do they provide incentives for sex tech? Rajec and Gilden mention a couple of times that the patents they study are “valuable property rights,” but how valuable are those rights, and why? Are patents providing ex ante incentives, as in the standard narrative? Do sex tech inventors rely on the exclusivity of a future patent to develop new products? Or is there something else going on? The imprimatur of government approval on an industry otherwise attacked by the law? Safety to commercialize inventions shielded from robust competition? Shiny patent ribbons to show investors? In short, how should we think about pleasure patents as innovation incentive?
Gilden and Rajec have found a trove of material in the USPTO files that sheds light on both the patent system and American sex-tech norms over the last century and a half. Patenting Pleasure is an enlightening, provocative, intriguing, and—yes—pleasurable read.
Content moderation is a high-stakes, high-volume game of tradeoffs. Platforms face difficult choices about how aggressively to enforce their policies. Too light a touch and they provide a home for pornographers, terrorists, harassers, infringers, and insurrectionists. Too heavy a hand and they stifle political discussion and give innocent users the boot. Little wonder that platforms have sometimes been eager to take any help they can get, even from their competitors.
evelyn douek’s The Rise of Content Cartels is a careful and thoughtful exploration of a difficult tradeoff in content-moderation policy: centralized versus distributed moderation. The major platforms have been quietly collaborating on a variety of moderation initiatives to develop consistent policies, coordinated responses, and shared databases of prohibited content. Sometimes they connect through nonprofit facilitators and clearinghouses, but increasingly they work directly with each other. douek’s essay offers an accessible description of the trend and an even-handed evaluation of both its promise and its perils.
Take the problem of online distribution of child sexual abuse materials (CSAM). There is a broad consensus behind the laws criminalizing the distribution of CSAM images, such images have no redeeming societal value, and image-hashing technology is quite good at flagging only uploads that are close matches for ones in a reference database. Under these circumstances, it would be wasteful for each service to maintain its own database of CSAM hashes. Instead, the National Center for Missing and Exploited Children (NCMEC) maintains a shared database, which is widely used by content platforms to check uploads.
douek traces the spread of the NCMEC model, however, to other types of content. The next domino to fall was “terrorist” speech: not always so clearly illegal and not always so obviously low-value. The Global Internet Forum to Counter Terrorism helps the platforms keep beheading videos from being uploaded. There have been similar initiatives around election interference, foreign influence campaigns, and more. I would add that technology companies have long collaborated with each other on security and anti-spam responses (often with law enforcement in the room as well) in ways that effectively amount to a joint decision on what content can and cannot transit their systems.
When there are so few platforms, however, content collaboration can become content cartelization. The benefits of cartelization on content moderation are many. Where there is an existing consensus on which content is acceptable, policy enforcement is more effective because platforms can pool their work. Even where there is not, platforms can learn from each other by sharing best practices. Some coordinated malicious activity is hard to detect when each platform holds only one piece of the puzzle; botnet takedowns now involve industry partners in dozens of countries. And to be effective, bans on truly bad actors need to be enforced everywhere, or they will simply migrate to the most permissive platform.
But douek smartly explains why content cartels are also so unsettling. They make it even harder to assess responsibility for any given moderation decision, both by obscuring who actually made it and by slathering the whole thing in a “false patina of legitimacy.” They amplify the existing “power of the powerful” by removing one of the classic safety valves for private platform speech restrictions: alternative avenues for the speaker’s messages. And, much like economic cartels, they present decisions made in smoky back rooms as though they were the “natural” outcomes of “market” forces.
douek is particularly sharp in explaining how coordinated content moderation stands in stark contrast to the rhetoric of competition these companies normally adopt. Even the name itself, content cartels, points out the way in which this coordinated behavior raises questions of antitrust law and policy. To this list might be added the danger that content-moderation creep will turn into surveillance creep, as platforms decide that, to make decisions about their own users’ posts, they need access to information about those users’ activities across the Internet.
The Rise of Content Cartels resists the temptation to cram platform content moderation into a strictly “private” or strictly “public” box. Like douek’s forthcoming Governing Online Speech: From ‘Posts-As-Trumps’ to Proportionality and Probability, it is thoughtful about the relationship between power and legitimacy, and broad-minded about developing new hybrid models to account for the distinctive character of our new speech and governance institutions.
It is an exciting time for content-moderation scholarship. Articles from just five years ago read as dated and janky compared with the outstanding descriptive and normative work now being published. douek joins scholars like Chinmayi Arun, Hannah Bloch-Wehba, Joan Donovan, Casey Fiesler, Daphne Keller, Kate Klonick, Renee DiResta, Sarah T. Roberts, and Jillian C. York in doing important work in this urgently important field. To borrow a phrase, make sure to like and subscribe.
In her General Principles of the European Convention on Human Rights, Janneke Gerards demonstrates how one of Europe’s two highest Courts offers ‘practical and effective’ protection to a number of human rights. These rights are at stake when governments or other big players use data-driven measures to fight, for example, international terrorism, a global pandemic, or social security fraud. For those who wish to understand how the General Data Protection Regulation (GDPR) is grounded in European constitutional law, this book is an excellent point of departure, because the GDPR explicitly aims to protect the fundamental rights and freedoms of natural persons. Indeed, the GDPR does not mention privacy at all; rather than ‘merely’ protecting data subjects’ right to privacy, it is pertinent to all human rights, including non-discrimination, fair trial, the presumption of innocence, privacy, and freedom of expression.
Those not versed in European law may frown upon calling the European Convention on Human Rights (ECHR, “the Convention”) European constitutional law, as they may conflate ‘Europe’ with the European Union (EU). The EU has 27 Member States, all of which are Contracting Parties to the Convention, and at the constitutional level the EU is grounded in the various Treaties of the EU and in the Charter of Fundamental Rights of the EU (CFREU, “the Charter”). The Convention is part of a larger European jurisdiction, namely that of the Council of Europe (CoE), which has 47 Contracting Parties. The CoE is an international organisation, whereas the EU is a supranational organisation (though not a federal state). To properly understand both the GDPR and the Charter, however, one must first immerse oneself in the ‘logic’ of the Convention, because the Charter stipulates that the meaning and scope of Charter rights that overlap with Convention rights are at least the same as those of the Convention rights. The reader who finds all this complex and cumbersome may want to consider that the overlap often enhances the protection of fundamental rights and freedoms, similar to how the interrelated systems of federal and state jurisdiction in the U.S. may increase access to justice. It is for good reason that Montesquieu observed that the complexity of the law actually protects against arbitrary rule, providing an important countervailing power against the unilateral power of a smooth, efficient, and streamlined administration of ‘justice’ (The Spirit of the Laws, VI, II).
(For those interested in exploring the complexities of the two European jurisdictions to better understand the ‘constitutional pluralism’ that defines European law, I recommend Steven Greer, Janneke Gerards, and Rose Slowe, Human Rights in the Council of Europe and the European Union: Achievements, Trends and Challenges (Cambridge University Press 2018).)
On 8 April 2014, the Court of Justice of the European Union (CJEU) invalidated the 2006 EU Data Retention Directive (DRD) that required Member States (MS) to impose an obligation on telecom providers to retain metadata and to enact legislation to allow access to such data by criminal justice authorities (case Digital Rights Ireland C-293/12). The CJEU’s invalidation of an entire legislative instrument highlights the significance of Janneke Gerards’ work on the Convention. Let me briefly explain: (1) the CJEU invalidated the DRD because it violated the fundamental rights to privacy and data protection of the Charter, (2) this violation was due to the fact that the DRD was deemed disproportional in relation to its legitimate goal of fighting terrorism, (3) the reason being that the DRD enabled infringements of privacy and data protection that were not strictly necessary to achieve this goal and therefore not justified, (4) this criterion of necessity, framed in terms of proportionality, builds on the case law of the European Court of Human Rights (ECtHR, ‘the Court’) that decides potential violations of the Convention.
The invalidation of the DRD obviously demonstrates that those who wish to situate the remit of the General Data Protection Regulation (GDPR) should study the EU’s Charter, because the fundamental right to data protection is one of the Charter rights. It also makes clear that where the right to data protection overlaps with the Convention’s right to privacy, the case law of the (other) Court must be taken into account. Thus, precisely because the fundamental right to data protection is part of European constitutional law, those interested in legal protection against data-driven systems should probe the salience of the legal framework for the constitutional protection of human rights in Europe.
In General Principles, Gerards explains in simple and lucid prose how the Convention operates, while nevertheless respecting the complexity of an institutional system that provides human rights protection in 47 national jurisdictions, including Russia and Turkey. She introduces the Convention as ‘a living instrument’ (see section 3.3), which flies in the face of the cumbersome discussions in the US on ‘plain text’ meaning, ‘Framers’ intention,’ and ‘Originalism’. Its meaning is decided by the Court in Strasbourg on a case-by-case basis. The Court squarely faces the need for interpretation that is inherent in text-based law (chapter 4), while taking into account that deciding the meaning of the text decides the level of protection across all 47 Contracting States. The meaning of the Convention is not immutable but adaptive. That is why it is capable of offering what the Court calls ‘practical and effective protection’ (chapter 1). Unlike what some blockchain aficionados seem to believe, immutability does not necessarily offer better protection, especially not in real life.
Gerards discusses the constitutional nature of the Convention, and the emphasis of the Court on an interpretation of Convention rights as rights that should be both ‘practical and effective’, while taking into account that the role of the Court is subsidiary in relation to the national courts, who are the primary caretakers. This results in the double role of the Court: (1) supervising compliance by the contracting states on a case-by-case basis, including redress in case of a violation and (2) providing an interpretation of Convention rights that clarifies the minimum level of protection in all contracting states.
To mediate these twin objectives the Court has developed an approach that incorporates three steps: (1) the Court decides whether the case falls within the scope of the allegedly violated right, (2) the Court decides whether the right has been infringed and (3) the Court decides whether the infringement was justified. Though infringements can be justified if specific explicit or implied conditions are fulfilled, some rights are absolute in the sense that if the right is infringed it is necessarily violated, meaning that no justification is possible (notably in the case of torture and degrading or inhuman treatment). Gerards explains how the first and the second step interact as the facts of the case are qualified in light of the applicable Convention text while, in turn, the applicability and the meaning of the Convention text are decided in light of the facts of the case at hand. She understands this as a ‘reflective equilibrium’ where facts and norms, the concrete and the abstract are – in my own words – mutually constitutive.
General Principles proceeds to a detailed discussion of the principles that determine the Court’s ‘evolutive interpretation’ (chapter 3), which takes into account, on the one hand, the changing understanding of the meaning of convention rights (the first step mentioned above) and on the other hand, the confrontation with new cases that cannot be reduced to prior cases (highlighting the second step). Note that Gerards’ structured conceptual approach is firmly anchored in the case law of the Court, providing concrete examples of the reasoning of the Court based on succinct and lucid accounts of what is at stake in the relevant case law. This is also how she discusses arduous issues such as positive and negative obligations for states (chapter 5) as well as the difference between vertical and horizontal effect (both direct and indirect) (chapter 6), explaining convoluted legal framings without ignoring their complexity.
Finally, Gerards explains in rich detail the third step indicated above, that of justification, anchored in an in-depth and crystal-clear analysis of the Court’s case law. Justification of a restriction of human rights is only possible if three cumulative conditions are fulfilled: the infringing measures are lawful (chapter 8), have a legitimate aim (chapter 9) and are necessary in a democratic society (chapter 10). Lawfulness is interpreted by the Court as legality, not as legalism; it not only requires a basis in written or unwritten law, but also demands both accessibility and foreseeability, while to qualify as lawful the legal basis must incorporate sufficient safeguards to mitigate the impact on relevant human rights (including procedural due care). As to necessity, the Court checks the proportionality between measures and legitimate aim, performing a fair balancing test, taking into account the scope and severity of the infringements in relation to the importance of the aim at stake.
This is the necessity criterion that also plays a crucial role in infringements of the fundamental right to data protection. The Charter requires necessity in a way similar to the Convention, and even though ‘necessity’ also figures in the GDPR’s own principles and its requirement of a legal basis, infringements must ultimately be tested against the necessity principle of European constitutional law. When the CJEU invalidated the DRD it explicitly invoked the meaning of ‘necessity’ in this sense.
This book is not only relevant as a textbook for students of human rights in Europe. It also offers a detailed account of why and how individual rights and freedoms matter, what difference they can make, and which complex balancing acts must be performed to ensure legal certainty as well as justice. For those seeking protection against algorithmic decision-making and data-driven surveillance General Principles is a key resource. The clarity of explanation highlights the difficult dynamics between public and individual interests, between national and supranational jurisdictions and between the freedom of states to act in the general interest and the freedom from unlawful interference for individual citizens, acknowledging that such individual freedom is also a public good. Whereas human rights can be used to protect the interests of those already in power by ignoring the rights and freedoms of marginalised communities, the Court’s requirement that rights are ‘practical and effective’ rather than formal or efficient gives clear direction to an interpretation strategy that is firmly grounded in a substantive and procedural conception of the rule of law. I guess this comes closest to Jeremy Waldron’s ‘The Rule of Law and the Importance of Procedure’, 50 Nomos 2011, 3-31, underlining the need for institutional checks and balances without which rule of law checklists offer little to no protection when push comes to shove.
Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available at SSRN
Between 2018 and 2020, nine proposals (or discussion drafts) for comprehensive data privacy legislation were introduced in the U.S. Congress. Twenty-eight states introduced 42 comprehensive privacy bills during that time. This is on top of the European Union’s General Data Protection Regulation, which took effect in 2018, and the California Consumer Privacy Act, which took effect in 2020. Clearly, U.S. policymakers are eager to be active on privacy.
Are these privacy laws any good? Put differently, are policymakers drafting, debating, and enacting the kind of privacy laws we need to address the problems of informational capitalism? In Democratic Data: A Relational Theory for Data Governance, Salome Viljoen suggests that the answer is no.
Viljoen’s argument is simple. The information industry’s data collection practices are “primarily aimed at deriving population-level insights from data subjects” that are then applied to individuals who share those characteristics in design nudges, behavioral advertising, and political microtargeting, among other applications. (P. 3.) But privacy laws, both in their traditional form and in these recent proposals, “attempt to reduce legal interests in information to individualist claims subject to individualistic remedies that are structurally incapable of representing this fundamental population-level purpose of data protection.” (P. 3.)
Viljoen could not be more right, both in her diagnosis of current proposals and in their structural mismatch with the privacy, justice, and dignitary interests undermined by data-driven business models that traffic in the commodification of the human experience.
Viljoen first notes that privacy has traditionally been legally conceptualized as an individual right. The Fair Information Practice Principles (FIPPs) and a long series of federal sectoral privacy laws and state statutes grant privacy rights to consumers qua individuals. This new crop of privacy laws is no different. They guarantee rights of access, correction, deletion, and portability, among others. But all of these rights are for the individual consumer. Notice-and-choice, the framework for much of U.S. privacy law, operated the same way: Its consent paradigm centered the right to choose or consent in the individual internet user.
This also tracks the scholarly literature in privacy since 1890. Privacy has long been understood as either a negative—freedom from—or positive—freedom to—right, but almost always a right located in the individual. Modern privacy scholarship has moved away from this model, recognizing both privacy’s social value, its importance in social interaction and image management, and the connection between privacy and social trust. That terrain is well worn; its inclusion here speaks both to Viljoen’s in-depth knowledge of the literature in her field and law review editors’ adherence to a model of overlong “background” sections.
Viljoen’s contributions are not so much her descriptive claim that privacy law has traditionally conceptualized privacy in individualistic terms, but where she goes from there.
Her notion of “data governance’s sociality problem” is compelling. (P. 23.) Viljoen argues that the relationships between individuals and the information industry can be mapped along two axes: vertical and horizontal. (Pp. 25-27.) The vertical axis is the relationship between us and data collectors. When we agree to Instagram’s terms and conditions and upload a photo of our new dog, we are creating a vertical relationship with Instagram and its parent company, Facebook. The terms of that relationship “structure the process whereby data subjects exchange data about themselves for the digital services the data collector provides.”
“Horizontal data relations” are those relations between and among us, data subjects all, who share relevant characteristics. Those who “match” on OKCupid are in a horizontal data relationship with each other. A gay man who “likes” pictures of Corgis is in a horizontal data relationship with those targeted for advertisements based on those latent characteristics. As is a person arrested because a facial recognition tool identified him as a suspect socially connected with the person whose voluntarily uploaded picture of the same tattoo was used to train the facial recognition AI. (P. 26.)
Viljoen’s second important contribution flows from the first. She offers a normative diagnosis for why horizontal relationships matter for data governance law. That is, data extraction’s harms stem not only from concerns over my privacy or our visceral reaction to creepy, ubiquitous surveillance. By merely using technologies that track and extract data from us, we become unwitting accomplices in the process through which industry translates our behavior into designs, technologies, and patterns that shape and manipulate everyone else. Abetting this system is a precondition of participation in the information age.
For Viljoen, then, the information economy’s core evil is that it conscripts us all in a project of mass subordination that is (not so incidentally) making a few people very very rich.
This may be Viljoen’s central contribution, and it has already changed my understanding of privacy. Focusing on the individual elides the population-level harms Viljoen highlights. Data flows classify and categorize. Data helps industry develop models to predict and change behavior. And it is precisely this connection between data and the identification of relationships between groups of people that creates economic value. We are deeply enmeshed in perpetuating a vicious cycle that subordinates data subjects while enriching Big Tech. There is no way an individual rights-based regime that gives one person some measure of control over their data can ever address this problem.
And that is, at least in part, where current proposals for comprehensive privacy laws go awry. Although there are some differences at the margins, most proposals are binary: they guarantee individual rights of control and rely on internal compliance structures to manage data collection and use. The rights model, Viljoen shows, inadequately addresses the privacy harms of informational capitalism. So, for that matter, does the compliance model. But that conversation is for another day.
Rebecca Crootof & BJ Ard, Structuring Techlaw, __ Harv. J.L. & Tech. __ (forthcoming 2020), available at SSRN
A decade ago, I mused about the implications and limits of what was then called “cyberlaw.” By that time, scholars had spent roughly 15 years experiencing the internet and speculating that a new jurisprudential era had dawned in its wake. The dialogue between the speculators and their critics was famously encapsulated in a pair of journal articles. Lawrence Lessig celebrated the transformative potential of what we used to call “cyberspace” for law. Judge Frank Easterbrook insisted on the continuing utility of existing law in solving cyber-problems. The latter’s pejorative characterization of cyberlaw as “law of the horse” has endured as a metonym for the idea that law ought not to be tailored too specifically to social problems prompted by some exotic new device.
It turns out, as I mused, that Lessig and Easterbrook and others in their respective camps were arguing on the wrong ground. Cyberspace and cyberlaw pointed the way to an integrative jurisprudential project, in which novel technologies and their uses motivate a larger rethinking of the roles and purposes of law, rather than a jurisprudence of exception (Lessig) or a jurisprudence of tradition (Easterbrook). But it has taken some time for elements of an integrative project to emerge. Rebecca Crootof and BJ Ard, in Structuring Techlaw, are among those who are now building in that direction and away from scholars’ efforts to justify legal exceptionalism in response to various metaphorical horses – among them algorithmic decision making, data analytics, robotics, autonomous vehicles, 3D printing, recombinant DNA, genome editing, and synthetic biology. Their story is not, however, primarily one of power, ideology, markets, social norms, or technological affordances. Julie Cohen, among others, has taken that approach. Structuring Techlaw is resolutely and therefore usefully positivist. The law and legal methods still matter, as such. The law itself can be adapted, reformed, and perhaps transformed.
In that spirit, Structuring Techlaw offers a framework for organizing legal analysis (Pp. 8-9), rather than a solution, so it is (admirably, in my opinion) primarily descriptive rather than normative. Like Leo Marx’s classic The Machine in the Garden, exploring American literature’s industrial interruption of the pastoral, it clarifies the situation. The article is a field guide to problems in technology and law, rather than a theory or a jurisprudential intervention. As a field guide, few of its details will be new to scholars, lawyers, or even students familiar with technology policy debates of the last 25 years. But the paper collects and organizes those details in a thoughtful, clear way, with priority given to traditional legal forms and to illustrations drawn from a wide variety of technology-animated social problems. Historical problems get attention, including those that long pre-dated the internet, along with contemporary challenges. The resulting framework is for use by scholars, policy makers, and other decision makers confronted with what Crootof and Ard characterize as a critical problem common to all types of new technology: legal uncertainty in the application and design of relevant rules.
Their broad view requires a broad beginning. “Technology” means devices that extend human capabilities. (P. 3 n.1.) Structuring Techlaw offers the neologism “techlaw” to distinguish solutions to larger-scale social problems created by technology in society from technology-enabled solutions to specific problems in the provision of professional services, or so-called legaltech or lawtech. (Id.)
Techlaw exposes legal uncertainties of three types. The framework consists of those three types, in layers, with some nuances, details, and illustrations added for good measure, together with likely strategies for dealing with each one. Each type of uncertainty is described in terms of familiar debates. Some of those concern the welfare effects of precautionary and permissive regulatory approaches. Some concern choices among updating existing law, imagining new law, and reconceptualizing the legal regime in the context of institutional choices. The full framework is laid out in a single graphic. (P. 11.)
Layer one consists of application uncertainties, in which existing legal rules are deemed to be either too narrow (gaps) or too broad (overlaps) as responses to technology-fostered social problems. Regular or traditional tools of legal interpretation may be used effectively here.
Layer two consists of normative uncertainties, in which technology-fostered problems expose larger concerns about the purposes and functions of the laws in question. Existing law may be revealed to be underinclusive or overinclusive relative to its original aims. This is the space for normative realignment of the law.
Layer three consists of institutional uncertainties, in which the roles and responsibilities of different legal actors are called into question based on concerns about legitimacy, authority, and competence. Are technology-fostered problems best solved by updates supplied by legislatures? By administrative agencies? By courts?
This is not so much a functioning method for reaching a judgment in a particular instance in practice as a tool for understanding. Crootof and Ard round out their description with examples at multiple points along the way, but they don’t seek to apply the framework fully either to a real historical case or to an imaginary new one. Instead, the framework is best understood as they describe it (P. 47 n.187): as an idealized template by which observers and participants alike can begin to discern and respond to common patterns in law-making, rather than deal with each technology as a shiny new object or, worse, as a distracting but entertaining squirrel. The framework may produce an integrated jurisprudence of technology and law as it is used over time, over multiple applications.
Will it? If the challenge of resolving uncertainties in legal meaning evokes H.L.A. Hart’s famous “No Vehicles in the Park” illustration of interpretive flexibilities in the law – a positivist polestar – that is no accident. Structuring Techlaw is replete with references to Hart (P. 16 n.35) and Hartian interpretations and extensions. (Pp. 69-70.) But one needs a way to get from what this rule means (per Hart) to how this rule is part of a pattern of multiple rules, some for equivalent instances and some for different ones. Crootof and Ard manage the transition to a pattern of multiple rules via an overview of the critical role of analogical reasoning and framing effects in legal interpretation. (Pp. 52–62.) That move is surely the right one; analogies help us scale from case to case, from case to rule, and from rule to system. But its success depends on any number of empirical claims as to how legal reasoning actually works in practice, such as those summarized by Dan Hunter, that are beyond the scope of this work.
Moreover, as Crootof and Ard acknowledge, fully specifying the framework and building the resulting field of law requires exploring a standard set of questions regarding comparative institutional advantage. They don’t do that in Structuring Techlaw. Tantalizingly, they promise that exploration in an additional paper. (P. 9 n.19.)
Even more tantalizing are glimpses of jurisprudence yet to come. I wondered a bit about Structuring Techlaw’s emphasis on legal uncertainty. The return to positivism is an important one, but some scholars today place significant normative weight on humans and humanity in legal systems, precisely because of the lack of predictability, certainty, and consistency that human imaginations entail in practice. Some scholars argue that contestability of legal meaning, an attribute that is akin to uncertainty, is both essential to the rule of law and threatened by some novel technologies. Crootof and Ard hint that there is more in store on this point. Understanding humans in technological systems, or “loops,” is the promised subject matter of an “aspirational” manuscript. (P. 12 n.26.)
I can’t wait.
Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming 2020), available at SSRN
In recent years, states have begun accusing other states of cyberattacks with some frequency. Just in the past few months, Canada, the United Kingdom, and the United States have warned of Russian intelligence services targeting COVID-19 vaccine development, the United States issued an alert about North Korea robbing banks via remote access, and U.S. prosecutors indicted hackers linked to China’s Ministry of State Security for stealing intellectual property.
The flurry of cyberattack attributions raises questions about what effects (if any) they have and what effects the attributors intend them to have. In their forthcoming article “Beyond Naming and Shaming: Accusations and International Law in Cybersecurity,” Martha Finnemore and Duncan Hollis offer a nuanced set of answers focused, as the title suggests, on moving beyond the idea that the attributions are just intended to name and shame states.
Government officials have repeatedly said that public attributions of cyberattacks to other states are intended to name and shame the perpetrator states and to cause them to change their behavior. The problem is that this strategy hasn’t seemed to work very well, prompting criticism from academics. Finnemore and Hollis helpfully offer an explanation for why naming and shaming is more difficult in the cybersecurity sphere than in other areas of international law and international relations. They argue that existing literature on naming and shaming includes an implicit premise: that there is a preexisting norm against which compliance and deviation can be measured. (P. 27.) When there are existing norms or legal prohibitions, like the prohibitions on torture and genocide, accused states “do not contest [the] norms,” but “[i]nstead, . . . deny what the [accuser] says happened or offer a different interpretation or application of the norm than that proffered by the accuser.” (P. 27.) But in the cybersecurity realm, “the norms (and international law) governing online behavior are not always clear and well-entrenched,” particularly across different blocs of countries, and so enforcing norms via accusations is “tricky.” (P. 27.)
But that doesn’t mean cyberattack attributions lack value. Finnemore and Hollis contribute to a growing academic literature about other functions public attributions can serve. The most interesting of these is attributions’ potential constitutive role in international norms and international law. Finnemore and Hollis argue that accusations of state responsibility for a cyberattack can
serve as an opening bid, aimed at a particular community, indicating not just the accuser’s disapproval of the cited operation, but often, too, its proposal (perhaps implicit) that all such conduct should be barred, i.e., that there should be a norm against such conduct. Accusations may thus lay out the contours of ‘bad behavior’ along with an argument about why, exactly, the behavior is undesirable. Other actors may then respond to the accusation. They may accept some of it; they may accept all of it; they may accept it in some situations but not others; or, they may reject it entirely. It is these interactions between the accuser, the accused, and third party audiences that—over time—may result in the creation of a new norm (or its failure). (Pp. 14-15 (footnote omitted).)
The role of cyberattack attributions in setting the rules of the road in cyberspace need not stop with international norms. Rather, public attributions can also contribute to establishing international law. Finnemore and Hollis argue, “Today’s accusations may serve as early evidence of a ‘usage’—that is, a habitual practice followed without any sense of legal obligation,” but “[i]f such accusations persist and spread over time, states may come to assume that these accusations are evidence of opinio juris, delineating which acts are either appropriate or wrongful as a matter of international law.” (Pp. 16-17.)
Once one accepts the argument that public attributions play a role in creating international norms and law to govern state actions in cyberspace, important questions follow, including how such attributions should be made. I have argued that states should establish an international law rule requiring governments that engage in public attributions of cyberattacks to other states to provide sufficient evidence to enable crosschecking or corroboration of their attributions. Such a rule would help to ensure that attributions are accurate and credible and would thereby insulate the process of setting rules of the road for cyberspace from being skewed or tainted by accidentally or willfully false attributions that give an inaccurate picture of state practice and opinio juris. Other ongoing scholarly and policy debates center on determining the appropriate roles that governments, private companies, international entities, and academic and other experts should play in accusations against states.
One could quibble with parts of Finnemore and Hollis’s article, perhaps especially their argument for changing terminology. The authors acknowledge that “[s]tates and scholars” generally call the process of assigning responsibility for a cyberattack “attribution” (P. 8), but they argue instead for using “accusation” (P. 7), reducing “attribution” to a component of an accusation and limiting it to “the process of associating what happened with a particular actor or territory.” (P. 6.) Although it’s true that “attribution” can have different meanings (P. 8), Finnemore and Hollis are fighting an uphill battle given the entrenched use of “attribution” and a working practice of specifying which kind or aspect of attribution is at issue in a particular context. Finnemore and Hollis’s term “accusation” also presents its own difficulties. For example, they argue, “Accusations can occur without attribution (i.e., when accusers say ‘we do not know who did this, but it happened, and it was bad.’)” (P. 8.) But in common parlance, accusations require an object—who is accused? An “accusation” without an object doesn’t really accuse anyone or anything.
Whatever one terms the phenomenon of states assigning responsibility for carrying out cyberattacks, Finnemore and Hollis rightly flag its importance to establishing the international rules governing state behavior in cyberspace. Moving toward a more sophisticated understanding of the roles that accusations or attributions of cyberattacks can play is a welcome contribution to an emerging academic field and important area of international relations.
Cite as: Kristen Eichensehr, Cyberattacks, Accusations, and the Making of International Law (December 2, 2020) (reviewing Martha Finnemore and Duncan B. Hollis, Beyond Naming and Shaming: Accusations and International Law in Cybersecurity, Eur. J. Int'l L. (forthcoming 2020), available at SSRN), https://cyber.jotwell.com/cyberattacks-accusations-and-the-making-of-international-law/
What distinguishes data protection (that is, legitimate privacy law) from data protectionism (arguably a barrier to trade)? Whether a country can use its domestic privacy laws to either de jure or de facto require a company to keep citizens’ personal data within that country’s borders is a significant point of international contention right now, especially between the United States and the European Union. In July, the Court of Justice of the EU invalidated (again) the sui generis mechanism for cross-border personal data transfers between the European Union and the United States (the “Privacy Shield”). The Court’s “Schrems II” decision makes it all the more likely that the United States will attempt to revisit the matter through strategic free trade agreement negotiations—and makes Svetlana Yakovleva’s Privacy Protection(ism): The Latest Wave of Trade Constraints on Regulatory Autonomy all the more timely and important.
Yakovleva observes that in recent free trade agreement negotiations, including at the World Trade Organization (WTO), the United States has pushed to characterize restraints on cross-border data flows as a protectionist trade measure, while the European Union, by contrast, has largely advocated for national regulatory autonomy. The outcome of this conflict over purported “digital protectionism” will have practical ramifications for transnational companies that regularly deal in cross-border data flows. It will also have serious theoretical consequences for ongoing and familiar discussions of how transnational law might bridge—or override—deep domestic regulatory divides. Yakovleva nimbly weaves together a history of the term “protectionism,” Foucauldian discourse theory, and the minute details of recent free trade agreement negotiations to provide an authoritative account of what exactly is at stake. Her big contribution is to tell us all to watch our language: one person’s “digital protectionism” can be another’s “fundamental right.”
Yakovleva opens with a broad discussion of the history of the term “protectionism” as it has been used in free trade policy and law, noting the term’s changing meanings at different times and in different institutions. She starts here in order to make the central point that meanings are not static; they’re very much constructed, contested, and chosen. The notion of “free trade” was first developed in direct contrast to the once-dominant theory of mercantilism, a strict form of protectionism which counseled “restricting imports, promoting domestic industries, and maintaining self-sufficiency from other countries.” (P. 436.) By contrast, neoclassical free trade theory rested on the concept of comparative advantage: that barriers to trade inefficiently prevent countries from increasing domestic welfare by exchanging goods they can each more efficiently produce.
This history would appear to place protectionism strongly in opposition to fundamental principles of free trade. However, early understandings of protectionism were narrow, focusing on tariffs or quotas on imports, and closely associated with political nationalism. Yakovleva explains that when the General Agreement on Tariffs and Trade (GATT 1947) was signed in 1947, “protectionism” was already a contested term, with the United States blaming trade distortions for the Great Depression and Second World War, and the United Kingdom instead emphasizing “the boundaries that the international trade regime should not cross in relation to domestic policies affecting trade.” (P. 439.) The compromise was GATT 1947’s “embedded liberalism,” which according to Yakovleva made liberalization not a “goal in itself” but “a component of a broader societal goal of maintaining economic stability.” (P. 441.) Practically, this meant that only intentional protectionism qualified as protectionism under the GATT 1947 regime, and domestic regulations with a de facto impact on trade, but not motivated by protectionist intent, largely went unchallenged.
Starting, however, in the 1970s, “new protectionism” was understood to encompass a variety of non-tariff barriers to trade, including domestic policies aimed at quelling growing unemployment. Yakovleva explains that these were precisely the domestic policies that had been deemed legitimate under “embedded liberalism.” At the same time, developed countries, including the United States, began advancing a counter-narrative of “fair trade,” working towards a goal of using international trade law to harmonize a number of domestic regulatory frameworks and thus eliminate “unfair” advantages held by less-regulated developing countries.
By the time the WTO was established in 1995, neoliberal norms had largely (though not exclusively) prevailed. Yakovleva writes that “[t]he main goal of the international trading system… was no longer ‘embedded liberalism,’ but the continued, gradual liberalization of trade.” (P. 457.) The WTO dispute settlement system was increasingly used to evaluate domestic regulations (say, on health or the environment) that caused de facto discrimination against foreign goods. Instead of looking to the regulatory intent of a country, WTO adjudicators looked at the economic impact of a domestic regulation. They did so, too, through the neoliberal lens of the free-trade system, largely without looking to relevant human rights instruments or principles. Practically, Yakovleva claims, this broadened the scope of the term “protectionism,” and thus put all the analytical pressure on the GATT and GATS exceptions, under which the burden of proving that a regulation was not protectionist fell on the country whose regulations were challenged.
What, then, should we make of the more recent notion of “digital protectionism,” or its subset “data protectionism”? “Discourse matters and the discourse is changing,” Yakovleva writes. (P. 473.) Digital protectionism is now part of the vocabulary of free trade, used by lobbyists, negotiators, and academics. (Even though, as Chris Kuner has pointed out, some of the policies now being called protectionist have been in place since the 1970s.) The European Union and the United States in fact both use the terms “digital trade” and “digital protectionism” in policy documents and negotiations. But as Yakovleva convincingly argues, the understanding of and values behind these terms differ vastly, as do the provisions on cross-border data flows advanced by each party in free trade negotiations. “Data protectionism” is not a stable term, but a hotly contested one.
Contrasts between the U.S. and EU approaches to data privacy abound. What Yakovleva does here is clearly link the relevant distinctions to current trade discourse. She explains that one way of framing the regulation of personal data is to look at such data as an economic asset, where any legal “protection is a precondition of data-intensive trade.” (P. 510.) The alternative is what Yakovleva calls the “moral value approach,” in which data protection law is directed at protecting fundamental human rights. (P. 510.) The EU has in fact historically embraced both frameworks, with an explicit goal of its EU-wide data protection instruments being to free up digital trade between Member States. However, Yakovleva notes that in the EU, the moral value approach will “always prevail” when the two conceptions are in conflict, because of the role the CJEU plays in interpreting EU law in light of the rights to privacy and data protection established in the EU Charter of Fundamental Rights. (P. 506.) The United States, by contrast, emphasizes only the former in trade negotiations, ignoring the possibility that privacy law might not just be economically efficient but can also implicate human rights and flourishing.
This disagreement in discourse has consequences for trade policy. Yakovleva identifies important differences in the current policy approaches to “data protectionism” taken by the U.S. and the EU in trade negotiations—differences every privacy law scholar or policy wonk should learn, if they haven’t already. (For more, see Mira Burri’s recent work.)
U.S. proposals in recent bilateral free trade agreements and at the WTO create a default that cross-border restrictions on the flow of personal data will not be allowed unless they are deemed objectively necessary—a test that, Yakovleva points out, is often failed in the GATS context. By contrast, the EU enumerates specific instances of inappropriate cross-border restrictions—conveniently, none of which are restrictions the EU itself places on data flows. In its proposed exception language, the EU takes an approach more similar to the national security exception in WTO agreements, deferring to a country’s own subjective assessment of what is necessary. (P. 496.) U.S. proposals characterize data privacy laws as an aspect of economic regulation, needed in order to encourage consumers to disclose more data. EU proposals, by contrast, explicitly refer to human rights.
If there is anything surprising about this, it is that there is some agreement that at least some privacy protection is necessary for trade, rather than inherently protectionist. The key question, as Yakovleva notes, is not whether there should be domestic data privacy law, but what level of protection is legitimate. (P. 515.) She concludes by calling for “a new multidisciplinary discourse… in order to allow each trading party to strike the right balance between globalization… democratic politics, and domestic autonomy to pursue domestic values such as fundamental rights to privacy and data protection.” (P. 513.)
This is an extraordinarily ambitious—and long—article. I remain impressed by its intellectual heft, and the ease with which Yakovleva moves up into discourse theory and then back into the weeds of free trade agreement provisions. Potential readers should also know that although the article clocks in at 104 pages, much of the length comes from footnotes, evidencing Yakovleva’s impressively thorough research. I do wish there had been more engagement with related, parallel conversations about the role of trade in international intellectual property law, and the relationship there between human rights and the trade regime—but for that to have been included, this would have had to become a book.
Yakovleva’s masterful article will strike familiar notes for technology law scholars. It resembles recurring conversations about the internet and jurisdiction, differing free speech norms around the world, and the globalization of intellectual property law, including digital copyright law. How does one address gaps between different domestic regulatory goals and regimes, given that the internet (and its users’ data) can be everywhere instantaneously? While the notion of addressing transatlantic divides in privacy laws through international trade law is not new (the late, wonderful Joel Reidenberg called for an international privacy treaty housed at the WTO back in 1999), Yakovleva brings clear policy expertise and critical insights to the current conversation. These insights will inform not just privacy law scholars, but those tracking international negotiating strategies and framing games in multiple areas of technology law.