The Journal of Things We Like (Lots)

Confronting Surveillance

Amanda Levendowski, Resisting Face Surveillance with Copyright Law, 100 N.C. L. Rev. __ (forthcoming 2022), available at SSRN.

One prevailing feature of technological development is that it is not sui generis. Rather, new technologies often mirror or reflect societal anxieties and prejudices. This is true for surveillance technologies, including those used for facial recognition. Although facial recognition might be positioned as a type of convincing evidence useful for identifying an individual, the fact remains that racial and gender biases can limit its efficacy. Scholars such as Timnit Gebru and Joy Buolamwini have shown through empirical evidence that facial recognition systems, which are often trained on limited data, exhibit strikingly biased error rates. The two AI researchers reviewed the performance of facial analysis algorithms across four “intersectional subgroups” of males or females featuring lighter or darker skin. They made the startling discoveries that the algorithms performed better when determining the gender of men than of women, and that darker faces were the most likely to be misidentified.
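To make that kind of audit concrete, its core is a disaggregated accuracy calculation: sort each prediction into one of the four intersectional subgroups and compare accuracy across them. The sketch below is a purely hypothetical illustration (the records and numbers are invented, not drawn from the Gender Shades study) of how such subgroup disparities become visible.

```python
# Hypothetical sketch: disaggregating gender-classification accuracy
# across four intersectional subgroups (skin tone x gender).
from collections import defaultdict

# Each record: (skin_tone, actual_gender, predicted_gender) -- invented data.
records = [
    ("lighter", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
    ("darker", "male", "male"),
    ("darker", "female", "male"),   # misclassification
    ("darker", "female", "male"),   # misclassification
]

totals, correct = defaultdict(int), defaultdict(int)
for skin, actual, predicted in records:
    group = (skin, actual)
    totals[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group[0]} {group[1]}: {correct[group]}/{totals[group]} correct ({accuracy:.0%})")
```

Even this toy example shows why aggregate accuracy can mask the pattern the researchers documented: a system can look reliable overall while failing almost entirely for one subgroup.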

In her path-breaking article, Resisting Face Surveillance with Copyright Law, Professor Amanda Levendowski identifies these harms and others, and advocates for the proactive use of copyright infringement suits to curb the use of photographs as part of automated facial surveillance systems. First, Levendowski illustrates why the greater misidentification of darker faces by algorithmic systems is a problem of great concern. Levendowski shares the story of Robert Julian-Borchak Williams, who was placed under arrest in front of his home and in view of his family. A surveillance photograph had been used to algorithmically identify him. However, once the photograph was compared to Mr. Williams himself, it was obvious that he had been misidentified. The only explanation Mr. Williams got was, “The computer must have gotten it wrong.” The sad reality is that Williams’ case is not unique; there are many more stories of Black men being wrongfully arrested based on misidentification by AI systems. Given the glacial creep of federal legislation to regulate face surveillance, Levendowski advocates for turning to the copyright tools she believes we already have.

Facial recognition systems have proliferated in the past few years. For example, in 2020, an individual taking the Bar exam in New York related how he was directed to “sit directly in front of a lighting source such as a lamp” so the face recognition software could recognize him as present. I have written about and against the troubling use of facial recognition by automated hiring programs. Evan Selinger and Woodrow Hartzog have written about the extensive use of facial surveillance in immigration and law enforcement and have called for a total ban. Although some jurisdictions in the United States have heeded the call to ban the use of facial recognition systems by law enforcement, many others have not, and there is currently no federal legislation banning or even regulating the use of facial recognition systems.

Resisting Face Surveillance with Copyright Law is innovative in its approach of deploying copyright law as a sword against the use of automated facial recognition. As Levendowski argues, “Face Surveillance is animated by deep-rooted demographic and deployment biases that endanger marginalized communities and threaten the privacy of all.” Deploying copyright litigation to stem the use of facial recognition holds great potential for success because, as Levendowski notes, corporations like Clearview AI are trawling selfies and profile pictures online to compose a gargantuan face-recognition database for law enforcement and other purposes. Clearview AI, Levendowski reports, has copied about three billion photographs without the knowledge or consent of the copyright holders or even the authorization of the social media companies that host those photographs. Levendowski’s article is one answer to what can be done with the laws we have now to curtail the use of face surveillance.

Levendowski notes that one common defense of scraping — to invoke the First Amendment — would not be viable against copyright claims. Levendowski recounts the Court’s statement in Eldred v. Ashcroft that “copyright law contains built-in First Amendment accommodations” which “strike a definitional balance between the First Amendment and copyright law by permitting free communication of facts while still protecting an author’s expression.” Thus, Levendowski concludes, copyright infringement lawsuits could serve as “a significant deterrent to face surveillance,” particularly given the hair-raising statutory damages of up to $150,000 for each work willfully infringed.

However, as Levendowski notes, there are several hurdles to the successful use of a copyright infringement lawsuit against face surveillance. For one, there is the affirmative defense of fair use. Levendowski concedes that the Google v. Oracle decision in 2021, which concluded that Google made a fair use when it copied interface definitions from Java for use in Android, has changed the fair use landscape and may make it less likely for copyright infringement suits against face surveillance systems to prevail. Yet, as Levendowski explains, the use of profile pictures may still fall outside of fair use protections because they are more likely to fail the four-factor test. She argues that unlike search engines, which fairly “use” works in order to point the public to them, facial recognition algorithms copy faces in order to identify faces. That is, the “heart” of the copied work — a person’s face — is the part that is copied by the face surveillance systems, and the use is less transformative than a search engine’s use. Levendowski also draws on recent case law to suggest that courts will be less likely to find the for-profit subscription model deployed by many facial recognition companies to be fair use, compared to the free-to-the-public model used by most search engines.

Levendowski deploys Google v. Oracle and other key fair use cases to assess each fair use factor. First, she notes that surveillance companies are not using the pictures for a new purpose; their reason for using the photographs is the same as that of profile pictures: particularized identification. Yet, Levendowski argues, even absent a new purpose, such use may still be somewhat transformative, favoring face surveillance companies. She then concludes that the nature of the work is creative and that the use features the photographs’ faces, the “heart” of profile pictures, creating unfavorable outcomes for these companies under the middle two factors. Analyzing the final factor, Levendowski concludes that using these photographs harms the unique licensing market for profile pictures, and that this dictates a ruling against fair use.

All in all, although some might not agree with her fair use analysis, I find Levendowski’s proposal to be an ingenious approach to lawyering in the digital age. If I have any reservation, it is whether the article might hand face surveillance corporations a new tactic: purchasing or licensing the copyrights in the photographs they use. Such a tactic would be facilitated by social media or other platforms that require users to give up the copyrights to any photos they post. This suggests that more regulation may yet be needed to address face surveillance. But in the meantime, Levendowski’s lawyering represents a creative approach to the problem of face surveillance.

Cite as: Ifeoma Ajunwa, Confronting Surveillance, JOTWELL (May 12, 2022) (reviewing Amanda Levendowski, Resisting Face Surveillance with Copyright Law, 100 N.C. L. Rev. __ (forthcoming 2022), available at SSRN), https://cyber.jotwell.com/confronting-surveillance/.

The Disconnect Between ‘Upstream’ Automation and Legal Protection Against Automated Decision Making

Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021).

In their brief and astute article Is That Your Final Decision? Multi-stage Profiling, Selective Effects, and Article 22 of the GDPR, Reuben Binns and Michael Veale discuss the thorny issues raised by the EU GDPR’s prohibition of impactful automated decisions. The seemingly Delphic article 22.1 of the GDPR provides data subjects with a right not to be subject to solely automated decisions with legal effect or similarly significant effect. As the authors indicate, similar default prohibitions (of algorithmic decision-making) can be found in many other jurisdictions, raising similar concerns. The article’s relevance for data protection law lies mainly in its incisive discussion of how multi-level decision-making fares under such prohibitions and of the ambiguities that affect the law’s effectiveness. The authors convincingly argue that there is a disconnect between the potential impact of ‘upstream’ automation on fundamental rights and freedoms and the scope of article 22. While doing so, they lay the groundwork for a more future-proof legal framework regarding automated decision-making and decision-support.

The European Data Protection Board (EDPB), which advises on the interpretation of the GDPR, has determined that the ‘right not to be subject to’ impactful automated decisions must be understood as a default prohibition that does not depend on data subjects invoking their right. Data controllers (those who determine the purpose and means of the processing of personal data) must abide by the prohibition unless one of three exceptions applies. These concern (1) the necessity to engage such decision-making for ‘entering into, or performance of, a contract between the data subject and a data controller’, (2) authorization by ‘Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’ or (3) ‘explicit consent’ for the relevant automated decision-making.

Binns and Veale remind us that irrespective of whether automated decisions fall within the scope of article 22, insofar as they entail the processing of personal data, the GDPR’s data protection principles, transparency obligations and the requirement of a legal basis will apply. However, automated decisions are often made based on patterns or profiles that do not constitute personal data, precisely because they are meant to apply to a number of individuals who share certain (often behavioral) characteristics. Article 22 seeks to address the gap between data protection and the application of non-personal profiles, both where such profiles have been mined from other people’s personal data and where they are applied to individuals singled out because they ‘fit’ a statistical pattern that in itself is not personal data.

Once a decision is qualified as an article 22 decision, a series of dedicated safeguards is put in place, demanding human intervention, some form of explanation, and an even more stringent prohibition on decisions based on article 9 “sensitive” data (‘revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation’).

The authors are interested in the salient question of how different layers of automation create a disconnect between, on the one hand, the impact on the fundamental rights and freedoms of those targeted and, on the other hand, the protection offered by article 22. For instance, algorithmically inferred dynamic pricing (or ‘willingness to pay’) may be used to inform human decisions on insurance, housing, credit and recruitment. However, it escapes the GDPR’s protection against automated decisions because humans make the final decision. Considering ‘automation bias’, the presorting that takes place in the largely invisible backend systems may disenfranchise those targeted from the kind of human judgement and effective contestability that article 22 calls for. (See recently Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI.) The ensuing gap in legal protection is key to the Schufa case now pending before the Court of Justice of the European Union, which raises the question of whether a credit risk score decided by the scoring algorithm of a credit information agency and used by an insurance company in itself qualifies as an automated decision (case C-634/21).

The authors distinguish five distinct (although, in practice, likely interrelated) challenges and complications for the scope of article 22. The first (1) is that adding human input at the level of all data subjects, which affects whether article 22 applies, can still leave a subset of data subjects unprotected by that human input. The second (2) is the GDPR’s lack of clarity on ‘where to locate the decision itself.’ The third challenge (3) is whether the prohibition concerns potential or only ‘realised’ impact. The fourth (4) is the likelihood that largely invisible automated backend systems have a major impact irrespective of the human input that is available on the frontend. And the fifth (5) and perhaps most significant challenge is the GDPR’s focus on only the final decision in a chain of relevant decisions, which ignores the impact of prior automated decisions on the choice architecture of those making the final decision. This is the “multi-stage” profiling the authors reference in their title.

The abstruse wording of article 22, probably due to compromises made during the legislative process, may inadvertently reduce or obliterate what the European Court of Human Rights would call the ‘practical and effective’ protection that article 22 nevertheless aims to provide. The merit of the points made by Binns and Veale is their resolute escape from the usual distractions that turn discussions of article 22 into a rabbit hole of fruitless speculation, for instance on whether there is a right to explanation and what this could mean in the case of opaque algorithmic decision-making, and on whether explanations are due before decisions are made or only after. As they explain, all this will depend on the circumstances and should be decided in light of the kind of protection the GDPR aims to provide (notably enhancing both control over one’s personal data and accountability of data controllers).

Binns and Veale’s precise and incisive assessment of the complexities of upstream automation and its potential impact on those targeted should be taken into account by the upcoming legislative frameworks for AI and by courts and regulators deciding relevant cases. In the US we can think of the Federal Trade Commission’s mandate and the National Artificial Intelligence Initiative Act of 2020. Binns and Veale remind us of the gaps that will occur in practical and effective legal protection if AI legislation restricts itself to the behavior of data-driven systems instead of incorporating decisions of deterministic decision-support systems, which will be the case if AI is defined such that the latter systems fall outside the scope of AI legislation. Both Veale and Binns are prolific writers; anyone interested in the underlying rationale of EU data protection law and the relevant technical background should keep a keen eye on their output.

Cite as: Mireille Hildebrandt, The Disconnect Between ‘Upstream’ Automation and Legal Protection Against Automated Decision Making, JOTWELL (April 7, 2022) (reviewing Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021)), https://cyber.jotwell.com/the-disconnect-between-upstream-automation-and-legal-protection-against-automated-decision-making/.

Shifting the Content Moderation Paradigm

evelyn douek, Content Moderation as Administration (Jan. 12, 2022), available on SSRN.

As law-and-technology scholar evelyn douek explains in her eye-opening, scholarly, and well-written Content Moderation as Administration, the conventional account of content moderation is wrong and its policy implications are off the mark. douek argues that we should toss aside the assumption that content moderation is a series of individual decisions made by people and computers acting as judges. The better way to think about it is as a process of ex ante rights administration and institutional design. Instead of learning lessons from judicial process, we need to learn from administrative law.

A system of immeasurable scale purportedly designed to reflect liberal First Amendment principles, content moderation now includes algorithms and artificial intelligence, armies of third-party moderators from the Global South paid very little to make decisions in seconds, and a lot of money for Silicon Valley executives. Of course, this has led to repeated and repeatedly horrible results. Content moderation rules and practices facilitated genocide, helped swing elections toward fascists, and routinely and systematically censored queer and nonnormative sexual content. Right wing politicians got in on the act, as well, claiming designed-in and as-applied anti-conservative bias when the evidence proved the opposite. Facebook responded by creating an oversight board with a lot of fanfare, but very little power.

Through it all, the vision of content moderation has remained roughly the same: ex ante automated filtering and ex post judicialish review of whether user-generated content violated platform policies. If this “first wave” of content moderation scholarship is right, then presumably, the best way to protect speech and social media users is to demand procedural due processish protections: transparency and rights to appeal. And that’s precisely what those members of Congress who are legitimately concerned about content moderation have proposed.

The standard picture of content moderation is like an old Roman emperor whose thumbs up or thumbs down decides the fate of a gladiator: some all-powerful person or all-powerful thing is deciding whether a post stays up or comes down. Content moderation, then, happens post-by-post.

douek explains that almost none of that is helpful or correct. As many scholars have argued, content moderation involves an assemblage of people and things. Platforms do more than just decide to keep content up or take it down. And, most importantly, these misguided assumptions contribute to misguided policy.

Case-by-case ex post review misses systemic failures. It also provides inadequate remedies: a moderator could take something down or put something back up, leaving the problems of training and institutional design untouched. And the cycle will continue as long as the structural problem remains. Case-by-case review also lends itself to privacy theatre like the Facebook Oversight Board. By the nature of its design, it may eventually address a few takedown decisions, but has little to no impact on how the whole system works.

In place of this misguided vision, douek proposes a “second wave” of content moderation scholarship, discourse, and solutions. douek deftly argues that content moderation is a product of ex ante system design. It is one result of a larger institutional structure that frames the flow of all sorts of information. Content moderation is also the product of multiple corporate goals, not just the ostensible desire to reflect and perpetuate a liberal vision of free speech. Policy reform should reflect that.

douek suggests that one way to do that is to learn from the literature in collaborative governance, an approach to administrative regulation of corporations that involves public and private entities working together to achieve mutual goals. It benefits from private expertise while using a wide toolkit—audits, impact assessments, transparency reports, ongoing monitoring, and internal organizational structures, among others—cabining private discretionary decision-making by making firms accountable to the public and to regulatory agencies. Proponents see the multi-stakeholder model of governance as a more effective way of governing fast-changing and technologically complex systems, an argument made in profound and powerful detail by Margot Kaminski.

Collaborative governance is meant to help regulators supervise vast organizational systems ex ante before they do something wrong. Its ex ante approach and process toolkit are supposed to instantiate public values into every phase of organizational function. In that way, it is supposed to influence everyone, create systems up front, and foster the development of organizations more attuned to popular needs and values.

douek makes a compelling argument that collaborative governance is the better way to approach content moderation, both conceptually and as a matter of policy. Instead of an ex post appeal process, the collaborative governance approach means integrating officers whose entire jobs are to advocate for fair content moderation. It means giving those employees the safety and separation they need from departments with contrary motivations in order to do their work. It means transparency about underlying data and systemic audits of how the system works.

What’s so compelling about Content Moderation as Administration is that it changes the paradigm and pushes us to respond. douek has described a new and more compelling way of looking at content moderation. We all have to learn from their work, especially those of us writing or interested in writing about content moderation, collaborative governance, or both. The challenge, of course, will be guarding against managerialism and performative theatre in the content moderation space. Compliance models are at best risky when not subordinated to the rule of law and, in particular, a vision of the rule of law attuned to the unique pressures of informational capitalism. But those questions come next. Content Moderation as Administration does an outstanding job of challenging the conventional account that has been at the core of content moderation scholarship for so long.

Cite as: Ari Waldman, Shifting the Content Moderation Paradigm, JOTWELL (March 1, 2022) (reviewing evelyn douek, Content Moderation as Administration (Jan. 12, 2022), available on SSRN), https://cyber.jotwell.com/shifting-the-content-moderation-paradigm/.

Debunking the Myth that Police Body Cams are a Civil Rights Tool

Body-worn cameras are proliferating with astounding speed in police departments throughout the country. Depending on the conditions under which cameras are used, the spread of this technology has been defended by certain civil liberties organizations as a means of holding police accountable for excessive force used disproportionately against Black, Brown, and queer people. In his new book, Police Visibility, Professor Bryce Clayton Newell musters empirical fieldwork on police deployment of body cameras to slow the rush to implement this potentially pernicious law enforcement surveillance tool.

This book is a careful and in-depth study by a leading scholar of police technology. Specifically, Newell questions whether the prescription (police cameras) will meaningfully treat the illness (structural racism and police violence). As he puts it, “[i]n the absence of broader police reforms, the cameras may offer a Band-Aid … but they do not promise a cure.” (P. 40.) As Newell notes, body-worn cameras “serve the coercive aims of the state” and increase police power because the cameras are evidentiary tools controlled by the police that can be used to surveil and incarcerate more people.

According to Newell, police body cameras may lend police false legitimacy, offering a modicum of visibility without real transparency given that police officers and departments may in many instances limit access to and dissemination of the videos. More broadly, any single instance of police officer accountability may not lead to broader structural reforms. To that end, Newell notes the widespread (though not universal) approval of such cameras by the rank and file police officers he surveyed—one indicator that police cameras may not be the solution civil rights advocates hope.

All told, body cameras may not be a reform at all, but instead could aggravate our broken and racist carceral system and the surveillance that enables it. (One quibble: borrowing the perspective of those advocating for police cameras, Newell refers to surveillance of civilians as “collateral,” suggesting that the police are primary targets of the cameras’ lens. Centering the surveillance of civilians as the primary target would have been more accurate and rhetorically powerful.)

In light of these shortcomings, Newell offers a few suggestions for reform. As a background policy norm militating against implementation of police cameras in the first instance, he emphasizes that bystander videos of police conduct are a preferable form of sousveillance against the police because police departments do not serve as gatekeepers of who can and cannot access the videos and under what conditions. This is critically important, though not without drawbacks of its own as a means of police regulation. I’ve argued that such citizen recordings are themselves not without meaningful privacy harms. Safiya Noble has powerfully explained that they may contribute to the commodification of black death through profiteering by social media companies when images of police violence against people of color are viewed online.

If police body cameras are deployed, Newell believes that, to counteract police power over how the cameras are used, departments should not be able to institute body cameras through unregulated procurement policies prior to public deliberation and consent. And to guide that deliberation, Newell offers a few overarching principles to help better ensure that police body cameras are a tool of antipower preventing further state domination: (1) independent oversight (not just for camera policies, but for officer conduct more broadly), (2) a right to access for anyone captured on film, (3) redaction/blurring of all identifying information of both victims and bystanders, and (4) default restrictions on accessing video of people’s private spaces.

These are trenchant suggestions for regulating police body cameras in that they try to maximize the extent to which cameras hold police accountable while minimizing (albeit not eliminating) the extent to which they can be used to invade others’ privacy. However, Newell’s recommendations do less work in preventing the cameras from serving as an evidentiary surveillance tool.

Compelling arguments can be made that attempting to bureaucratize the regulation of surveillance technologies is more cumbersome and less effective than outright banning them (as others have rightly argued in similar contexts such as police use of facial recognition technology). However, Newell’s informed recommendations move the policy conversation in a productive direction. They serve as an important bulwark against the “surveil now, ask questions later” ethos undergirding much of the body camera policies currently in place.

Cite as: Scott Skinner-Thompson, Debunking the Myth that Police Body Cams are a Civil Rights Tool, JOTWELL (January 28, 2022) (reviewing Bryce Clayton Newell, Police Visibility: Privacy, Surveillance, and the False Promise of Body-Worn Cameras (2021)), https://cyber.jotwell.com/debunking-the-myth-that-police-body-cams-are-civil-rights-tool/.

How to Regulate Harmful Inferences

Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN.

A decade ago, Charles Duhigg wrote a story for the New York Times that still resonates today, revealing that Target could predict its customers’ pregnancies and delivery dates from changes in their shopping habits. This and similar revelations pose a difficult question: how do we protect vulnerable people from the power of inferences? At the time, I wondered aloud whether we ought to regulate harmful data-driven inferences and how we would do it, which sparked characteristically overheated responses from the libertarian punditry.

A decade on, the ceaseless progress of machine learning (ML) has exacerbated these problems, as advances in the state-of-the-art of prediction make Target’s old algorithm seem like child’s play. ML techniques have become more accessible and more powerful, fueled by advances in algorithms, improvements in hardware, and the collection and distribution of massive datasets chronicling aspects of people’s lives we have never before been able to scrutinize or study. Today, obscure startups can build powerful ML models to predict the behavior and reveal the secrets of millions of people.

This important draft by Alicia Solow-Niederman argues that information privacy law is unequipped to deal with the increasing and sometimes-harmful power of ML-fueled inference. The laws and regulations on the books, with their focus on user control and notice-and-choice, say very little about the harmful inferences of companies like Clearview AI, which notoriously scraped millions of photos from Facebook, LinkedIn, and Venmo, using them as ML training data to build a powerful facial-recognition service it sells exclusively to law enforcement agencies. Unlike Target, which had a contractual relationship with its customers and gathered the data for its algorithm itself, Clearview AI had no connection to the individuals it identified, suggesting that protections cannot lie in laws focused primarily on user consent and control.

The article’s first useful contribution is its summary of recent advances in ML, how they raise the possibility of harmful inferences, and how they challenge outdated privacy laws built upon notice-and-choice. This makes Part II of the article an accessible primer on a decade’s worth of ML advances for the non-technical privacy expert.

Solow-Niederman’s most important move, in Part IV of the article, is to ask us to focus on actors beyond the dyad of provider and user. Like Salome Viljoen’s magisterial work on Democratic Data (previously reviewed in these pages), Solow-Niederman deploys geometry. Where Viljoen added the horizontal dimension of people outside the vertical user/service relationship, Solow-Niederman asks us to move beyond the “linear” to the “triangular.” She urges us to look outside the GDPR-style relationship between data subject and data controller, to consider the actions of so-called “information processors.” These are companies like Clearview that amass massive data sets about millions of individuals to train machine learning models to infer the secrets and predict the habits not just of those people but also of others. We cannot protect privacy, Solow-Niederman argues, unless we develop new governance approaches for these actors.

This move — relational and geometric — leads her to focus on actors and relationships that get short shrift in other work. If we worry about the power of inference to harm groups and individuals, we need to scrutinize that which gives power to inference, she argues. Solow-Niederman focuses, for example, on how information processors amass “compute”: the computer-processing infrastructure needed to harness massive data sets. She provocatively suggests that regulators might cast extra scrutiny on mergers and acquisitions that lead companies to increase compute power, citing for inspiration the work of now-FTC-Chair Lina Khan, who has argued for similar shifts in antitrust law.

The triangular view also focuses attention on how companies like Clearview obtain data. Other commentators have been loath to focus on Clearview’s scraping as the source of the problem, because many tend to be wary of aggressive anti-scraping restrictions, such as expansive interpretations of the Computer Fraud and Abuse Act (CFAA). Solow-Niederman suggests, contrary to the conventional wisdom, that the CFAA could have been useful in thwarting Clearview AI, had Facebook detected the massive scraping operation, asserted its Terms of Service, and sued under the CFAA. She even suggests FTC action against companies that purport to prohibit scraping yet fail to detect or stop scrapers.

These are two genuinely novel, even counter-intuitive, prescriptions that flow directly from Solow-Niederman’s triangular intervention. They suggest the power of the approach, and we would be well-advised to see how it might lead us to other prescriptions we might be missing due to our linear mindsets.

To be clear, as I learned a decade ago, protecting people from the power of inference will raise difficult and important questions about the thin line between intellectual exploration and harm production. Inference can be harm, Solow-Niederman suggests, but she acknowledges that inference can also be science. Preventing the former while permitting the latter is a challenging undertaking, and this article defers to later work some of the difficult questions this differentiation will raise. But by focusing attention and energy on the ever-growing power of ML inference, by compellingly exploring how conventional information privacy law and scholarship cannot rise to the challenge of these questions, and by suggesting new means for considering and addressing inferential harm, Solow-Niederman makes an important and overdue contribution.

Cite as: Paul Ohm, How to Regulate Harmful Inferences, JOTWELL (December 22, 2021) (reviewing Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN), https://cyber.jotwell.com/how-to-regulate-harmful-inferences/.

The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K.

Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021).

The tension between the forces of nationalism and globalism has reached its peak with the United Kingdom’s decision to break with the European Union. This dramatic move continues to impact countless economic sectors and, more importantly, the lives of many citizens. Yet all is calm on the data protection front. The U.K. has decided to continue applying the E.U.’s strict GDPR. In this timely and intriguing article, Paul Schwartz strives to explain why this happened, as well as to predict what’s next for data protection and the British Isles.

GDPR is a four-letter word. Its strict rules and heavy fines have changed the world of data protection forever. Ninety-nine articles, one hundred and seventy-three recitals, thousands of pages of commentary, and the many millions of dollars spent preparing for it only tell us part of the story. Now that the U.K. can escape the grasp of this vast and overarching regulatory framework, why hasn’t it “checked out”? Rather, just a few days prior to Brexit, the U.K. adopted a local law which is almost identical to the GDPR. This outcome is especially surprising to me personally, as I have argued that the GDPR substantially encumbers innovation in the age of big data (although it is quite possible I was wrong).

The simple answer to the GDPR’s persistence in the U.K. relates to the business importance of international data transfers from the E.U. For such transfers to continue unfettered, the U.K. must maintain laws that are “adequate.” This is because, post-Brexit, the U.K. is rendered a “third country” in terms of data transfers for all E.U. nations. (P. 128.) “Adequacy,” according to current E.U. jurisprudence, requires a legal regime of “essential equivalence” to that of the E.U. Without such “equivalent” laws, data transfers to the U.K. would be forbidden (or at least rendered very complicated) and economic loss in multiple industries would follow.

But this reason is unsatisfactory. The decision to maintain the GDPR seems to run counter to the explicit political agenda of the U.K.’s ruling Conservative party, which constantly promised to “take back control.” Schwartz even quotes the U.K. Prime Minister Boris Johnson stating (and possibly making an intentional reference to this journal): “We have taken back control of laws and our destiny. We have taken back control of every jot and tittle of our regulation” (emphasis added – T.Z.). (P. 145.) Why spare the many jots making up the GDPR? After all, the U.K. might be able to achieve adequacy without carbon copying the GDPR; several countries currently holding an adequacy status have laws that substantially vary from the E.U.’s harsh regime.

To provide a response to this intriguing legal and political question, Paul Schwartz develops a sophisticated set of models. These models are compared to the (fifth) “Brussels Effect” paradigm – a model Anu Bradford maps out in her recent book. Bradford explains how nations worldwide are both de jure and de facto swayed to accept the E.U.’s influence, thus explaining why the U.K. will hold on to the GDPR. In addition to the Brussels Effect, Schwartz explains that the GDPR might have been retained in the U.K. due to (1) a change in the U.K.’s preference toward accepting the E.U.’s data protection norms, as reflected in the GDPR. This could be manifested either in U.K. public opinion or in the preferences of the legal system (which reflects the preferences of the elite). Schwartz develops this model on the basis of the work of his colleague Bob Cooter, which focuses on individual preferences. Alternatively, (2) the U.K.’s data protection preferences were always aligned with those of the E.U.; (3) the U.K. changed its values (rather than preferences) to align with those of the E.U. through a process of persuasion or acculturation (P. 117); or (4) the easy accessibility of a legal transplant (the E.U. data protection regime) led the U.K. to opt for this simple and cheap option. In the article’s final segment, Schwartz uses these five models to explore whether the U.K. will remain aligned with the E.U.’s data protection regime. The answer will depend on which of the five models proves most dominant in the years to come.

Beyond Schwartz’s models, the U.K.’s decision regarding the GDPR is unique in that it was somewhat passive; or, as Schwartz notes, a decision not to reject, or “un-transfer,” E.U. data protection law. It is a decision to maintain stability and sidestep the high costs associated with changing the law. (P. 137.) In other words, the U.K. adopted the GDPR when it was part of the E.U. and is now “stuck” with this “sticky” default. Switching a default is far more difficult than accepting an external legal regime. This, in fact, was a theme Schwartz explored almost 20 years ago when considering the privacy rules of the GLB Act. Indeed, this situation is so unique that unless another member state breaks from the EU, we will probably not witness a similar dynamic involving such migration of data protection norms. As opposed to the “Brussels Effect,” which was influenced by the earlier “California Effect,” the situation at hand might feature a “Hotel California” Effect – even though the U.K. wants to check out of this aggressive regulatory framework, it is finding that it “can never leave,” as its bureaucracy has grown accustomed to it.

Therefore, the GDPR-Brexit dynamic is a unique example of the “Brussels Effect.” Yet as Schwartz has shown in another important article discussing data protection and the “Brussels Effect,” there are many such unique examples. In his other work, Schwartz explained that the U.S.’s adoption of the (now defunct) “Privacy Shield” and the E.U.-Japan mutual adequacy agreement did not fit a “cookie cutter” paradigm of E.U. influence. All these examples demonstrate that while Bradford’s description of the “Brussels Effect” is appealing (might I say, brilliant) in its simplicity and elegance, reality is often more complex. Thus, the Brussels Effect is merely one of several explanations for the GDPR’s growing influence.

Schwartz’s taxonomy will prove helpful in understanding what happens next in the U.K. Just recently (on August 26, 2021), the U.K. announced its intent to promote data adequacy partnerships with several nations, including the United States. Specifically, regarding the U.S., the relevant press release noted the U.K.’s disappointment with the Schrems II ruling and the importance of facilitating seamless data transfers to the U.S. It further stated that the U.K. is free to enable such transfers “now it has left the E.U.”

Should these plans move forward (they currently are in their early stages), they would create substantial (though possibly workable) challenges for the U.K.’s “adequacy” status. Such developments possibly indicate that the U.K. did not move to adopt E.U. privacy norms, or even cave to the economic pressures of commercial entities. Rather, it was the ease of remaining within a familiar scheme that led the U.K. to stick with the GDPR, and not check out of this notorious hotel. Yet perhaps this final assertion is too superficial. Time will tell whether Schwartz’s nuanced analysis of changing preferences, Bradford’s hypothesis regarding global influence, or other models best predict and explain what comes next for the U.K. and the GDPR.

Cite as: Tal Zarsky, The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K., JOTWELL (November 23, 2021) (reviewing Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021)), https://cyber.jotwell.com/the-hotel-california-effect-the-future-of-e-u-data-protection-influence-in-the-u-k/.

The Law of AI

Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021).

The question of whether new technology requires new law is central to the field of law and technology. From Frank Easterbrook’s “law of the horse” to Ryan Calo’s law of robotics, scholars have debated the what, why, and how of technological, social, and legal co-development and construction. Given how rarely lawmakers create new legal regimes around a particular technology, the EU’s proposed “AI Act” (Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts) should put tech-law scholars on high alert. Leaked early this spring and officially released in April 2021, the AI Act aims to establish a comprehensive European approach to AI risk-management and compliance, including bans on some AI systems.

In Demystifying the Draft EU Artificial Intelligence Act, Michael Veale and Frederik Zuiderveen Borgesius provide a helpful and evenhanded entrée into this “world-first attempt at horizontal regulation of AI systems.” On the one hand, they admire the Act’s “sensible” aspects, including its risk-based approach, prohibitions of certain systems, and attempts at establishing public transparency. On the other, they note its “severe weaknesses,” including its reliance on “1980s product safety regulation” and “standardisation bodies with no fundamental rights experience.” For U.S. (and EU!) readers looking for a thoughtful overview and contextualization of a complex and somewhat inscrutable new legal system, this Article brings much to the table at a relatively concise length.

As an initial matter, it’s important to understand that the Draft AI Act is just the beginning of the European legislative process. Much can still change. And the Act must be understood in legal context: it is entwined with other EU Regulations (such as the GDPR), Directives (such as the Law Enforcement Directive and Unfair Commercial Practices Directive), and AI-specific initiatives in progress (such as the draft Data Governance Act and forthcoming product liability revisions).

The AI Act itself focuses on risk management and compliance, looking at threats to physical safety and fundamental rights. At its core, the Act is an attempt to reduce trade barriers while also addressing fundamental rights concerns. According to Veale and Borgesius, by primarily relying on product safety regulations and bodies, the AI Act gets the balance wrong.

Not all is bad, however. Veale and Borgesius appreciate the AI Act’s division of AI practices into four risk levels: unacceptable (Title II), high (Title III), limited (Title IV), and minimal (Title IX). AI systems with unacceptable risks trigger full or partial prohibitions, while high risk systems are regulated based on the EU approach to product safety (the New Legislative Framework or NLF). But Veale and Borgesius note that on closer examination, neither the prohibitions nor the regulations are as robust as they might appear.

For example, take the ban on biometric systems, which at first appears to be precisely what some scholars have called for. The Act bans most “real-time” and “remote” law enforcement uses of biometric systems in publicly accessible spaces (Art. 5(1)(d)). Notably, systems that analyze footage after-the-fact are not included. Nor is live biometric identification online, nor is the use of remote biometric identification for non-law enforcement purposes, which falls under the GDPR. And Member States may create yet more exceptions, by authorizing certain law enforcement uses of real-time biometrics, so long as they include certain safeguards. Veale and Borgesius rightly point out that the ample exceptions to the Act’s limited biometrics ban mean that the infrastructure for biometrics systems will still be installed, leading some to claim that the Act “legitimises rather than prohibits population-scale surveillance.” Moreover, nothing in the Act prevents EU companies from marketing such biometrics systems to oppressive regimes abroad.

The most complex and unfamiliar aspect of the Act is its regulation of high-risk systems. There, according to Veale and Borgesius, the Act collapses the protection of fundamental rights into the EU’s approach to product safety, to its detriment. The NLF is used to regulate toys, elevators, and personal protective equipment, and is completely unfamiliar to most information law scholars (we will have to learn fast!). Under the NLF, manufacturers perform a “conformity assessment” and effectively self-certify that they are in compliance with “essential requirements” under the law. Here, those requirements are listed in Chapter 2 of the Act, and include a quality management system, a risk management system, and data quality criteria, among other things. Manufacturers can mark conforming products with “CE,” which guarantees freedom of movement within the EU.

By contrast, Veale and Borgesius point to the path not taken: EU pharmaceutical regulation requires pre-marketing assessment and licensing by a public authority. Here, the public sector has a much more limited role to play. There are “almost no situations” in which such industry AI self-assessments will require approval by an independent technical organization, and even then, such organizations are usually private sector certification firms accredited by Member States.

Post-marketing, the AI Act again reflects the NLF by giving “market surveillance authorities” (MSAs)—typically existing regulatory agencies—the power to obtain information, apply penalties, withdraw products, etc. While AI providers must inform MSAs if their own post-market monitoring reveals risks, Member States have discretion as to which authorities will be responsible for monitoring and enforcing against standalone high-risk AI systems. In practice, Veale and Borgesius observe that this will put technocratic government agencies ordinarily concerned with product regulation in charge of a range of tasks well outside their usual purview: “to look for synthetic content on social networks, assess manipulative digital practices of any professional user, and scrutinise the functioning of the digital welfare state…[t]his is far from product regulation.”

Moreover, Veale and Borgesius point out that private standards-setting organizations will determine much of the content of the law in practice. The European Commission will likely mandate that several European Standardisation Organizations develop harmonized standards relating to the Act that companies can follow to be in compliance with it. For internet governance buffs, the problems with deciding on fundamental values through privatized processes are familiar, even old hat. But as Veale and Borgesius observe, the Act’s “incorporation of broad fundamental rights topics into the NLF [regime]… spotlight[s] this tension of legitimacy” in the EU products safety context.

This Article contains many other helpful sections, including a summary of the Act’s transparency provisions, its approach to human oversight, and the potential confusion around and problems with the scope of the Act’s harmonization efforts. I do wish the authors had spent more time on the lack of rights, protections, and complaint mechanisms for what they call “AI-systems-subjects”—the individuals and communities impacted by the use of AI. As Veale and Borgesius observe, neither the standards-setting organizations nor the relevant government bodies are required to take input or complaints from impacted persons. They characterize this primarily as bad regulatory design, noting that “the Draft AI Act lacks a bottom-up force to hold regulators to account for weak enforcement.” To those of us steeped in the GDPR’s emphasis on individual rights, the absence of individual rights here is more shocking. I would be curious to learn whether this choice/oversight is a real problem, or whether other EU laws nonetheless enable affected individuals to participate in the EU governance of AI.

Overall, this article is a much-needed guide to an immensely significant regulatory effort. For scholars, it raises complex questions about not just when new technology leads to new law, but how the choice of legal regime (here, product safety) establishes path dependencies that construct a technology in particular ways. Veale and Borgesius are to be applauded for their noted expertise in this space, and for doing the work to make this regime more accessible to all.

Cite as: Margot Kaminski, The Law of AI, JOTWELL (October 25, 2021) (reviewing Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021)), https://cyber.jotwell.com/the-law-of-ai/.

Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions

Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming 2022), available at SSRN.

Over the last decade, the use of automated license plate reader (ALPR) technology has increased significantly. Several states have adopted legislation regulating the use of ALPRs and associated data.1 At the federal level, bills have been proposed to address law enforcement agencies’ use of ALPRs and companies’ use of automated algorithmic decision-making systems.2 There has been significant debate about the privacy and constitutional implications of government actors’ use of ALPR technology and ALPR data.

However, as Professor Nicole McConlogue observes in her excellent forthcoming article, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, less attention has been paid to corporate actors and the way their use of ALPRs connects with their use of automated algorithmic decision-making. Corporate entities are increasingly using data collected by ALPRs together with predictive analytics programs to determine the types of opportunities that consumers receive. Professor McConlogue makes an important contribution to scholarship in the consumer and technology law fields by exposing the relationship between ALPR technology and automated algorithmic decision-making in the automobile lending industry. Her work links what are often distinct discussions of surveillance technologies and automated decision-making, as used by the private sector in consumer transactions, thus bridging the fields of consumer law and technology law.

Professor McConlogue argues that in contrast to government actors’ use of ALPRs, less attention has been given to the privacy and commercial implications of private entities’ use of ALPR data in financial transactions involving consumers. The article begins by exploring the connections between ALPR technology and the “predictive risk analysis tools” used by lenders and other entities. Professor McConlogue notes that proponents of these technologies suggest that they can be used to “democratize” access to automobiles, thereby helping to address “the discriminatory history of auto access and consumer scoring.”

However, Professor McConlogue contends that the unchecked use of these technologies is more likely to further facilitate discrimination against vulnerable groups of consumers on the basis of race and class. She convincingly argues that automobile consumer scoring using predictive analytics does not “address the points at which bias enters the scoring process.” This defect is further complicated by lenders’ and insurers’ use of ALPR-based data. Once combined with other sources of data, ALPR data and predictive analytics programs can be used by automobile lenders and insurers to determine contract terms, rates, and price adjustments that further entrench income and wealth disparities. Professor McConlogue’s research indicates that at least one ALPR data vendor has encouraged insurers to evaluate consumers’ vehicle location history to better determine rates when issuing and renewing policies. Companies, too, can use data generated by ALPR technology to aid in the repossession of consumers’ encumbered collateral post-default, which mostly impacts underprivileged consumers.

Professor McConlogue’s article contains useful graphical depictions of the various points at which discrimination enters the lending cycle. She aptly uses these visual depictions, along with examples, to highlight the potential discriminatory nature of ALPR technology and predictive analytics. ALPR technology can reveal location data, and Professor McConlogue argues that the location of a consumer’s home can be shaped by the historic legacies of redlining and segregation. Predictive analytics programs that incorporate location data, such as data obtained from ALPR technology, to determine consumers’ scores, contract terms, and prices can therefore replicate these discriminatory practices.
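To illustrate how a facially neutral score can carry that history forward, consider the following toy sketch. It is purely hypothetical (the feature names, weights, and figures are invented, and are not drawn from McConlogue’s article or any actual lender’s model): a score that never references race, but weights a neighborhood factor derived from ALPR location history, will systematically disadvantage drivers whose location data places them in historically redlined areas.

```python
# Hypothetical illustration: a toy "risk score" that never uses race,
# but incorporates a neighborhood-level input derived from ALPR location
# history. Because historically redlined neighborhoods map onto race, the
# location feature acts as a proxy and reproduces the old pattern.

# Invented multipliers, standing in for weights a model might learn from
# past defaults that themselves reflect decades of unequal lending.
NEIGHBORHOOD_FACTOR = {
    "historically_redlined": 1.4,
    "other": 1.0,
}

def toy_risk_score(income: float, neighborhood: str) -> float:
    """Lower income and a 'riskier' neighborhood both raise the score."""
    base = 100_000 / max(income, 1.0)
    return base * NEIGHBORHOOD_FACTOR[neighborhood]

# Two applicants with identical incomes receive different scores (and thus
# different rates or terms) purely because of where the data places them.
print(toy_risk_score(50_000, "historically_redlined"))  # 2.8
print(toy_risk_score(50_000, "other"))                  # 2.0
```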

Linking privacy to broader consumer protection, Professor McConlogue offers convincing critiques of existing consumer protection laws. The article highlights inadequacies in several sources of law, including the Equal Credit Opportunity Act and the Fair Credit Reporting Act. Professor McConlogue offers a novel way forward that recognizes that multi-faceted comprehensive solutions are necessary to address the problems she highlights. She provides multiple recommendations to fill gaps in existing laws to combat discrimination, and offers other proposals that include prohibiting commercial entities’ use of ALPR technology and restricting companies’ ability to use trade secret protection to obscure their “consumer scoring models.” Professor McConlogue’s most valuable contribution is exposing the important connection between ALPR technology and algorithmic decision-making in consumer lending transactions.

  1. Privacy Law §1.08, Law Journal Press (ALM Media Properties, 2021); Nat’l Conf. State Legislatures, Automobile License Plate Readers: State Statutes (Apr. 9, 2021).
  2. Reasonable Policies on Automated License Plate Readers Act, H.R. 4303, 115th Cong. (2017); Consumer Online Privacy Rights Act, S. 2968, 116th Cong. (2019).
Cite as: Stacy-Ann Elvy, Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions, JOTWELL (September 24, 2021) (reviewing Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming 2022), available at SSRN), https://cyber.jotwell.com/automated-algorithmic-decision-making-systems-and-alprs-in-consumer-lending-transactions/.

The Ideology of Bridging the Digital Divide

Daniel Greene’s The Promise of Access: Technology, Inequality, and the Political Economy of Hope has both a sharp theoretical point of view and fascinating ethnographic accounts of a tech startup, a school, and a library in Washington, DC, all trying to navigate a neoliberal economy in which individuals are required to invest in their own skills, education, and ability to change in response to institutional imperatives. Although it doesn’t directly address law, this short book’s critique of technology-focused reimaginings of public institutions suggests ways in which cyberlaw scholars should think about what institutions can, and can’t, do with technology.

Greene argues that many people in libraries and schools have, for understandable reasons, accepted key premises that are appealing but self-defeating. One such premise is that a “digital divide” is a primary barrier preventing poor people from succeeding. It follows that schools and libraries must reconfigure themselves around making the populations they serve into better competitors in the new economy. This orientation entails the faith that the professional strategies that worked for the disproportionately white people in administrative/oversight positions would work for the poor, disproportionately Black and Latino populations they are trying to help. In this worldview, startup culture is touted as a good model for libraries and schools even though those institutions can’t pivot to serve different clients but can only “bootstrap,” which is to say continually (re)invent strategies and tactics in order to convince policymakers and grantmakers to give them ever-more-elusive resources. Because poverty persists for reasons outside the control of schools and libraries, however, these new strategies can never reduce poverty on a broad scale.

Fights over how to properly use the library’s computers—for job searches, not for watching porn or playing games, even though the former might well be futile and the latter two might produce more individual utility—play out in individual negotiations between patrons and librarians (and the library police who link the library to the carceral state). Likewise, in the school, teachers model appropriate/white professional online use: the laptop is better than the phone; any minute of free time should be used to answer emails or in other “productive” ways rather than texting with friends or posting on social media. The school’s racial justice commitments, which had led it to bar most coercive discipline, eventually give way when the pressure to get test scores up gets intense. The abandonment is physically represented by the school’s conversion of a space that students had used to hang out in and charge their phones into a high-stakes testing center with makeshift cardboard barriers separating individual students.

Legal scholars may find interest in Greene’s analysis of the ruinous attractions of the startup model. That model valorizes innovation in ways that leave no room for “losers” who are written out of the narrative but still need to stay alive somehow; it demands, sometimes explicitly, that workers give over their entire lives to work because work is supposed to be its own reward. The startup model is seductive to mayors and others trying to sustain struggling cities, schools, or libraries, but its promises are often mirages. Government institutions can’t—or at least shouldn’t—fire their citizens and get new ones for a new mission when the old model isn’t working. Scholars interested in innovation may learn from Greene’s account of how startup ideology has been so successful in encouraging longstanding institutions to reconfigure themselves, both because that’s a strategy to access resources in a climate of austerity and because the model promises genuinely rewarding work for the professionals in charge.

Another reason for cyberlaw scholars to read Greene’s book is to encounter his challenge to subject matter divides that insulate certain foundational ideas from inspection. To label a problem as one of access to online resources is to suggest that the solution lies in making internet access, and perhaps internet-based training, available. But most of the poor people Greene interviews have smartphones; what they lack are safe physical spaces. Greene recounts how some of the people he talks to successfully execute multiple searches to find open shelter beds, creating a list and dividing responsibilities for making calls to different locations. Many of them are computer-literate, and more job training wouldn’t let them fit into the startup culture that is literally separated from them in the library by a glass wall (entrepreneurs—mostly white—can reserve a separate workspace behind this wall, while ordinary patrons—mostly Black—have to sign up for short-term access to library computers). As with platform regulation debates, when we ask cyberlaw to solve non-cyberlaw problems, we are setting ourselves up for failure.

Moreover, as Greene points out, other governance models are possible. Other countries fund and regulate internet connectivity more aggressively than the US does, meaning that libraries and schools don’t have to be connectors of last resort. Models of libraries and schools as places that empower citizens, rather than places that prepare individuals to go out and compete economically in an otherwise atomized world, are also imaginable—and they have been imagined and attempted before. Much as Stephanie Plamondon Bair’s Impoverished IP widens the focus of IP’s incentives/access model to examine the harms of poverty and inequality on creativity and innovation, Greene’s book calls attention to the fact that “the digital divide” is not, at its heart, about internet access but about economic and social inequality.

Cite as: Rebecca Tushnet, The Ideology of Bridging the Digital Divide, JOTWELL (August 10, 2021) (reviewing Daniel Greene, The Promise of Access: Technology, Inequality, and the Political Economy of Hope (2021)), https://cyber.jotwell.com/the-ideology-of-bridging-the-digital-divide/.

What’s the Harm? The Answer is Many

Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN.

Privacy law scholars have long contended with the retort, “what’s the harm?” In their seminal 1890 article The Right to Privacy, Samuel Warren and Louis Brandeis wrote: “That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection.” Other legal scholars have noted that the digital age brings added challenges to the work of defining which privacy harms should be cognizable under the law and should entitle the complainant to legal redress. In Privacy Harms, an article that is sure to become part of the canon of privacy law scholarship, Danielle Citron and Daniel Solove provide a much-needed and definitive update to the privacy harms debate. It is especially notable that the authors engage the full gamut of the debate, parsing both who has standing to bring suit for a privacy violation and what damages should apply. This important update to the privacy law literature builds upon prior influential solo and joint work by the two authors, such as Solove’s Taxonomy of Privacy, Citron’s Sexual Privacy, and their joint article Risk and Anxiety.

The article furnishes three major contributions to law and tech scholarship. First, it highlights the challenges deriving from the incoherent and piecemeal patchwork of privacy laws in the U.S., exacerbated by what other scholars have noted are the far higher showings of harm demanded in privacy litigation than in other types of litigation. Second, the authors construct a road map for understanding the different genres of privacy harms with a detailed typology. Third, Citron and Solove helpfully provide an in-depth discussion of when and how privacy regulations should be enforced. That exercise is predicated on their view that there is currently a misalignment between the goals of privacy law and the available legal remedies.

As Citron and Solove note, the heightened requirement to show privacy harm serves as an unreasonable gatekeeper to legal remedies for privacy violations. Because such harm is difficult to define and, in some cases, elusive to prove, this gatekeeping sends a dangerous signal to organizations: they need not heed their legal privacy obligations so long as harm remains difficult to prove.

Citron and Solove then provide a comprehensive typology of privacy harms. This exhaustive typology, which the authors meticulously illustrate with factual vignettes drawn from caselaw, is an especially useful resource for legal scholars, practitioners, and judges attempting to make sense of the morass that is privacy law in the United States. Citron and Solove’s typology encompasses 14 types of privacy harms: 1) physical harms, 2) economic harms, 3) reputational harms, 4) emotional harms, 5) relationship harms, 6) chilling effect harms, 7) discrimination harms, 8) thwarted expectation harms, 9) control harms, 10) data quality harms, 11) informed choice harms, 12) vulnerability harms, 13) disturbance harms, 14) autonomy harms. While some might quibble about whether some of the harms delineated are truly distinct from each other, the typology is an accessible and deft heuristic for contextualizing privacy harms both in terms of their origin and their societal effects. Two striking features of this taxonomy: first, in a departure from the authors’ previous solo and collective work, this taxonomy does not focus on the type of information breached and does not attempt to establish distinct privacy rights (see, for example, Citron’s Sexual Privacy, arguing for a novel privacy right regarding certain sexually abusive behaviors). Rather, this new taxonomy is concerned with the harmful effects of the privacy violation. Second, the taxonomy goes beyond individual level harms to introduce privacy harms that could also be seen as collective, such as chilling effect harms and vulnerability harms.

The article’s final contribution is a discerning examination of when and how privacy harms should be recognized and regulated. This last discussion is important because, as the authors reveal, legally recognizing only those privacy harms that are easily provable, immediate, or handily quantifiable in monetary terms is detrimental to societal goals. The same is true when courts focus solely on the individual harm that has resulted from a privacy violation.

As Citron and Solove remind us, and as others have written, privacy harms are not merely individual harms; they are also societal wounds. Privacy as a human right allows for personhood, autonomy, and the free exercise of democracy. Thus, the authors underscore that an undue emphasis on compensation as the remedial goal for privacy violations neglects other important societal considerations.

They observe that privacy regulations do not just compensate for harm; they also serve the useful purpose of deterrence. A requirement of measurable economic or physical harm is truly necessary only when deciding on compensation. If the aim is to preserve privacy for its own sake, for the benefit of what privacy affords us, rather than merely to compensate for the injury of privacy violations, then the decisive question for cutting through the bog becomes: what amount of damages would be optimal for deterrence?

With this keen analysis, Citron and Solove provide a way forward for determining when and how to adjudicate privacy litigation. As they conclude, for tort cases brought to seek compensation, a showing of harm may be required, but for other types of cases, in which monetary damages are not sought, a showing of measurable economic or physical harm may be unnecessary.

In conclusion, Citron and Solove have written a truly useful article that provides a vital guardrail for navigating the quagmire of privacy litigation. Yet their article is much more than a practitioner’s guide or judicial touchstone. In plumbing the depths of privacy harms, Citron and Solove have also started a cardinal socio-legal discourse on the human need for privacy and the societal ends that privacy ensures. This is a conversation that has become even more urgent in the digital era.

Cite as: Ifeoma Ajunwa, What’s the Harm? The Answer is Many, JOTWELL (July 9, 2021) (reviewing Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN), https://cyber.jotwell.com/whats-the-harm-the-answer-is-many/.