The Journal of Things We Like (Lots)

Debunking the Myth that Police Body Cams are Civil Rights Tool

Body-worn cameras are proliferating with astounding speed in police departments throughout the country. Depending on the conditions under which cameras are used, the spread of this technology has been defended by certain civil liberties organizations as a means of holding police accountable for excessive force used disproportionately against Black, Brown, and queer people. In his new book, Police Visibility, Professor Bryce Clayton Newell musters empirical fieldwork on police deployment of body cameras to slow the rush to implement this potentially pernicious law enforcement surveillance tool.

This book is a careful and in-depth study by a leading scholar of police technology. Specifically, Newell questions whether the prescription (police cameras) will meaningfully treat the illness (structural racism and police violence). As he puts it, “[i]n the absence of broader police reforms, the cameras may offer a Band-Aid … but they do not promise a cure.” (P. 40.) As Newell notes, body-worn cameras “serve the coercive aims of the state” and increase police power because the cameras are evidentiary tools controlled by the police that can be used to surveil and incarcerate more people.

According to Newell, police body cameras may lend police false legitimacy, offering a modicum of visibility without real transparency, given that police officers and departments may in many instances limit access to and dissemination of the videos. More broadly, any single instance of police officer accountability may not lead to broader structural reforms. To that end, Newell notes the widespread (though not universal) approval of such cameras by the rank-and-file police officers he surveyed—one indicator that police cameras may not be the solution civil rights advocates hope for.

All told, body cameras may not be a reform at all, but instead could aggravate our broken and racist carceral system and the surveillance that enables it. (One quibble: borrowing the perspective of those advocating for police cameras, Newell refers to surveillance of civilians as “collateral,” suggesting that the police are primary targets of the cameras’ lens. Centering the surveillance of civilians as the primary target would have been more accurate and rhetorically powerful.)

In light of these shortcomings, Newell offers a few suggestions for reform. As a background policy norm militating against implementation of police cameras in the first instance, he emphasizes that bystander videos of police conduct are a preferable form of sousveillance against the police because police departments do not serve as gatekeepers of who can and cannot access the videos and under what conditions. This is critically important, though not without drawbacks of its own as a means of police regulation. I’ve argued that such citizen recordings carry meaningful privacy harms of their own. And Safiya Noble has powerfully explained that they may contribute to the commodification of Black death through profiteering by social media companies when images of police violence against people of color are viewed online.

If police body cameras are deployed, Newell believes that, to counteract police power over how the cameras are used, departments should not be able to institute body cameras through unregulated procurement policies prior to public deliberation and consent. And to guide that deliberation, Newell offers a few overarching principles to help better ensure that police body cameras are a tool of antipower preventing further state domination: (1) independent oversight (not just for camera policies, but for officer conduct more broadly), (2) a right to access for anyone captured on film, (3) redaction/blurring of all identifying information of both victims and bystanders, and (4) default restrictions on accessing video of people’s private spaces.

These are trenchant suggestions for regulating police body cameras in that they try to maximize the extent to which cameras hold police accountable while minimizing (albeit not eliminating) the extent to which they can be used to invade others’ privacy. However, Newell’s recommendations do less work in preventing the cameras from serving as an evidentiary surveillance tool.

Compelling arguments can be made that attempting to bureaucratize the regulation of surveillance technologies is more cumbersome and less effective than outright banning them (as others have rightly argued in similar contexts such as police use of facial recognition technology). However, Newell’s informed recommendations move the policy conversation in a productive direction. They serve as an important bulwark against the “surveil now, ask questions later” ethos undergirding many of the body camera policies currently in place.

Cite as: Scott Skinner-Thompson, Debunking the Myth that Police Body Cams are Civil Rights Tool, JOTWELL (January 28, 2022) (reviewing Bryce Clayton Newell, Police Visibility: Privacy, Surveillance, and the False Promise of Body-Worn Cameras (2021)), https://cyber.jotwell.com/debunking-the-myth-that-police-body-cams-are-civil-rights-tool/.

How to Regulate Harmful Inferences

Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN.

A decade ago, Charles Duhigg wrote a story for the New York Times that still resonates today, revealing that Target could predict its customers’ pregnancies and delivery dates from changes in their shopping habits. This and similar revelations pose a difficult question: how do we protect vulnerable people from the power of inferences? At the time, I wondered aloud whether we ought to regulate harmful data-driven inferences and how we would do it, which sparked characteristically overheated responses from the libertarian punditry.

A decade on, the ceaseless progress of machine learning (ML) has exacerbated these problems, as advances in the state-of-the-art of prediction make Target’s old algorithm seem like child’s play. ML techniques have become more accessible and more powerful, fueled by advances in algorithms, improvements in hardware, and the collection and distribution of massive datasets chronicling aspects of people’s lives we have never before been able to scrutinize or study. Today, obscure startups can build powerful ML models to predict the behavior and reveal the secrets of millions of people.

This important draft by Alicia Solow-Niederman argues that information privacy law is unequipped to deal with the increasing and sometimes-harmful power of ML-fueled inference. The laws and regulations on the books, with their focus on user control and notice-and-choice, say very little about the harmful inferences of companies like Clearview AI, which notoriously scraped millions of photos from Facebook, LinkedIn, and Venmo, using them as ML training data to build a powerful facial-recognition service it sells exclusively to law enforcement agencies. Unlike Target, which had a contractual relationship with its customers and gathered the data for its algorithm itself, Clearview AI had no connection to the individuals it identified, suggesting that protections cannot lie in laws focused primarily on user consent and control.

The article’s first contribution is a very useful summary of recent advances in ML, how they raise the possibility of harmful inferences, and how they challenge outdated privacy laws built upon notice-and-choice. This makes Part II of the article an accessible primer on a decade’s worth of ML advances for the non-technical privacy expert.

Solow-Niederman’s most important move, in Part IV of the article, is to ask us to focus on actors beyond the dyad of provider and user. Like Salome Viljoen’s magisterial work on Democratic Data (previously reviewed in these pages), Solow-Niederman deploys geometry. Where Viljoen added the horizontal dimension of people outside the vertical user/service relationship, Solow-Niederman asks us to move beyond the “linear” to the “triangular.” She urges us to look outside the GDPR-style relationship between data subject and data controller, to consider the actions of so-called “information processors.” These are companies like Clearview that amass massive data sets about millions of individuals to train machine learning models to infer the secrets and predict the habits not just of those people but also of others. We cannot protect privacy, Solow-Niederman argues, unless we develop new governance approaches for these actors.

This move — relational and geometric — leads her to focus on actors and relationships that get short shrift in other work. If we worry about the power of inference to harm groups and individuals, we need to scrutinize that which gives power to inference, she argues. Solow-Niederman focuses, for example, on how information processors amass “compute”: the computer-processing infrastructure needed to harness massive data sets. She provocatively suggests that regulators might cast extra scrutiny on mergers and acquisitions that lead companies to increase compute power, citing for inspiration the work of now-FTC-Chair Lina Khan, who has argued for similar shifts in antitrust law.

The triangular view also focuses attention on how companies like Clearview obtain data. Other commentators have been loath to focus on Clearview’s scraping as the source of the problem, because many tend to be wary of aggressive anti-scraping restrictions, such as expansive interpretations of the Computer Fraud and Abuse Act (CFAA). Solow-Niederman suggests, contrary to the conventional wisdom, that the CFAA could have been useful in thwarting Clearview AI, had Facebook detected the massive scraping operation, asserted its Terms of Service, and sued under the CFAA. She even suggests FTC action against companies that purport to prohibit scraping yet fail to detect or stop scrapers.

These are two genuinely novel, even counter-intuitive, prescriptions that flow directly from Solow-Niederman’s triangular intervention. They suggest the power of the approach, and we would be well-advised to see how it might lead us to other prescriptions we might be missing due to our linear mindsets.

To be clear, as I learned a decade ago, protecting people from the power of inference will raise difficult and important questions about the thin line between intellectual exploration and harm production. Inference can be harm, Solow-Niederman suggests, but she acknowledges that inference can also be science. Preventing the former while permitting the latter is a challenging undertaking, and this article defers to later work some of the difficult questions this differentiation will raise. But by focusing attention and energy on the ever-growing power of ML inference, by compellingly exploring how conventional information privacy law and scholarship cannot rise to the challenge of these questions, and by suggesting new means for considering and addressing inferential harm, Solow-Niederman makes an important and overdue contribution.

Cite as: Paul Ohm, How to Regulate Harmful Inferences, JOTWELL (December 22, 2021) (reviewing Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN), https://cyber.jotwell.com/how-to-regulate-harmful-inferences/.

The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K.

Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021).

The tension between the forces of nationalism and globalism has reached its peak with the United Kingdom’s decision to break with the European Union. This dramatic move continues to impact countless economic sectors and, more importantly, the lives of many citizens. Yet all is calm on the data protection front. The U.K. has decided to continue applying the E.U.’s strict GDPR. In this timely and intriguing article, Paul Schwartz strives to explain why this happened, as well as to predict what’s next for data protection and the British Isles.

GDPR is a four-letter word. Its strict rules and heavy fines have changed the world of data protection forever. Ninety-nine articles, one hundred and seventy-three recitals, thousands of pages of commentary, and the many millions of dollars spent preparing for it only tell us part of the story. Now that the U.K. can escape the grasp of this vast and overarching regulatory framework, why hasn’t it “checked out”? Rather, just a few days prior to Brexit, the U.K. adopted a local law which is almost identical to the GDPR. This outcome is especially surprising to me personally, as I have argued that the GDPR substantially encumbers innovation in the age of big data (although it is quite possible I was wrong).

The simple answer to the GDPR’s persistence in the U.K. relates to the business importance of international data transfers from the E.U. For such transfers to continue unfettered, the U.K. must maintain laws that are “adequate.” This is because, post-Brexit, the U.K. is rendered a “third country” in terms of data transfers for all E.U. nations. (P. 128.) “Adequacy,” according to current E.U. jurisprudence, requires a legal regime of “essential equivalence” to that of the E.U. Without such “equivalent” laws, data transfers to the U.K. would be forbidden (or at least rendered very complicated) and economic loss in multiple industries would follow.

But this reason is unsatisfactory. The decision to maintain the GDPR seems to run counter to the explicit political agenda of the U.K.’s ruling Conservative party, which constantly promised to “take back control.” Schwartz even quotes U.K. Prime Minister Boris Johnson as stating (and possibly making an intentional reference to this journal): “We have taken back control of laws and our destiny. We have taken back control of every jot and tittle of our regulation” (emphasis added – T.Z.). (P. 145.) Why spare the many jots making up the GDPR? After all, the U.K. might be able to achieve adequacy without carbon-copying the GDPR; several countries currently holding an adequacy status have laws that substantially vary from the E.U.’s harsh regime.

To answer this intriguing legal and political question, Paul Schwartz develops a sophisticated set of models. These models are set against the “Brussels Effect” paradigm – a model Anu Bradford maps out in her recent book. Bradford explains how nations worldwide are swayed, both de jure and de facto, to accept the E.U.’s influence, thus explaining why the U.K. would hold on to the GDPR. In addition to the Brussels Effect, Schwartz explains that the GDPR might have been applied in the U.K. because (1) the U.K.’s preferences changed so as to accept the E.U.’s data protection norms, as reflected in the GDPR—a shift that could be manifested either in U.K. public opinion or in the preferences of the legal system (which reflects the preferences of the elite); Schwartz develops this model on the basis of the work of his colleague Bob Cooter, which focuses on individual preferences; (2) the U.K.’s data protection preferences were always aligned with those of the E.U.; (3) the U.K. changed its values (rather than preferences) to align with those of the E.U. through a process of persuasion or acculturation (P. 117); or (4) the easy accessibility of a legal transplant (the E.U. data protection regime) led the U.K. to opt for this simple and cheap option. In the article’s final segment, Schwartz uses these five models to explore whether the U.K. will remain aligned with the E.U.’s data protection regime. The answer will depend on which of the five models proves most dominant in the years to come.

Beyond Schwartz’s models, the U.K.’s decision regarding the GDPR is unique in that it was somewhat passive; or, as Schwartz notes, a decision not to reject, or “un-transfer,” E.U. data protection law. It is a decision to maintain stability and sidestep the high costs associated with changing the law. (P. 137.) In other words, the U.K. adopted the GDPR when it was part of the E.U. and is now “stuck” with this “sticky” default. Switching a default is far more difficult than accepting an external legal regime. This, in fact, was a theme Schwartz explored almost 20 years ago when considering the privacy rules of the GLB Act. The situation is so unique that unless another member state breaks from the E.U., we will probably not witness a similar dynamic involving such migration of data protection norms. As opposed to the “Brussels Effect,” which was influenced by the earlier “California Effect,” the situation at hand might feature a “Hotel California” Effect – even though the U.K. wants to check out of this aggressive regulatory framework, it is finding that it “can never leave,” as its bureaucracy has grown accustomed to it.

Therefore, the GDPR-Brexit dynamic is a unique example of the “Brussels Effect.” Yet as Schwartz has shown in another important article discussing data protection and the “Brussels Effect,” there are many such unique examples. In that other work, Schwartz explained that the U.S.’s adoption of the (now defunct) “Privacy Shield” and the E.U.-Japan mutual adequacy agreement did not fit a “cookie cutter” paradigm of E.U. influence. All these examples demonstrate that while Bradford’s description of the “Brussels Effect” is appealing (might I say, brilliant) in its simplicity and elegance, reality is often more complex. Thus, the Brussels Effect is merely one of several explanations for the GDPR’s growing influence.

Schwartz’s taxonomy will prove helpful in understanding what happens next in the U.K. Just recently (on August 26, 2021), the U.K. announced its intent to promote data adequacy partnerships with several nations, including the United States. Specifically, regarding the U.S., the relevant press release noted the U.K.’s disappointment with the Schrems II ruling and the importance of facilitating seamless data transfers to the U.S. It further stated that the U.K. is free to enable such transfers “now it has left the E.U.”

Should these plans move forward (they are currently in their early stages), they would create substantial (though possibly workable) challenges for the U.K.’s “adequacy” status. Such developments possibly indicate that the U.K. did not move to adopt E.U. privacy norms, or even cave to the economic pressures of commercial entities. Rather, it was the ease of remaining within a familiar scheme that led the U.K. to stick with the GDPR, and not check out of this notorious hotel. Yet perhaps this final assertion is too superficial. Time will tell whether Schwartz’s nuanced analysis of changing preferences, Bradford’s hypothesis regarding global influence, or other models best predict and explain what comes next for the U.K. and the GDPR.

Cite as: Tal Zarsky, The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K., JOTWELL (November 23, 2021) (reviewing Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021)), https://cyber.jotwell.com/the-hotel-california-effect-the-future-of-e-u-data-protection-influence-in-the-u-k/.

The Law of AI

Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021).

The question of whether new technology requires new law is central to the field of law and technology. From Frank Easterbrook’s “law of the horse” to Ryan Calo’s law of robotics, scholars have debated the what, why, and how of technological, social, and legal co-development and construction. Given how rarely lawmakers create new legal regimes around a particular technology, the EU’s proposed “AI Act” (Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts) should put tech-law scholars on high alert. Leaked early this spring and officially released in April 2021, the AI Act aims to establish a comprehensive European approach to AI risk-management and compliance, including bans on some AI systems.

In Demystifying the Draft EU Artificial Intelligence Act, Michael Veale and Frederik Zuiderveen Borgesius provide a helpful and evenhanded entrée into this “world-first attempt at horizontal regulation of AI systems.” On the one hand, they admire the Act’s “sensible” aspects, including its risk-based approach, prohibitions of certain systems, and attempts at establishing public transparency. On the other, they note its “severe weaknesses,” including its reliance on “1980s product safety regulation” and “standardisation bodies with no fundamental rights experience.” For U.S. (and EU!) readers looking for a thoughtful overview and contextualization of a complex and somewhat inscrutable new legal system, this Article brings much to the table at a relatively concise length.

As an initial matter, it’s important to understand that the Draft AI Act is just the beginning of the European legislative process. Much can still change. And the Act must be understood in legal context: it is entwined with other EU Regulations (such as the GDPR), Directives (such as the Law Enforcement Directive and Unfair Commercial Practices Directive), and AI-specific initiatives in progress (such as the draft Data Governance Act and forthcoming product liability revisions).

The AI Act itself focuses on risk management and compliance, looking at threats to physical safety and fundamental rights. At its core, the Act is an attempt to reduce trade barriers while also addressing fundamental rights concerns. According to Veale and Borgesius, by primarily relying on product safety regulations and bodies, the AI Act gets the balance wrong.

Not all is bad, however. Veale and Borgesius appreciate the AI Act’s division of AI practices into four risk levels: unacceptable (Title II), high (Title III), limited (Title IV), and minimal (Title IX). AI systems with unacceptable risks trigger full or partial prohibitions, while high-risk systems are regulated based on the EU approach to product safety (the New Legislative Framework, or NLF). But Veale and Borgesius note that, on closer examination, neither the prohibitions nor the regulations are as robust as they might appear.

For example, take the ban on biometric systems, which at first appears to be precisely what some scholars have called for. The Act bans most “real-time” and “remote” law enforcement uses of biometric systems in publicly accessible spaces (Art. 5(1)(d)). Notably, systems that analyze footage after-the-fact are not included. Nor is live biometric identification online, nor is the use of remote biometric identification for non-law enforcement purposes, which falls under the GDPR. And Member States may create yet more exceptions, by authorizing certain law enforcement uses of real-time biometrics, so long as they include certain safeguards. Veale and Borgesius rightly point out that the ample exceptions to the Act’s limited biometrics ban mean that the infrastructure for biometrics systems will still be installed, leading some to claim that the Act “legitimises rather than prohibits population-scale surveillance.” Moreover, nothing in the Act prevents EU companies from marketing such biometrics systems to oppressive regimes abroad.

The most complex and unfamiliar aspect of the Act is its regulation of high-risk systems. There, according to Veale and Borgesius, the Act collapses the protection of fundamental rights into the EU’s approach to product safety, to its detriment. The NLF is used to regulate toys, elevators, and personal protective equipment, and is completely unfamiliar to most information law scholars (we will have to learn fast!). Under the NLF, manufacturers perform a “conformity assessment” and effectively self-certify that they are in compliance with “essential requirements” under the law. Here, those requirements are listed in Chapter 2 of the Act, and include a quality management system, a risk management system, and data quality criteria, among other things. Manufacturers can mark conforming products with “CE,” which guarantees freedom of movement within the EU.

By contrast, Veale and Borgesius point to the path not taken: EU pharmaceutical regulation requires pre-marketing assessment and licensing by a public authority. Here, the public sector has a much more limited role to play. There are “almost no situations” in which such industry AI self-assessments will require approval by an independent technical organization, and even then, such organizations are usually private sector certification firms accredited by Member States.

Post-marketing, the AI Act again reflects the NLF by giving “market surveillance authorities” (MSAs)—typically existing regulatory agencies—the power to obtain information, apply penalties, withdraw products, etc. While AI providers must inform MSAs if their own post-market monitoring reveals risks, Member States have discretion as to which authorities will be responsible for monitoring and enforcing against standalone high-risk AI systems. In practice, Veale and Borgesius observe that this will put technocratic government agencies ordinarily concerned with product regulation in charge of a range of tasks well outside their usual purview: “to look for synthetic content on social networks, assess manipulative digital practices of any professional user, and scrutinise the functioning of the digital welfare state…[t]his is far from product regulation.”

Moreover, Veale and Borgesius point out that private standards-setting organizations will determine much of the content of the law in practice. The European Commission will likely mandate that several European Standardisation Organizations develop harmonized standards relating to the Act that companies can follow to be in compliance with it. For internet governance buffs, the problems with deciding on fundamental values through privatized processes are familiar, even old hat. But as Veale and Borgesius observe, the Act’s “incorporation of broad fundamental rights topics into the NLF [regime]… spotlight[s] this tension of legitimacy” in the EU products safety context.

This Article contains many other helpful sections, including a summary of the Act’s transparency provisions, its approach to human oversight, and the potential confusion around and problems with the scope of the Act’s harmonization efforts. I do wish the authors had spent more time on the lack of rights, protections, and complaint mechanisms for what they call “AI-systems-subjects”—the individuals and communities impacted by the use of AI. As Veale and Borgesius observe, neither the standards-setting organizations nor the relevant government bodies are required to take input or complaints from impacted persons. They characterize this primarily as bad regulatory design, noting that “the Draft AI Act lacks a bottom-up force to hold regulators to account for weak enforcement.” To those of us steeped in the GDPR’s emphasis on individual rights, the absence of individual rights here is more shocking. I would be curious to learn whether this choice/oversight is a real problem, or whether other EU laws nonetheless enable affected individuals to participate in the EU governance of AI.

Overall, this article is a much-needed guide to an immensely significant regulatory effort. For scholars, it raises complex questions about not just when new technology leads to new law, but how the choice of legal regime (here, product safety) establishes path dependencies that construct a technology in particular ways. Veale and Borgesius are to be applauded for bringing their noted expertise to bear in this space, and for doing the work to make this regime more accessible to all.

Cite as: Margot Kaminski, The Law of AI, JOTWELL (October 25, 2021) (reviewing Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021)), https://cyber.jotwell.com/the-law-of-ai/.

Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions

Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming, 2022), available at SSRN.

Over the last decade the use of automated license plate reader (ALPR) technology has increased significantly. Several states have adopted legislation regulating the use of ALPRs and associated data.1 At the federal level, bills have been proposed to address law enforcement agencies’ use of ALPRs and companies’ use of automated algorithmic decision-making systems.2 There has been significant debate about the privacy and constitutional implications of government actors’ use of ALPR technology and ALPR data.

However, as Professor Nicole McConlogue observes in her excellent forthcoming article, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, less attention has been paid to corporate actors and the way their use of ALPRs connects with their use of automated algorithmic decision-making. Corporate entities are increasingly using data collected by ALPRs together with predictive analytics programs to determine the types of opportunities that consumers receive. Professor McConlogue makes an important contribution to scholarship in the consumer and technology law fields by exposing the relationship between ALPR technology and automated algorithmic decision-making in the automobile lending industry. Her work links what are often distinct discussions of surveillance technologies and automated decision-making, as used by the private sector in consumer transactions, thus bridging the fields of consumer law and technology law.

Professor McConlogue argues that in contrast to government actors’ use of ALPRs, less attention has been given to the privacy and commercial implications of private entities’ use of ALPR data in financial transactions involving consumers. The article begins by exploring the connections between ALPR technology and the “predictive risk analysis tools” used by lenders and other entities. Professor McConlogue notes that proponents of these technologies suggest that they can be used to “democratize” access to automobiles, thereby helping to address “the discriminatory history of auto access and consumer scoring.”

However, Professor McConlogue contends that the unchecked use of these technologies is more likely to further facilitate discrimination against vulnerable groups of consumers on the basis of race and class. She convincingly argues that automobile consumer scoring using predictive analytics does not “address the points at which bias enters the scoring process.” This defect is further complicated by lenders’ and insurers’ use of ALPR-based data. Once combined with other sources of data, ALPR data and predictive analytics programs can be used by automobile lenders and insurers to determine contract terms, rates, and price adjustments that further enable income and wealth disparities. Professor McConlogue’s research indicates that at least one ALPR data vendor has encouraged insurers to evaluate consumers’ vehicle location history to better determine rates when issuing and renewing policies. Companies, too, can use data generated by ALPR technology to aid in the repossession of consumers’ encumbered collateral post-default, which mostly impacts underprivileged consumers.

Professor McConlogue’s article contains useful graphical depictions of the various points at which discrimination enters the lending cycle. She aptly uses these visual depictions, along with examples, to highlight the potential discriminatory nature of ALPR technology and predictive analytics. ALPR technology can reveal location data, and Professor McConlogue argues that the location of a consumer’s home can be shaped by the historic legacies of redlining and segregation. Predictive analytics programs that incorporate location data, such as that obtained from ALPR technology, to determine consumers’ scores, contract terms, and prices can replicate these discriminatory practices.

Linking privacy to broader consumer protection, Professor McConlogue offers convincing critiques of existing consumer protection laws. The article highlights inadequacies in several sources of law, including the Equal Credit Opportunity Act and the Fair Credit Reporting Act. Professor McConlogue offers a novel way forward that recognizes that multi-faceted comprehensive solutions are necessary to address the problems she highlights. She provides multiple recommendations to fill gaps in existing laws to combat discrimination, and offers other proposals that include prohibiting commercial entities’ use of ALPR technology and restricting companies’ ability to use trade secret protection to obscure their “consumer scoring models.” Professor McConlogue’s most valuable contribution is exposing the important connection between ALPR technology and algorithmic decision-making in consumer lending transactions.

  1. Privacy Law §1.08, Law Journal Press (ALM Media Properties, 2021); Nat’l Conf. State Legislatures, Automobile License Plate Readers: State Statutes (Apr. 9, 2021).
  2. Reasonable Policies on Automated License Plate Readers Act, H.R. 4303, 115th Cong. (2017); Consumer Online Privacy Rights Act, S. 2968, 116th Cong. (2019).
Cite as: Stacy-Ann Elvy, Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions, JOTWELL (September 24, 2021) (reviewing Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming, 2022), available at SSRN), https://cyber.jotwell.com/automated-algorithmic-decision-making-systems-and-alprs-in-consumer-lending-transactions/.

The Ideology of Bridging the Digital Divide

Daniel Greene’s The Promise of Access: Technology, Inequality, and the Political Economy of Hope has both a sharp theoretical point of view and fascinating ethnographic accounts of a tech startup, a school, and a library in Washington, DC, all trying to navigate a neoliberal economy in which individuals are required to invest in their own skills, education, and ability to change in response to institutional imperatives. Although it doesn’t directly address law, this short book’s critique of technology-focused reimaginings of public institutions suggests ways in which cyberlaw scholars should think about what institutions can, and can’t, do with technology.

Greene argues that many people in libraries and schools have, for understandable reasons, accepted key premises that are appealing but self-defeating. One such premise is that there is a “digital divide” that is a primary barrier that prevents poor people from succeeding. It follows that schools and libraries must reconfigure themselves around making the populations they serve into better competitors in the new economy. This orientation entails the faith that the professional strategies that worked for the disproportionately white people in administrative/oversight positions would work for the poor, disproportionately Black and Latino populations they are trying to help. In this worldview, startup culture is touted as a good model for libraries and schools even though those institutions can’t pivot to serve different clients but can only “bootstrap,” which is to say continually (re)invent strategies and tactics in order to convince policymakers and grantmakers to give them ever-more-elusive resources. Because poverty persists for reasons outside the control of schools and libraries, however, these new strategies can never reduce poverty on a broad scale.

Fights over how to properly use the library’s computers—for job searches, not for watching porn or playing games, even though the former might well be futile and the latter two might produce more individual utility—play out in individual negotiations between patrons and librarians (and the library police who link the library to the carceral state). Likewise, in the school, teachers model appropriate/white professional online use: the laptop is better than the phone; any minute of free time should be used to answer emails or in other “productive” ways rather than texting with friends or posting on social media. The school’s racial justice commitments, which had led it to bar most coercive discipline, eventually give way when the pressure to get test scores up gets intense. The abandonment is physically represented by the school’s conversion of a space that students had used to hang out in and charge their phones into a high-stakes testing center with makeshift cardboard barriers separating individual students.

Legal scholars may find interest in Greene’s analysis of the ruinous attractions of the startup model. That model valorizes innovation in ways that leave no room for “losers” who are written out of the narrative but still need to stay alive somehow; it demands, sometimes explicitly, that workers give over their entire lives to work because work is supposed to be its own reward. The startup model is seductive to mayors and others trying to sustain struggling cities, schools, or libraries, but its promises are often mirages. Government institutions can’t—or at least shouldn’t—fire their citizens and get new ones for a new mission when the old model isn’t working. Scholars interested in innovation may learn from Greene’s account of how startup ideology has been so successful in encouraging longstanding institutions to reconfigure themselves, both because that’s a strategy to access resources in a climate of austerity and because the model promises genuinely rewarding work for the professionals in charge.

Another reason for cyberlaw scholars to read Greene’s book is to encounter his challenge to subject matter divides that insulate certain foundational ideas from inspection. To label a problem as one of access to online resources is to suggest that the solution lies in making internet access, and perhaps internet-based training, available. But most of the poor people Greene interviews have smartphones; what they lack are safe physical spaces. Greene recounts how some of the people he talks to successfully execute multiple searches to find open shelter beds, creating a list and dividing responsibilities for making calls to different locations. Many of them are computer-literate, and more job training wouldn’t let them fit into the startup culture that is literally separated from them in the library by a glass wall (entrepreneurs—mostly white—can reserve a separate workspace behind this wall, while ordinary patrons—mostly Black—have to sign up for short-term access to library computers). As with platform regulation debates, when we ask cyberlaw to solve non-cyberlaw problems, we are setting ourselves up for failure.

Moreover, as Greene points out, other governance models are possible. Other countries fund and regulate internet connectivity more aggressively than the US does, meaning that libraries and schools don’t have to be connectors of last resort. Models of libraries and schools as places that empower citizens, rather than places that prepare individuals to go out and compete economically in an otherwise atomized world, are also imaginable—and they have been imagined and attempted before. Much as Stephanie Plamondon Bair’s Impoverished IP widens the focus of IP’s incentives/access model to examine the harms of poverty and inequality on creativity and innovation, Greene’s book calls attention to the fact that “the digital divide” is not, at its heart, about internet access but about economic and social inequality.

Cite as: Rebecca Tushnet, The Ideology of Bridging the Digital Divide, JOTWELL (August 10, 2021) (reviewing Daniel Greene, The Promise of Access: Technology, Inequality, and the Political Economy of Hope (2021)), https://cyber.jotwell.com/the-ideology-of-bridging-the-digital-divide/.

What’s the Harm? The Answer is Many

Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN.

Privacy law scholars have long contended with the retort, “what’s the harm?” In their seminal 1890 article The Right to Privacy, Samuel Warren and Louis Brandeis wrote: “That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection.” Other legal scholars have noted that the digital age brings added challenges to the work of defining which privacy harms should be cognizable under the law and should entitle the complainant to legal redress. In Privacy Harms, an article that is sure to become part of the canon of privacy law scholarship, Danielle Citron and Daniel Solove provide a much-needed and definitive update to the privacy harms debate. It is especially notable that the authors engage the full gamut of the debate, parsing both who has standing to bring a privacy suit and what damages should apply. This important update to the privacy law literature builds upon prior influential solo and joint work by the two authors, such as Solove’s Taxonomy of Privacy, Citron’s Sexual Privacy, and their joint article Risk and Anxiety.

The article furnishes three major contributions to law and tech scholarship. First, it highlights the challenges deriving from the incoherent and piecemeal patchwork of privacy laws in the U.S., exacerbated by what other scholars have noted are the far higher showings of harm demanded in privacy litigation than in other types of litigation. Second, the authors construct a road map for understanding the different genres of privacy harms with a detailed typology. Third, Citron and Solove helpfully provide an in-depth discussion of when and how privacy regulations should be enforced. That exercise is predicated on their viewpoint that there is currently a misalignment between the goals of privacy law and the available legal remedies.

As Citron and Solove note, the heightened showing of harm required in privacy cases serves as an unreasonable gatekeeper to legal remedies for privacy violations. Because such harm is difficult to define and proof of harm is elusive in some cases, such gatekeeping sends a dangerous signal to organizations, telling them that they do not need to heed legal obligations for privacy, so long as it remains difficult to prove harm.

Citron and Solove then provide a comprehensive typology of privacy harms. This exhaustive typology, which the authors meticulously illustrate with factual vignettes drawn from caselaw, is an especially useful resource for legal scholars, practitioners, and judges attempting to make sense of the morass that is privacy law in the United States. Citron and Solove’s typology encompasses 14 types of privacy harms: 1) physical harms, 2) economic harms, 3) reputational harms, 4) emotional harms, 5) relationship harms, 6) chilling effect harms, 7) discrimination harms, 8) thwarted expectation harms, 9) control harms, 10) data quality harms, 11) informed choice harms, 12) vulnerability harms, 13) disturbance harms, 14) autonomy harms. While some might quibble about whether some of the harms delineated are truly distinct from each other, the typology is an accessible and deft heuristic for contextualizing privacy harms both in terms of their origin and their societal effects. Two striking features of this taxonomy: first, in a departure from the authors’ previous solo and collective work, this taxonomy does not focus on the type of information breached and does not attempt to establish distinct privacy rights (see, for example, Citron’s Sexual Privacy, arguing for a novel privacy right regarding certain sexually abusive behaviors). Rather, this new taxonomy is concerned with the harmful effects of the privacy violation. Second, the taxonomy goes beyond individual level harms to introduce privacy harms that could also be seen as collective, such as chilling effect harms and vulnerability harms.

The Article’s final contribution is a discerning examination of when and how privacy harms should be recognized and regulated. This last discussion is important because, as the authors reveal, a focus on legally recognizing only those privacy harms that are easily provable, immediate, or handily quantifiable in monetary terms is detrimental to societal goals. The same can be said when the court’s focus is on a showing of what individual harm has resulted from a privacy violation.

As Citron and Solove remind us, and others have written, privacy harms are not merely individual harms, they are also societal wounds. Privacy as a human right allows for personhood, autonomy, and also the free exercise of democracy. Thus, the authors underscore that an undue emphasis on compensation, as a remedial goal for privacy violation, neglects other important societal considerations.

They observe that privacy regulations do not just compensate for harm, but serve the useful purpose of deterrence. A requirement of measurable economic or physical harm is only truly necessary to decide on compensation. If we have the clear aim of preserving privacy, merely for the benefit of what privacy affords us, rather than the objective of compensating for the injury of privacy violations, a decisive query for cutting through the bog is: what amount of damages would be optimal for deterrence?

With this keen analysis, Citron and Solove provide a way forward for determining when and how to adjudicate privacy litigation. As they conclude, for tort cases launched to demand compensation, a showing of harm may be requisite, but for other types of cases, when monetary damages are not sought, a showing of measurable economic or physical harm may be unnecessary.

In conclusion, Citron and Solove have written a truly useful article that provides a vital guardrail for navigating the quagmire of privacy litigation. Yet their article is much more than a practitioner’s guide or judicial touchstone. In plumbing the profundity of privacy harms, Citron and Solove have also started a cardinal socio-legal discourse on the human need for privacy and the societal ends that privacy ensures. This is a conversation that has become even more urgent in the digital era.

Cite as: Ifeoma Ajunwa, What’s the Harm? The Answer is Many, JOTWELL (July 9, 2021) (reviewing Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN), https://cyber.jotwell.com/whats-the-harm-the-answer-is-many/.

Update of Jotwell Mailing Lists

Many Jotwell readers choose to subscribe to Jotwell either by RSS or by email.

For a long time Jotwell has run two parallel sets of email mailing lists, one of which serves only long-time subscribers. The provider of that legacy service is closing its email portal next week, so we are going to merge the lists. We hope and intend that this will be a seamless process, but if you find you are not receiving the Jotwell email updates you expect from the Techlaw section, then you may need to resubscribe via the subscribe to Jotwell portal. This change to email delivery should not affect subscribers to the RSS feed.

The links at the subscription portal already point to the new email delivery system. It is open to all readers whether or not they previously subscribed for email delivery. From there you can choose to subscribe to all Jotwell content, or only the sections that most interest you.

Gauging Genetic Privacy

James W. Hazel & Christopher Slobogin, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705 (2021).

Human beings leave trails of genetic data wherever we go. We unavoidably leave genetic traces on the doorknobs we touch, the items we handle, the bottles and cups we drink from, and the detritus we throw away. We also leave a trail of genetic data with the physicians we visit, who may order genetic analysis to help treat a cancer or to assist a couple in assessing their pre-conception genetic risks. Our genetic data, often but not always shorn of obvious identifiers, may be repurposed for research use. If we seek to learn about our ancestry, we may send a DNA sample to a consumer genetics service, like 23andMe, or share the resulting data on a cross-service platform like GEDmatch. If we are arrested or convicted of a crime, we may be compelled to give a DNA sample for perpetual inclusion in an official law-enforcement database. Law enforcement might use each of these trails of genetic data to learn about or identify us—or our genetic relatives.

Should law enforcement be permitted to make use of each and every one of these forms of genetic data, consistent with the Fourth Amendment of the U.S. Constitution? That is the question that motivates James W. Hazel and Chris Slobogin’s recent article, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment. Hazel and Slobogin take an empirical approach to the Fourth Amendment inquiry, reporting results of a survey of more than 1500 respondents and probing which types of data access respondents deemed “intrusive” or treading upon an “expectation of privacy.” Their findings indicate that the public often perceives police access to genetic data sources as highly intrusive, even where traditional Fourth Amendment doctrine might not. As Hazel and Slobogin put it, “our subjects appeared to focus on the location of the information, not its provenance or content.” That is, intrusiveness turns more on who holds the data, rather than on how it was first collected or analyzed. Hazel and Slobogin conclude that their findings “support an argument in favor of judicial authorization both when police access nongovernmental genetic databases and when police collect DNA from individuals who have not yet been arrested.”

Hazel and Slobogin’s analysis is firmly rooted in existing doctrine. As they observe, much genetic data collection, analysis, and use has traditionally been beyond the scope of the Fourth Amendment. The Fourth Amendment extends its protections only to “searches” and “seizures,” and existing doctrine defines government intrusion as a search, in large measure, based on whether government action intrudes upon an “expectation of privacy” that society is prepared to recognize as “reasonable.” Under the so-called “third-party doctrine,” “if you share information, you do not have an expectation of privacy in it.” But in its recent Fourth Amendment decision in United States v. Carpenter, the Supreme Court suggested that the third-party doctrine is not categorical. As Hazel and Slobogin aptly summarize, “In the wake of Carpenter, considerable uncertainty exists about the applicability of the third-party doctrine to genetic information.” Indeed, Justice Gorsuch, dissenting in Carpenter, “used DNA access as an example” of information in which individuals typically expect privacy, despite having entrusted that information to third parties.

Hazel and Slobogin provide an empirical response to this uncertainty. They survey public attitudes regarding the privacy of certain sources of genetic data, and the intrusiveness of investigative access to that data. In assessing these attitudes, the authors also queried respondents about a range of non-genetic scenarios, including some both clearly within and beyond existing Fourth Amendment regulation, in order to better gauge relative findings of intrusiveness and privacy. The authors appropriately acknowledge that the platform they utilized to complete the survey—Amazon Mechanical Turk—and the population they recruited to participate may be imperfectly representative of the general public. They discuss countermeasures they took to minimize biases in their results, including excluding responses received in under five minutes (which “are indicative that the individual did not answer thoughtfully”).

The results indicate that law-enforcement access to many sources of genetic data ranked as highly intrusive and infringing upon an expectation of privacy. Among other findings, “police access to public genealogy, direct-to-consumer and research databases, as well as the creation of a universal DNA database, were … ranked among the most intrusive activities.” These government activities ranked similarly to searches of bedrooms and emails, and as both more intrusive and more infringing on a reasonable expectation of privacy than “cell location”—the data at issue in the Carpenter case itself. Yet many already-common police collections of genetic data, including surreptitious collection of “discarded” DNA, compelled DNA collection from arrested or convicted persons, and even familial searches in official law enforcement DNA databases ranked as among the least intrusive or privacy-offending activities.

Hazel and Slobogin suggest that Fourth Amendment doctrine should be attentive to societal views about privacy, such as the data uncovered in their survey, and that this should prompt closer scrutiny of the “situs of genetic information” in assessing expectations of privacy. The role of survey data in Fourth Amendment analysis is contested, but one need not subscribe to Hazel and Slobogin’s view of the importance of this data to Fourth Amendment analysis to appreciate their insights.

For one thing, Hazel and Slobogin’s data provide an antidote to claims of broad public support for law enforcement use of consumer genetics platforms to investigate crimes. According to Hazel and Slobogin, government access to consumer genetics data consistently ranked as highly intrusive and privacy-invasive. These findings also lend weight to Justice Gorsuch’s intuition in Carpenter that government access to genetic data from these sources ought to require a warrant or probable cause.

In addition to the Fourth Amendment, Hazel and Slobogin’s findings suggest that Congress or the Department of Health and Human Services ought to act to better protect medical data, especially genetic data in medical records. Survey respondents “ranked law enforcement access to genetic data from an individual’s doctor as the most intrusive of all scenarios, just above police access to other information in medical records.” Under existing law, these records are typically protected from nonconsensual disclosure under the HIPAA Privacy Rule, and physicians and their patients share a fiduciary relationship that is often privacy protective. But the HIPAA Privacy Rule codifies a gaping exception to nonconsensual disclosure for law enforcement purposes. As Hazel and Slobogin recognize, the Privacy Rule permits genetic information to be disclosed to law enforcement upon as little as an “administrative request.” That minimal standard runs contrary to the strongly held attitudes of privacy and intrusiveness that Hazel and Slobogin’s study reveals. These findings should provide impetus to act to better protect medical records from government access.

We ought not, however, overinterpret the authors’ results. Their findings indicate limited concern about the most well-known forms of genetic surveillance, through compelled DNA collection from individuals arrested or convicted of crimes or from surreptitiously collected items containing trace DNA that individuals cannot help but leave behind. Perhaps these results reflect a genuine lack of concern with these practices—or perhaps they merely reflect that individuals expect what they know the government is already doing. A one-way ratchet of public acceptance ought to give us pause about findings of non-intrusiveness for well-known police practices.

In sum, Hazel and Slobogin’s article yields important new data suggesting that government access to many sources of genetic data is indeed highly intrusive. That data may inform Fourth Amendment analysis. It also may inform discussions about the fitness of existing statutory and regulatory protections for genetic data, the need for new protections, and the credibility of existing claims of public support for certain uses of such data.

Cite as: Natalie Ram, Gauging Genetic Privacy, JOTWELL (June 10, 2021) (reviewing James W. Hazel & Christopher Slobogin, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705 (2021)), https://cyber.jotwell.com/gauging-genetic-privacy/.

Illegal Sex Toy Patents

Sarah R. Wasserman Rajec and Andrew Gilden, Patenting Pleasure (Feb. 25, 2021), available at SSRN.

In Patenting Pleasure, Professors Sarah Rajec and Andrew Gilden highlight a surprising incongruity: while many areas of U.S. law are profoundly hostile to sexuality in general and the technology of sex in particular, the patent system is not. Instead, the U.S. Patent and Trademark Office (USPTO) has over the decades issued thousands of patents on sex toys—from vibrators to AI, and everything in between.

The incongruity is especially odd because patent law has long tied patentability to an invention's usefulness, and until the end of the 20th century one strand of that utility doctrine held that inventions "injurious to society" failed the test. And until about that time—and in some states and localities, even today—the law was exceptionally clear that sex toys were immoral and illegal. Patents issued nonetheless. How did inventors show that their sex toys were useful, despite being barred from relying on their most obvious use? Gilden and Rajec examine hundreds of issued patents to weave an engrossing narrative about sex, patents, and the law.

Two very nice background sections are each worth the price of admission. "The Law of the Sex Toy" canvasses the many ways U.S. law has been historically hostile to sex toys, including U.S. Postal Inspector Anthony Comstock's 19th-century crusade against "articles for self-pollution." (Comstock, of "Comstock laws" fame, seized over 60,000 "immoral" rubber articles.) Efforts to criminalize sex toys continued in the late 20th century as well; many of these laws are still on the books, including in Texas, Alabama, and Mississippi, and some, including Alabama's, are still enforced. At the federal level, the 2020 CARES Act included over half a trillion dollars in small-business loans as pandemic relief; those making or selling sex toys (as well as other sex-based businesses) were excluded.

What’s all this got to do with patent law? For one thing, patenting illegal sex toys seems a fruitless errand: it makes little sense (most of the time) to patent things you cannot make or sell, a puzzle the authors leave unaddressed. At a doctrinal level, patent law’s utility requirement long barred patents on inventions injurious to society, such as gambling machines, radar detectors, and, it would seem under the laws just mentioned, sex toys. So applicants for pleasure patents would need to assert some utility—while steering clear of beneficial utility’s immorality bar. Gilden and Rajec provide as background a clear and useful overview of the history of so-called “beneficial utility,” including its applicability to sex tech.

One way to thread the needle is to obfuscate. In the early 20th century, many vibrators were advertised for nonsexual purposes with an overt or implicit wink; personal massagers were for nothing but sore muscles. Such stratagems could, did, and do help innovators evade the laws that otherwise vex sex tech. But Rajec and Gilden intentionally step beyond the disguise gambit (though its success raises interesting questions about the utility doctrine in general). Instead, they focus on patented inventions that are obviously, explicitly, and clearly about sex. The USPTO classifies inventions by type of technology, and one classification, A61H19/00, is reserved for “Massage for the genitals; Devices for improving sexual intercourse.” Tough to obfuscate there. So how are inventors getting the hundreds of patents Gilden and Rajec find in this class?

This is the central tension that Patenting Pleasure addresses: because of the utility doctrine, patentees must say what their inventions are for—but because U.S. law has been generally quite hostile to sex and sex tech, pleasure patents have to say they are for something other than, well, pleasure. In the heart of the piece, Rajec and Gilden carefully catalog these descriptions over time, revealing a changing picture of which purposes for sex tech were considered acceptable—at least, in the eyes of the USPTO.

It turns out patents can tell us interesting things about sex norms. Gilden and Rajec identify several narratives about what sex tech was for, including saving marriages and treating women’s frigidity (both thankfully more historical than contemporary rationales), helping individuals who cannot find sexual partners, avoiding sexually transmitted infections, helping persons with disabilities, and facilitating sexual relations for LGBTQIA individuals.

In recent years (perhaps following the effective demise of beneficial utility in 1999 at the hands of the Federal Circuit in the coincidentally-but-aptly-captioned Juicy Whip v. Orange Bang), pleasure patents have finally copped to being actually about pleasure, telling a narrative of sexual empowerment. Many pleasure patents in this last vein are remarkably forthright pieces of sex ed, among their other functions. As Rajec and Gilden note, “Particularly compared with federally-supported abstinence-only education programs, or the Department of Education’s heavily-critiqued guidelines on student sexual conduct, the federal patent registry provides a pretty thorough education on the anatomies and psychologies of sexual pleasure.” There is much to learn here about the fascinating rise and fall of different utility narratives, and about how the patent system reflects changing social norms.

There is much, too, to like in Gilden and Rajec’s sketched implications for patent law and for studies of law and sexuality. Pleasure patents provide an underexplored window onto the ways patent law shapes (or fails to shape) inventions to which other areas of law are deeply hostile. And for scholars of law and sexuality, who critique law’s overwhelming sex-negativity, the patent system is a surprising haven of sex positivity—if one cloaked in a wide array of acceptability narratives.

The piece also cues up fascinating future work. In particular, patents are typically considered important because they provide incentives for innovation; do they provide such incentives for sex tech? Rajec and Gilden mention a couple of times that the patents they study are “valuable property rights,” but how valuable are those rights, and why? Are patents providing ex ante incentives, as in the standard narrative? Do sex tech inventors rely on the exclusivity of a future patent to develop new products? Or is there something else going on? The imprimatur of government approval on an industry otherwise attacked by the law? Safety to commercialize inventions shielded from robust competition? Shiny patent ribbons to show investors? In short, how should we think about pleasure patents as an innovation incentive?

Gilden and Rajec have found a trove of material in the USPTO files that sheds light on both the patent system and American sex-tech norms over the last century and a half. Patenting Pleasure is an enlightening, provocative, intriguing, and—yes—pleasurable read.

Cite as: Nicholson Price, Illegal Sex Toy Patents, JOTWELL (May 12, 2021) (reviewing Sarah R. Wasserman Rajec and Andrew Gilden, Patenting Pleasure (Feb. 25, 2021), available at SSRN), https://cyber.jotwell.com/illegal-sex-toy-patents/.