The Journal of Things We Like (Lots)

Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions

Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming, 2022), available at SSRN.

Over the last decade, the use of automated license plate reader (ALPR) technology has increased significantly. Several states have adopted legislation regulating the use of ALPRs and associated data.1 At the federal level, bills have been proposed to address law enforcement agencies’ use of ALPRs and companies’ use of automated algorithmic decision-making systems.2 There has been significant debate about the privacy and constitutional implications of government actors’ use of ALPR technology and ALPR data.

However, as Professor Nicole McConlogue observes in her excellent forthcoming article, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, less attention has been paid to corporate actors and the way their use of ALPRs connects with their use of automated algorithmic decision-making. Corporate entities are increasingly using data collected by ALPRs together with predictive analytics programs to determine the types of opportunities that consumers receive. Professor McConlogue makes an important contribution to scholarship in the consumer and technology law fields by exposing the relationship between ALPR technology and automated algorithmic decision-making in the automobile lending industry. Her work links what are often distinct discussions of surveillance technologies and automated decision-making, as used by the private sector in consumer transactions, thus bridging the fields of consumer law and technology law.

Professor McConlogue argues that in contrast to government actors’ use of ALPRs, less attention has been given to the privacy and commercial implications of private entities’ use of ALPR data in financial transactions involving consumers. The article begins by exploring the connections between ALPR technology and the “predictive risk analysis tools” used by lenders and other entities. Professor McConlogue notes that proponents of these technologies suggest that they can be used to “democratize” access to automobiles, thereby helping to address “the discriminatory history of auto access and consumer scoring.”

However, Professor McConlogue contends that the unchecked use of these technologies is more likely to further facilitate discrimination against vulnerable groups of consumers on the basis of race and class. She convincingly argues that automobile consumer scoring using predictive analytics does not “address the points at which bias enters the scoring process.” This defect is further complicated by lenders’ and insurers’ use of ALPR-based data. Once combined with other sources of data, ALPR data and predictive analytics programs can be used by automobile lenders and insurers to set contract terms, rates, and price adjustments in ways that entrench income and wealth disparities. Professor McConlogue’s research indicates that at least one ALPR data vendor has encouraged insurers to evaluate consumers’ vehicle location history to better determine rates when issuing and renewing policies. Companies can also use data generated by ALPR technology to aid in the post-default repossession of consumers’ encumbered collateral, a practice that disproportionately impacts underprivileged consumers.

Professor McConlogue’s article contains useful graphical depictions of the various points at which discrimination enters the lending cycle. She aptly uses these visual depictions along with examples to highlight the potentially discriminatory nature of ALPR technology and predictive analytics. ALPR technology can reveal location data, and Professor McConlogue argues that the location of a consumer’s home can be shaped by the historic legacies of redlining and segregation. Predictive analytics programs that incorporate location data, such as that obtained from ALPR technology, to determine consumers’ scores, contract terms, and prices can thus replicate these discriminatory practices.

Linking privacy to broader consumer protection, Professor McConlogue offers convincing critiques of existing consumer protection laws. The article highlights inadequacies in several sources of law, including the Equal Credit Opportunity Act and the Fair Credit Reporting Act. Professor McConlogue offers a novel way forward that recognizes that multi-faceted comprehensive solutions are necessary to address the problems she highlights. She provides multiple recommendations to fill gaps in existing laws to combat discrimination, and offers other proposals that include prohibiting commercial entities’ use of ALPR technology and restricting companies’ ability to use trade secret protection to obscure their “consumer scoring models.” Professor McConlogue’s most valuable contribution is exposing the important connection between ALPR technology and algorithmic decision-making in consumer lending transactions.

  1. Privacy Law § 1.08, Law Journal Press (ALM Media Properties, 2021); Nat’l Conf. of State Legislatures, Automated License Plate Readers: State Statutes (Apr. 9, 2021).
  2. Reasonable Policies on Automated License Plate Readers Act, H.R. 4303, 115th Cong. (2017); Consumer Online Privacy Rights Act, S. 2968, 116th Cong. (2019).
Cite as: Stacy-Ann Elvy, Automated Algorithmic Decision-Making Systems and ALPRs in Consumer Lending Transactions, JOTWELL (September 24, 2021) (reviewing Nicole McConlogue, Discrimination on Wheels: How Big Data Uses License Plate Surveillance to Put the Brakes on Disadvantaged Drivers, 18 Stan. J. Civ. Rts. & Civ. Lib. __ (forthcoming, 2022), available at SSRN), https://cyber.jotwell.com/automated-algorithmic-decision-making-systems-and-alprs-in-consumer-lending-transactions/.

The Ideology of Bridging the Digital Divide

Daniel Greene’s The Promise of Access: Technology, Inequality, and the Political Economy of Hope has both a sharp theoretical point of view and fascinating ethnographic accounts of a tech startup, a school, and a library in Washington, DC, all trying to navigate a neoliberal economy in which individuals are required to invest in their own skills, education, and ability to change in response to institutional imperatives. Although it doesn’t directly address law, this short book’s critique of technology-focused reimaginings of public institutions suggests ways in which cyberlaw scholars should think about what institutions can, and can’t, do with technology.

Greene argues that many people in libraries and schools have, for understandable reasons, accepted key premises that are appealing but self-defeating. One such premise is that a “digital divide” is the primary barrier preventing poor people from succeeding. It follows that schools and libraries must reconfigure themselves around making the populations they serve into better competitors in the new economy. This orientation entails the faith that the professional strategies that worked for the disproportionately white people in administrative/oversight positions would work for the poor, disproportionately Black and Latino populations they are trying to help. In this worldview, startup culture is touted as a good model for libraries and schools even though those institutions can’t pivot to serve different clients but can only “bootstrap,” which is to say continually (re)invent strategies and tactics in order to convince policymakers and grantmakers to give them ever-more-elusive resources. Because poverty persists for reasons outside the control of schools and libraries, however, these new strategies can never reduce poverty on a broad scale.

Fights over how to properly use the library’s computers—for job searches, not for watching porn or playing games, even though the former might well be futile and the latter two might produce more individual utility—play out in individual negotiations between patrons and librarians (and the library police who link the library to the carceral state). Likewise, in the school, teachers model appropriate/white professional online use: the laptop is better than the phone; any minute of free time should be used to answer emails or in other “productive” ways rather than texting with friends or posting on social media. The school’s racial justice commitments, which had led it to bar most coercive discipline, eventually give way when the pressure to get test scores up gets intense. The abandonment is physically represented by the school’s conversion of a space that students had used to hang out in and charge their phones into a high-stakes testing center with makeshift cardboard barriers separating individual students.

Legal scholars may find interest in Greene’s analysis of the ruinous attractions of the startup model. That model valorizes innovation in ways that leave no room for “losers” who are written out of the narrative but still need to stay alive somehow; it demands, sometimes explicitly, that workers give over their entire lives to work because work is supposed to be its own reward. The startup model is seductive to mayors and others trying to sustain struggling cities, schools, or libraries, but its promises are often mirages. Government institutions can’t—or at least shouldn’t—fire their citizens and get new ones for a new mission when the old model isn’t working. Scholars interested in innovation may learn from Greene’s account of how startup ideology has been so successful in encouraging longstanding institutions to reconfigure themselves, both because that’s a strategy to access resources in a climate of austerity and because the model promises genuinely rewarding work for the professionals in charge.

Another reason for cyberlaw scholars to read Greene’s book is to encounter his challenge to subject matter divides that insulate certain foundational ideas from inspection. To label a problem as one of access to online resources is to suggest that the solution lies in making internet access, and perhaps internet-based training, available. But most of the poor people Greene interviews have smartphones; what they lack are safe physical spaces. Greene recounts how some of the people he talks to successfully execute multiple searches to find open shelter beds, creating a list and dividing responsibilities for making calls to different locations. Many of them are computer-literate, and more job training wouldn’t let them fit into the startup culture that is literally separated from them in the library by a glass wall (entrepreneurs—mostly white—can reserve a separate workspace behind this wall, while ordinary patrons—mostly Black—have to sign up for short-term access to library computers). As with platform regulation debates, when we ask cyberlaw to solve non-cyberlaw problems, we are setting ourselves up for failure.

Moreover, as Greene points out, other governance models are possible. Other countries fund and regulate internet connectivity more aggressively than the US does, meaning that libraries and schools don’t have to be connectors of last resort. Models of libraries and schools as places that empower citizens, rather than places that prepare individuals to go out and compete economically in an otherwise atomized world, are also imaginable—and they have been imagined and attempted before. Much as Stephanie Plamondon Bair’s Impoverished IP widens the focus of IP’s incentives/access model to examine the harms of poverty and inequality on creativity and innovation, Greene’s book calls attention to the fact that “the digital divide” is not, at its heart, about internet access but about economic and social inequality.

Cite as: Rebecca Tushnet, The Ideology of Bridging the Digital Divide, JOTWELL (August 10, 2021) (reviewing Daniel Greene, The Promise of Access: Technology, Inequality, and the Political Economy of Hope (2021)), https://cyber.jotwell.com/the-ideology-of-bridging-the-digital-divide/.

What’s the Harm? The Answer is Many

Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN.

Privacy law scholars have long contended with the retort, “what’s the harm?” In their seminal 1890 article The Right to Privacy, Samuel Warren and Louis Brandeis wrote: “That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection.” Other legal scholars have noted that the digital age brings added challenges to the work of defining which privacy harms should be cognizable under the law and should entitle the complainant to legal redress. In Privacy Harms, an article that is sure to become part of the canon of privacy law scholarship, Danielle Citron and Daniel Solove provide a much-needed and definitive update to the privacy harms debate. It is especially notable that the authors engage the full gamut of the debate, parsing both who has standing to sue over a privacy violation and what damages should apply. This important update to the privacy law literature builds upon the authors’ prior influential solo and joint work, such as Solove’s Taxonomy of Privacy, Citron’s Sexual Privacy, and their joint article Risk and Anxiety.

The article furnishes three major contributions to law and tech scholarship. First, it highlights the challenges deriving from the incoherent and piecemeal patchwork of privacy laws in the U.S., exacerbated by what other scholars have noted are the markedly higher showings of harm demanded in privacy litigation than in other types of litigation. Second, the authors construct a road map for understanding the different genres of privacy harm with a detailed typology. Third, Citron and Solove helpfully provide an in-depth discussion of when and how privacy regulations should be enforced. That exercise is predicated on their viewpoint that the goals of privacy law and the available legal remedies are currently misaligned.

As Citron and Solove note, the heightened prerequisite of a showing of privacy harm serves as an unreasonable gatekeeper to legal remedies for privacy violations. Because such harm is difficult to define and proof of harm is elusive in some cases, this gatekeeping sends a dangerous signal to organizations: they need not heed their legal privacy obligations so long as harm remains difficult to prove.

Citron and Solove then provide a comprehensive typology of privacy harms. This exhaustive typology, which the authors meticulously illustrate with factual vignettes drawn from caselaw, is an especially useful resource for legal scholars, practitioners, and judges attempting to make sense of the morass that is privacy law in the United States. Citron and Solove’s typology encompasses 14 types of privacy harms: 1) physical harms, 2) economic harms, 3) reputational harms, 4) emotional harms, 5) relationship harms, 6) chilling effect harms, 7) discrimination harms, 8) thwarted expectation harms, 9) control harms, 10) data quality harms, 11) informed choice harms, 12) vulnerability harms, 13) disturbance harms, and 14) autonomy harms. While some might quibble about whether some of the harms delineated are truly distinct from each other, the typology is an accessible and deft heuristic for contextualizing privacy harms both in terms of their origin and their societal effects. Two features of this taxonomy are striking. First, in a departure from the authors’ previous solo and collective work, it does not focus on the type of information breached and does not attempt to establish distinct privacy rights (see, for example, Citron’s Sexual Privacy, arguing for a novel privacy right regarding certain sexually abusive behaviors); rather, this new taxonomy is concerned with the harmful effects of the privacy violation. Second, the taxonomy goes beyond individual-level harms to introduce privacy harms that could also be seen as collective, such as chilling effect harms and vulnerability harms.

The article’s final contribution is a discerning examination of when and how privacy harms should be recognized and regulated. This last discussion is important because, as the authors reveal, a focus on legally recognizing only those privacy harms that are easily provable, immediate, or handily quantifiable in monetary terms is detrimental to societal goals. The same can be said when a court’s focus is on a showing of what individual harm has resulted from a privacy violation.

As Citron and Solove remind us, and others have written, privacy harms are not merely individual harms; they are also societal wounds. Privacy as a human right allows for personhood, autonomy, and the free exercise of democracy. Thus, the authors underscore that an undue emphasis on compensation as a remedial goal for privacy violations neglects other important societal considerations.

They observe that privacy regulations do not just compensate for harm; they also serve the useful purpose of deterrence. A requirement of measurable economic or physical harm is truly necessary only for deciding on compensation. If our aim is to preserve privacy for the benefit of what privacy affords us, rather than to compensate for the injury of privacy violations, a decisive query for cutting through the bog is: what amount of damages would be optimal for deterrence?

With this keen analysis, Citron and Solove provide a way forward for determining when and how to adjudicate privacy litigation. As they conclude, for tort cases launched to demand compensation, a showing of harm may be requisite, but for other types of cases, when monetary damages are not sought, a showing of measurable economic or physical harm may be unnecessary.

In conclusion, Citron and Solove have written a truly useful article that provides a vital guardrail for navigating the quagmire of privacy litigation. Yet their article is much more than a practitioner’s guide or judicial touchstone. In plumbing the profundity of privacy harms, Citron and Solove have also started a cardinal socio-legal discourse on the human need for privacy and the societal ends that privacy ensures. This is a conversation that has become even more urgent in the digital era.

Cite as: Ifeoma Ajunwa, What’s the Harm? The Answer is Many, JOTWELL (July 9, 2021) (reviewing Danielle Keats Citron & Daniel J. Solove, Privacy Harms, Geo. Wash. U. L. Stud. Res. Paper No. 2021-11 (Mar. 16, 2021), available at SSRN), https://cyber.jotwell.com/whats-the-harm-the-answer-is-many/.

Update of Jotwell Mailing Lists

Many Jotwell readers choose to subscribe to Jotwell either by RSS or by email.

For a long time Jotwell has run two parallel sets of email mailing lists, one of which serves only long-time subscribers. The provider of that legacy service is closing its email portal next week, so we are going to merge the lists. We hope and intend that this will be a seamless process, but if you find you are not receiving the Jotwell email updates you expect from the Techlaw section, then you may need to resubscribe via the subscribe to Jotwell portal. This change to email delivery should not affect subscribers to the RSS feed.

The links at the subscription portal already point to the new email delivery system. It is open to all readers whether or not they previously subscribed for email delivery. From there you can choose to subscribe to all Jotwell content, or only the sections that most interest you.

Gauging Genetic Privacy

James W. Hazel & Christopher Slobogin, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705 (2021).

Human beings leave trails of genetic data wherever we go. We unavoidably leave genetic traces on the doorknobs we touch, the items we handle, the bottles and cups we drink from, and the detritus we throw away. We also leave a trail of genetic data with the physicians we visit, who may order genetic analysis to help treat a cancer or to assist a couple in assessing their pre-conception genetic risks. Our genetic data, often but not always shorn of obvious identifiers, may be repurposed for research use. If we seek to learn about our ancestry, we may send a DNA sample to a consumer genetics service, like 23andMe, or share the resulting data on a cross-service platform like GEDmatch. If we are arrested or convicted of a crime, we may be compelled to give a DNA sample for perpetual inclusion in an official law-enforcement database. Law enforcement might use each of these trails of genetic data to learn about or identify us—or our genetic relatives.

Should law enforcement be permitted to make use of each and every one of these forms of genetic data, consistent with the Fourth Amendment of the U.S. Constitution? That is the question that motivates James W. Hazel and Christopher Slobogin’s recent article, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment. Hazel and Slobogin take an empirical approach to the Fourth Amendment inquiry, reporting the results of a survey of more than 1,500 respondents and probing which types of data access respondents deemed “intrusive” or treading upon an “expectation of privacy.” Their findings indicate that the public often perceives police access to genetic data sources as highly intrusive, even where traditional Fourth Amendment doctrine might not. As Hazel and Slobogin put it, “our subjects appeared to focus on the location of the information, not its provenance or content.” That is, intrusiveness turns more on who holds the data than on how it was first collected or analyzed. Hazel and Slobogin conclude that their findings “support an argument in favor of judicial authorization both when police access nongovernmental genetic databases and when police collect DNA from individuals who have not yet been arrested.”

Hazel and Slobogin’s analysis is firmly rooted in existing doctrine. As they observe, much genetic data collection, analysis, and use has traditionally been beyond the scope of the Fourth Amendment. The Fourth Amendment extends its protections only to “searches” and “seizures,” and existing doctrine defines government intrusion as a search, in large measure, based on whether government action intrudes upon an “expectation of privacy” that society is prepared to recognize as “reasonable.” Under the so-called “third-party doctrine,” “if you share information, you do not have an expectation of privacy in it.” But in its recent Fourth Amendment decision in Carpenter v. United States, the Supreme Court suggested that the third-party doctrine is not categorical. As Hazel and Slobogin aptly summarize, “In the wake of Carpenter, considerable uncertainty exists about the applicability of the third-party doctrine to genetic information.” Indeed, Justice Gorsuch, dissenting in Carpenter, “used DNA access as an example” of information in which individuals typically expect privacy, despite having entrusted that information to third parties.

Hazel and Slobogin provide an empirical response to this uncertainty. They survey public attitudes regarding the privacy of certain sources of genetic data and the intrusiveness of investigative access to that data. In assessing these attitudes, the authors also queried respondents about a range of non-genetic scenarios, including some clearly within and some clearly beyond existing Fourth Amendment regulation, in order to better gauge relative findings of intrusiveness and privacy. The authors appropriately acknowledge that the platform they used to conduct the survey—Amazon Mechanical Turk—and the population they recruited to participate may be imperfectly representative of the general public. They discuss countermeasures they took to minimize biases in their results, including excluding responses received in under five minutes (which “are indicative that the individual did not answer thoughtfully”).

The results indicate that law-enforcement access to many sources of genetic data ranked as highly intrusive and infringing upon an expectation of privacy. Among other findings, “police access to public genealogy, direct-to-consumer and research databases, as well as the creation of a universal DNA database, were … ranked among the most intrusive activities.” These government activities ranked similarly to searches of bedrooms and emails, and as both more intrusive and more infringing on a reasonable expectation of privacy than “cell location”—the data at issue in the Carpenter case itself. Yet many already-common police collections of genetic data—including surreptitious collection of “discarded” DNA, compelled DNA collection from arrested or convicted persons, and even familial searches in official law enforcement DNA databases—ranked among the least intrusive or privacy-offending activities.

Hazel and Slobogin suggest that Fourth Amendment doctrine should be attentive to societal views about privacy, such as the data uncovered in their survey, and that this should prompt closer scrutiny of the “situs of genetic information” in assessing expectations of privacy. The role of survey data in Fourth Amendment analysis is contested, but one need not subscribe to Hazel and Slobogin’s view of the importance of this data to Fourth Amendment analysis to appreciate their insights.

For one thing, Hazel and Slobogin’s data provide an antidote to claims of broad public support for law enforcement use of consumer genetics platforms to investigate crimes. According to Hazel and Slobogin, government access to consumer genetics data consistently ranked as highly intrusive and privacy-invasive. These findings also lend weight to Justice Gorsuch’s intuition in Carpenter that government access to genetic data from these sources ought to require a warrant or probable cause.

Beyond the Fourth Amendment, Hazel and Slobogin’s findings suggest that Congress or the Department of Health and Human Services ought to act to better protect medical data, especially genetic data in medical records. Survey respondents “ranked law enforcement access to genetic data from an individual’s doctor as the most intrusive of all scenarios, just above police access to other information in medical records.” Under existing law, these records are typically protected from nonconsensual disclosure under the HIPAA Privacy Rule, and physicians and their patients share a fiduciary relationship that is often privacy protective. But the HIPAA Privacy Rule codifies a gaping exception that permits nonconsensual disclosure for law enforcement purposes. As Hazel and Slobogin recognize, the Privacy Rule permits genetic information to be disclosed to law enforcement upon as little as an “administrative request.” That minimal standard runs contrary to the strongly held attitudes about privacy and intrusiveness that Hazel and Slobogin’s study reveals. These findings should provide impetus to act to better protect medical records from government access.

We ought not, however, overinterpret the authors’ results. Their findings indicate limited concern about the most well-known forms of genetic surveillance, through compelled DNA collection from individuals arrested or convicted of crimes or from surreptitiously collected items containing trace DNA that individuals cannot help but leave behind. Perhaps these results reflect a genuine lack of concern with these practices—or perhaps they merely reflect that individuals expect what they know the government is already doing. A one-way ratchet of public acceptance ought to give us pause about findings of non-intrusiveness for well-known police practices.

In sum, Hazel and Slobogin’s article yields important new data suggesting that government access to many sources of genetic data is indeed highly intrusive. That data may inform Fourth Amendment analysis. It also may inform discussions about the fitness of existing statutory and regulatory protections for genetic data, the need for new protections, and the credibility of existing claims of public support for certain uses of such data.

Cite as: Natalie Ram, Gauging Genetic Privacy, JOTWELL (June 10, 2021) (reviewing James W. Hazel & Christopher Slobogin, “World of Difference”? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705 (2021)), https://cyber.jotwell.com/gauging-genetic-privacy/.

Illegal Sex Toy Patents

Sarah R. Wasserman Rajec and Andrew Gilden, Patenting Pleasure (Feb. 25, 2021), available at SSRN.

In Patenting Pleasure, Professors Sarah Rajec and Andrew Gilden highlight a surprising incongruity: while many areas of U.S. law are profoundly hostile to sexuality in general and the technology of sex in particular, the patent system is not. Instead, the U.S. Patent and Trademark Office (USPTO) has over the decades issued thousands of patents on sex toys—from vibrators to AI, and everything in between.

This incongruity is especially odd because patent law has long incorporated a doctrine that specifically tied patentability to the usefulness of the invention, and up until the end of the 20th century one strand of that doctrine held that inventions “injurious to society” failed the utility test. And until about that time—and in some states and localities, even today—the law was exceptionally clear that sex toys were immoral and illegal. Patents issued nonetheless. How did inventors show that their sex toys were useful, despite being barred from relying on their most obvious use? Gilden and Rajec examine hundreds of issued patents to weave an engrossing narrative about sex, patents, and the law.

Two very nice background sections are each worth the price of admission. “The Law of the Sex Toy” canvasses the many ways U.S. law has been historically hostile to sex toys, including U.S. Postal Inspector Anthony Comstock’s 19th century crusade against “articles for self-pollution.” (Comstock, of “Comstock laws” fame, seized over 60,000 “immoral” rubber articles.) Efforts to criminalize sex toys continued in the late 20th century as well; many of these laws are still on the books, including in Texas, Alabama, and Mississippi, and some, including Alabama’s, are still enforced. At the federal level, the 2020 CARES Act included over half a trillion dollars in small business loans as pandemic relief; those making or selling sex toys (as well as other sex-based businesses) were excluded.

What’s all this got to do with patent law? For one thing, patenting illegal sex toys seems a fruitless errand, since it makes little sense (most of the time) to patent things you can’t make or sell. This puzzle goes unaddressed by the authors. At a doctrinal level, patent law’s utility requirement long barred patents on inventions injurious to society like gambling machines; radar detectors; and, it would seem under the laws just mentioned, sex toys. So applicants for pleasure patents would need to assert some utility—while steering clear of beneficial utility’s immorality bar. Gilden and Rajec provide as background a clear and useful overview of the history of so-called “beneficial utility,” including its applicability to sex tech.

One way to thread the needle is to obfuscate. In the early 20th century, many vibrators were advertised for nonsexual purposes with an overt or implicit wink; personal massagers were for nothing but sore muscles. Such stratagems could, did, and do help innovators evade the laws that otherwise vex sex tech. But Rajec and Gilden intentionally step beyond the disguise gambit (though its success raises interesting questions about utility law doctrine in general). Instead, they focus on patented inventions that are obviously, explicitly, and clearly about sex. The USPTO classes inventions according to types of technology, and one classification, A61H19/00, is reserved for “Massage for the genitals; Devices for improving sexual intercourse.” Tough to obfuscate there. So how are inventors getting the hundreds of patents Gilden and Rajec find in this class?

This is the central tension that Patenting Pleasure addresses: because of the utility doctrine, patentees must say what their inventions are for—but because U.S. law has been generally quite hostile to sex and sex tech, pleasure patents have to say they are for something other than, well, pleasure. In the heart of the piece, Rajec and Gilden carefully catalog these descriptions over time, revealing a changing picture of what sorts of purposes were considered acceptable sex tech—at least, in the eyes of the USPTO.

It turns out patents can tell us interesting things about sex norms. Gilden and Rajec identify several narratives about what sex tech was for, including saving marriages and treating women’s frigidity (both thankfully more historic than contemporary rationales), helping individuals who cannot find sexual partners, avoiding sexually transmitted infections, helping persons with disabilities, and facilitating sexual relations by LGBTQIA individuals.

In recent years (perhaps following the effective demise of beneficial utility in 1999 at the hands of the Federal Circuit in the coincidentally-but-aptly-captioned Juicy Whip v. Orange Bang), pleasure patents have finally copped to being actually about pleasure, telling a narrative of sexual empowerment. Many pleasure patents in this last vein are remarkably forthright pieces of sex ed, among their other functions. As Rajec and Gilden note, “Particularly compared with federally-supported abstinence-only education programs, or the Department of Education’s heavily-critiqued guidelines on student sexual conduct, the federal patent registry provides a pretty thorough education on the anatomies and psychologies of sexual pleasure.” There’s much to learn here of the fascinating rise and fall of different utility narratives and how the patent system reflects changing social norms.

There is much, too, to like in Gilden and Rajec’s sketched implications for patent law and for studies of law and sexuality. Pleasure patents provide an underexplored window onto the ways patent law shapes (or fails to shape) inventions to which other areas of law are deeply hostile. And for scholars of law and sexuality, who critique law’s overwhelming sex-negativity, the patent system is a surprising respite of sex positivity—if cloaked in a wide array of acceptability narratives.

The piece also cues up fascinating future work. In particular, patents are typically considered important because they provide incentives for innovation; do they provide incentives for sex tech? Rajec and Gilden mention a couple of times that the patents they study are “valuable property rights,” but how valuable are those rights, and why? Are patents providing ex ante incentives, as in the standard narrative? Do sex tech inventors rely on the exclusivity of a future patent to develop new products? Or is there something else going on? The imprimatur of government approval on an industry otherwise attacked by the law? Safety to commercialize inventions shielded from robust competition? Shiny patent ribbons to show investors? In short, how should we think about pleasure patents as innovation incentive?

Gilden and Rajec have found a trove of material in the USPTO files that sheds light on both the patent system and American sex-tech norms over the last century and a half. Patenting Pleasure is an enlightening, provocative, intriguing, and—yes—pleasurable read.

Cite as: Nicholson Price, Illegal Sex Toy Patents, JOTWELL (May 12, 2021) (reviewing Sarah R. Wasserman Rajec and Andrew Gilden, Patenting Pleasure (Feb. 25, 2021), available at SSRN), https://cyber.jotwell.com/illegal-sex-toy-patents/.

Content Cartels and Their Discontents

evelyn douek, The Rise of Content Cartels, Knight First Amendment Inst. at Columbia Univ. (2020).

Content moderation is a high-stakes, high-volume game of tradeoffs. Platforms face difficult choices about how aggressively to enforce their policies. Too light a touch and they provide a home for pornographers, terrorists, harassers, infringers, and insurrectionists. Too heavy a hand and they stifle political discussion and give innocent users the boot. Little wonder that platforms have sometimes been eager to take any help they can get, even from their competitors.

evelyn douek’s The Rise of Content Cartels is a careful and thoughtful exploration of a difficult tradeoff in content-moderation policy: centralized versus distributed moderation. The major platforms have been quietly collaborating on a variety of moderation initiatives to develop consistent policies, coordinated responses, and shared databases of prohibited content. Sometimes they connect through nonprofit facilitators and clearinghouses, but increasingly they work directly with each other. douek’s essay offers an accessible description of the trend and an even-handed evaluation of both its promise and its perils.

Take the problem of online distribution of child sexual abuse materials (CSAM). There is a broad consensus behind the laws criminalizing the distribution of CSAM images, such images have no redeeming societal value, and image-hashing technology is quite good at flagging only uploads that are close matches for ones in a reference database. Under these circumstances, it would be wasteful for each service to maintain its own database of CSAM hashes. Instead, the National Center for Missing and Exploited Children (NCMEC) maintains a shared database, which is widely used by content platforms to check uploads.
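To make the mechanism concrete, here is a minimal sketch of hash-based upload screening against a shared reference database. The names (REFERENCE_HASHES, screen_upload) are hypothetical, and the cryptographic SHA-256 hash stands in for the perceptual hashes (such as Microsoft’s PhotoDNA) that real systems use so that resized or re-encoded copies still match.

```python
# Minimal sketch of hash-based upload screening against a shared
# reference database. A cryptographic hash is used here for
# simplicity; it matches only byte-identical files, whereas real
# systems use perceptual hashes that tolerate re-encoding.
import hashlib

# Hypothetical shared database of known-prohibited hashes, maintained
# by a central clearinghouse and distributed to member platforms.
REFERENCE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(upload: bytes) -> str:
    """Compute a stable fingerprint for an uploaded file."""
    return hashlib.sha256(upload).hexdigest()

def screen_upload(upload: bytes) -> bool:
    """Return True if the upload matches a known-prohibited hash."""
    return fingerprint(upload) in REFERENCE_HASHES

# Every participating platform runs the same check against the same
# database before accepting an upload.
print(screen_upload(b"test"))  # True: SHA-256 of b"test" is in the set
```

Even in this toy version, the design point douek’s account turns on is visible: the screening logic is trivial and identical across platforms, so the real power lies with whoever curates the shared database.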

douek traces the spread of the NCMEC model, however, to other types of content. The next domino to fall was “terrorist” speech: not always so clearly illegal and not always so obviously low-value. The Global Internet Forum to Counter Terrorism helps the platforms keep beheading videos from being uploaded. There have been similar initiatives around election interference, foreign influence campaigns, and more. I would add that technology companies have long collaborated with each other on security and anti-spam responses (often with law enforcement in the room as well) in ways that effectively amount to a joint decision on what content can and cannot transit their systems.

When there are so few platforms, however, content collaboration can become content cartelization. The benefits of cartelized content moderation are many. Where there is an existing consensus on which content is acceptable, policy enforcement is more effective because platforms can pool their work. Even where there is not, platforms can learn from each other by sharing best practices. Some coordinated malicious activity is hard to detect when each platform holds only one piece of the puzzle; botnet takedowns now involve industry partners in dozens of countries. And to be effective, bans on truly bad actors need to be enforced everywhere, or those actors will simply migrate to the most permissive platform.

But douek smartly explains why content cartels are also so unsettling. They make it even harder to assess responsibility for any given moderation decision, both by obscuring who actually made it and by slathering the whole thing in a “false patina of legitimacy.” They amplify the existing “power of the powerful” by removing one of the classic safety valves for private platform speech restrictions: alternative avenues for the speaker’s messages. And, much like economic cartels, they present decisions made in smoky back rooms as though they were the “natural” outcomes of “market” forces.

douek’s explanation of how coordinated content moderation cuts against the rhetoric of competition these companies normally adopt is particularly sharp. Even the name itself, “content cartels,” points out the way in which this coordinated behavior raises questions of antitrust law and policy. To this list might be added the danger that content-moderation creep will turn into surveillance creep as platforms decide that, to make decisions about their own users’ posts, they need access to information about those users’ activities across the Internet.

The Rise of Content Cartels resists the temptation to cram platform content moderation into a strictly “private” or strictly “public” box. Like douek’s forthcoming Governing Online Speech: From ‘Posts-As-Trumps’ to Proportionality and Probability, it is thoughtful about the relationship between power and legitimacy, and broad-minded about developing new hybrid models to account for the distinctive character of our new speech and governance institutions.

It is an exciting time for content-moderation scholarship. Articles from just five years ago read as dated and janky compared with the outstanding descriptive and normative work now being published. douek joins scholars like Chinmayi Arun, Hannah Bloch-Wehba, Joan Donovan, Casey Fiesler, Daphne Keller, Kate Klonick, Renee DiResta, Sarah T. Roberts, and Jillian C. York in doing important work in this urgently important field. To borrow a phrase, make sure to like and subscribe.

Cite as: James Grimmelmann, Content Cartels and Their Discontents, JOTWELL (April 13, 2021) (reviewing evelyn douek, The Rise of Content Cartels, Knight First Amendment Inst. at Columbia Univ. (2020)), https://cyber.jotwell.com/content-cartels-and-their-discontents/.

‘Practical and effective protection’ of human rights in the era of data-driven tech: Understanding European constitutional law

In her General Principles of the European Convention on Human Rights, Janneke Gerards demonstrates how one of Europe’s two highest Courts offers ‘practical and effective’ protection to a number of human rights. These rights are at stake when governments or other big players use data-driven measures to fight, for example, international terrorism, a global pandemic, or social security fraud. For those who wish to understand how the General Data Protection Regulation (GDPR) is grounded in European constitutional law, this book is an excellent point of departure, because the GDPR explicitly aims to protect the fundamental rights and freedoms of natural persons. Rather than ‘merely’ protecting the right to privacy of data subjects, the GDPR does not mention privacy at all; it is pertinent to all human rights, including non-discrimination, fair trial, the presumption of innocence, privacy, and freedom of expression.

Those not versed in European law may frown upon calling the European Convention on Human Rights (ECHR, “the Convention”) European constitutional law, as they may conflate ‘Europe’ with the European Union (EU). The EU has 27 Member States, all of which are Contracting Parties to the Convention, and at the constitutional level the EU is grounded in the various Treaties of the EU and in the Charter of Fundamental Rights of the EU (CFREU, “the Charter”). The Convention is part of a larger European jurisdiction, namely that of the Council of Europe (CoE), which has 47 Contracting Parties. The CoE is an international organisation, whereas the EU is a supranational organisation (though not a federal state). To properly understand both the GDPR and the Charter, however, one must first immerse oneself in the ‘logic’ of the Convention, because the Charter stipulates that the meaning and scope of Charter rights that overlap with Convention rights are at least the same as those of Convention rights. The reader who finds all this complex and cumbersome may want to consider that the overlap often enhances the protection of fundamental rights and freedoms, similar to how the interrelated systems of federal and state jurisdiction in the US may increase access to justice. It is for good reason that Montesquieu observed that the complexity of the law actually protects against arbitrary rule, providing an important countervailing power against the unilateral power of a smooth, efficient and streamlined administration of ‘justice’ (The Spirit of the Laws, VI, II).

(For those interested in exploring the complexities of the two European jurisdictions to better understand the ‘constitutional pluralism’ that defines European law, I recommend Steven Greer, Janneke Gerards, and Rose Slowe, Human Rights in the Council of Europe and the European Union: Achievements, Trends and Challenges (New York: Cambridge University Press, 2018).)

On 8 April 2014, the Court of Justice of the European Union (CJEU) invalidated the 2006 EU Data Retention Directive (DRD), which required Member States to impose an obligation on telecom providers to retain metadata and to enact legislation allowing criminal justice authorities access to such data (Digital Rights Ireland, Case C-293/12). The CJEU’s invalidation of an entire legislative instrument highlights the significance of Janneke Gerards’ work on the Convention. Let me briefly explain: (1) the CJEU invalidated the DRD because it violated the Charter’s fundamental rights to privacy and data protection; (2) this violation was due to the fact that the DRD was deemed disproportionate in relation to its legitimate goal of fighting terrorism; (3) the reason being that the DRD enabled infringements of privacy and data protection that were not strictly necessary to achieve this goal and therefore not justified; (4) this criterion of necessity, framed in terms of proportionality, builds on the case law of the European Court of Human Rights (ECtHR, ‘the Court’), which decides potential violations of the Convention.

The invalidation of the DRD obviously demonstrates that those who wish to situate the remit of the GDPR should study the EU’s Charter, because the fundamental right to data protection is one of the Charter rights. It also marks out that, where the right to data protection overlaps with the Convention’s right to privacy, the case law of the (other) Court must be taken into account. Thus, precisely because the fundamental right to data protection is part of European constitutional law, those interested in legal protection against data-driven systems should probe the salience of the legal framework for the constitutional protection of human rights in Europe.

In General Principles, Gerards explains in simple and lucid prose how the Convention operates, while nevertheless respecting the complexity of an institutional system that provides human rights protection in 47 national jurisdictions, including Russia and Turkey. She introduces the Convention as ‘a living instrument’ (see section 3.3), which flies in the face of the cumbersome discussions in the US on ‘plain text’ meaning, ‘Framers’ intention,’ and ‘Originalism’. Its meaning is decided by the Court in Strasbourg on a case-by-case basis. The Court squarely faces the need for interpretation that is inherent in text-based law (chapter 4), while taking into account that deciding the meaning of the text decides the level of protection across all 47 Contracting States. The meaning of the Convention is not immutable but adaptive. That is why it is capable of offering what the Court calls ‘practical and effective protection’ (chapter 1). Unlike what some blockchain aficionados seem to believe, immutability does not necessarily offer better protection, especially not in real life.

Gerards discusses the constitutional nature of the Convention and the Court’s emphasis on an interpretation of Convention rights as rights that should be both ‘practical and effective’, while taking into account that the role of the Court is subsidiary in relation to the national courts, which are the primary caretakers. This results in the double role of the Court: (1) supervising compliance by the contracting states on a case-by-case basis, including redress in case of a violation, and (2) providing an interpretation of Convention rights that clarifies the minimum level of protection in all contracting states.

To mediate these twin objectives, the Court has developed an approach that incorporates three steps: (1) the Court decides whether the case falls within the scope of the allegedly violated right; (2) the Court decides whether the right has been infringed; and (3) the Court decides whether the infringement was justified. Though infringements can be justified if specific explicit or implied conditions are fulfilled, some rights are absolute in the sense that if the right is infringed it is necessarily violated, meaning that no justification is possible (notably in the case of torture and degrading or inhuman treatment). Gerards explains how the first and second steps interact: the facts of the case are qualified in light of the applicable Convention text while, in turn, the applicability and the meaning of the Convention text are decided in light of the facts of the case at hand. She understands this as a ‘reflective equilibrium’ where facts and norms, the concrete and the abstract, are (in my own words) mutually constitutive.

General Principles proceeds to a detailed discussion of the principles that determine the Court’s ‘evolutive interpretation’ (chapter 3), which takes into account, on the one hand, the changing understanding of the meaning of convention rights (the first step mentioned above) and on the other hand, the confrontation with new cases that cannot be reduced to prior cases (highlighting the second step). Note that Gerards’ structured conceptual approach is firmly anchored in the case law of the Court, providing concrete examples of the reasoning of the Court based on succinct and lucid accounts of what is at stake in the relevant case law. This is also how she discusses arduous issues such as positive and negative obligations for states (chapter 5) as well as the difference between vertical and horizontal effect (both direct and indirect) (chapter 6), explaining convoluted legal framings without ignoring their complexity.

Finally, Gerards explains in rich detail the third step indicated above, that of justification, anchored in an in-depth and crystal-clear analysis of the Court’s case law. Justification of a restriction of human rights is only possible if three cumulative conditions are fulfilled: the infringing measures are lawful (chapter 8), have a legitimate aim (chapter 9) and are necessary in a democratic society (chapter 10). Lawfulness is interpreted by the Court as legality, not as legalism; it not only requires a basis in written or unwritten law, but also demands both accessibility and foreseeability, while to qualify as lawful the legal basis must incorporate sufficient safeguards to mitigate the impact on relevant human rights (including procedural due care). As to necessity, the Court checks the proportionality between measures and legitimate aim, performing a fair balancing test, taking into account the scope and severity of the infringements in relation to the importance of the aim at stake.

This is the same necessity criterion that plays a crucial role in assessing infringements of the fundamental right to data protection. The Charter requires necessity in a way similar to the Convention, and even though ‘necessity’ figures prominently in the GDPR’s own principles and in its requirement of a legal basis, infringements are often ultimately tested against the necessity principle of European constitutional law. When the CJEU invalidated the DRD, it explicitly invoked the meaning of ‘necessity’ in this sense.

This book is not only relevant as a textbook for students of human rights in Europe. It also offers a detailed account of why and how individual rights and freedoms matter, what difference they can make, and which complex balancing acts must be performed to ensure legal certainty as well as justice. For those seeking protection against algorithmic decision-making and data-driven surveillance, General Principles is a key resource. The clarity of its explanations highlights the difficult dynamics between public and individual interests, between national and supranational jurisdictions, and between the freedom of states to act in the general interest and the freedom of individual citizens from unlawful interference, acknowledging that such individual freedom is also a public good. Whereas human rights can be used to protect the interests of those already in power by ignoring the rights and freedoms of marginalised communities, the Court’s requirement that rights be ‘practical and effective’ rather than formal or efficient gives clear direction to an interpretation strategy that is firmly grounded in a substantive and procedural conception of the rule of law. I guess this comes closest to Jeremy Waldron’s ‘The Rule of Law and the Importance of Procedure’, 50 Nomos 2011, 3-31, underlining the need for institutional checks and balances, without which rule of law checklists offer little to no protection when push comes to shove.

Cite as: Mireille Hildebrandt, ‘Practical and effective protection’ of human rights in the era of data-driven tech: Understanding European constitutional law, JOTWELL (March 15, 2021) (reviewing Janneke Gerards, General Principles of the European Convention on Human Rights (2019)), https://cyber.jotwell.com/practical-and-effective-protection-of-human-rights-in-the-era-of-data-driven-tech-understanding-european-constitutional-law/.

The Data Economy is Political

Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available on SSRN.

Between 2018 and 2020, nine proposals (or discussion drafts) for comprehensive data privacy legislation were introduced in the U.S. Congress. Twenty-eight states introduced 42 comprehensive privacy bills during that time. This is on top of the European Union’s General Data Protection Regulation, which took effect in 2018, and the California Consumer Privacy Act, which took effect in 2020. Clearly, U.S. policymakers are eager to be active on privacy.

Are these privacy laws any good? Put differently, are policymakers drafting, debating, and enacting the kind of privacy laws we need to address the problems of informational capitalism? In Democratic Data: A Relational Theory for Data Governance, Salome Viljoen suggests that the answer is no.

Viljoen’s argument is simple. The information industry’s data collection practices are “primarily aimed at deriving population-level insights from data subjects,” insights that are then applied to individuals who share those characteristics through design nudges, behavioral advertising, and political microtargeting, among other means. (P. 3.) But privacy laws, both in their traditional form and in these recent proposals, “attempt to reduce legal interests in information to individualist claims subject to individualistic remedies that are structurally incapable of representing this fundamental population-level purpose of data protection.” (P. 3.)

Viljoen could not be more right, both in her diagnosis of current proposals and in their structural mismatch with the privacy, justice, and dignitary interests undermined by data-driven business models that traffic in the commodification of the human experience.

Viljoen first notes that privacy has traditionally been legally conceptualized as an individual right. The Fair Information Practice Principles (FIPPs) and a long series of federal sectoral privacy laws and state statutes grant privacy rights to consumers qua individuals. This new crop of privacy laws is no different. They guarantee rights of access, correction, deletion, and portability, among others. But all of these rights are for the individual consumer. Notice-and-choice, the framework for much of U.S. privacy law, operated the same way: Its consent paradigm centered the right to choose or consent in the individual internet user.

This also tracks the scholarly literature in privacy since 1890. Privacy has long been understood as either a negative (freedom from) or positive (freedom to) right, but almost always a right located in the individual. Modern privacy scholarship has moved away from this model, recognizing privacy’s social value, its importance in social interaction and image management, and the connection between privacy and social trust. That terrain is well worn; its inclusion here speaks both to Viljoen’s in-depth knowledge of the literature in her field and to law review editors’ adherence to a model of overlong “background” sections.

Viljoen’s contribution lies not so much in her descriptive claim that privacy law has traditionally conceptualized privacy in individualistic terms as in where she goes from there.

Her notion of “data governance’s sociality problem” is compelling. (P. 23.) Viljoen argues that the relationships between individuals and the information industry can be mapped along two axes: vertical and horizontal. (Pp. 25-27.) The vertical axis is the relationship between us and data collectors. When we agree to Instagram’s terms and conditions and upload a photo of our new dog, we are creating a vertical relationship with Instagram and its parent company, Facebook. The terms of that relationship “structure[] the process whereby data subjects exchange data about themselves for the digital services the data collector provides.”

“Horizontal data relations” are those relations between and among us, data subjects all, who share relevant characteristics. Those who “match” on OKCupid are in a horizontal data relationship with each other. A gay man who “likes” pictures of Corgis is in a horizontal data relationship with everyone targeted for advertisements based on those latent characteristics. So is a person arrested because a facial recognition tool identified him as a suspect socially connected to the person whose voluntarily uploaded picture of the same tattoo was used to train it. (P. 26.)

This leads to a critical point. The person who was arrested has a privacy interest in the collection, use, and processing of data about his tattoo. But his interest is independent of the interests of the person who actually uploaded the picture and started this causal chain of picture, collection, processing, AI training, misidentification, and arrest. It doesn’t matter where the original picture came from. Whoever uploaded it, the victim’s privacy interest is not represented in the vertical data relationship triggered by terms and conditions, a privacy policy, or a picture upload.

Viljoen’s second important contribution flows from the first. She offers a normative diagnosis of why horizontal relationships matter for data governance law. Data extraction’s harms, she argues, stem not only from concerns over my privacy or from our visceral reaction to creepy, ubiquitous surveillance. By merely using technologies that track and extract data from us, we also become unwitting accomplices in the process through which industry translates our behavior into designs, technologies, and patterns that shape and manipulate everyone else. Abetting this system is a precondition of participation in the information age.

For Viljoen, then, the information economy’s core evil is that it conscripts us all into a project of mass subordination that is (not so incidentally) making a few people very, very rich.

This may be Viljoen’s central contribution, and it has already changed my understanding of privacy. Focusing on the individual elides the population-level harms Viljoen highlights. Data flows classify and categorize. Data helps industry develop models to predict and change behavior. And it is precisely this connection between data and the identification of relationships between groups of people that creates economic value. We are deeply enmeshed in perpetuating a vicious cycle that subordinates data subjects while enriching Big Tech. There is no way an individual rights-based regime that gives one person some measure of control over their data can ever address this problem.

And that is, at least in part, where current proposals for comprehensive privacy laws go awry. Although there are differences at the margins, most proposals share a two-part structure: they guarantee individual rights of control and rely on internal compliance structures to manage data collection and use. The rights model, Viljoen shows, inadequately addresses the privacy harms of informational capitalism. So, for that matter, does the compliance model. But that conversation is for another day.

Cite as: Ari Waldman, The Data Economy is Political, JOTWELL (February 12, 2021) (reviewing Salome Viljoen, Democratic Data: A Relational Theory for Data Governance (Nov. 11, 2020), available on SSRN), https://cyber.jotwell.com/the-data-economy-is-political/.

No Machines in the Garden

Rebecca Crootof & BJ Ard, Structuring TechLaw, __ Harv. J.L. & Tech. __ (forthcoming, 2020), available at SSRN.

A decade ago, I mused about the implications and limits of what was then called “cyberlaw.” By that time, scholars had spent roughly 15 years experiencing the internet and speculating that a new jurisprudential era had dawned in its wake. The dialogue between the speculators and their critics was famously encapsulated in a pair of journal articles. Lawrence Lessig celebrated the transformative potential of what we used to call “cyberspace” for law. Judge Frank Easterbrook insisted on the continuing utility of existing law in solving cyber-problems. The latter’s pejorative characterization of cyberlaw as “law of the horse” has endured as a metonym for the idea that law ought not to be tailored too specifically to social problems prompted by some exotic new device.

It turns out, as I mused, that Lessig and Easterbrook and others in their respective camps were arguing on the wrong ground. Cyberspace and cyberlaw pointed the way to an integrative jurisprudential project, in which novel technologies and their uses motivate a larger rethinking of the roles and purposes of law, rather than a jurisprudence of exception (Lessig) or a jurisprudence of tradition (Easterbrook). But it has taken some time for elements of an integrative project to emerge. Rebecca Crootof and BJ Ard, in Structuring Techlaw, are among those who are now building in that direction and away from scholars’ efforts to justify legal exceptionalism in response to various metaphorical horses – among them algorithmic decision making, data analytics, robotics, autonomous vehicles, 3D printing, recombinant DNA, genome editing, and synthetic biology. Their story is not, however, primarily one of power, ideology, markets, social norms, or technological affordances. Julie Cohen, among others, has taken that approach. Structuring Techlaw is resolutely and therefore usefully positivist. The law and legal methods still matter, as such. The law itself can be adapted, reformed, and perhaps transformed.

In that spirit, Structuring Techlaw offers a framework for organizing legal analysis (Pp. 8-9) rather than a solution, so it is (admirably, in my opinion) primarily descriptive rather than normative. Like Leo Marx’s classic The Machine in the Garden, which explored the industrial interruption of the pastoral in American literature, it clarifies the situation. The article is a field guide to problems in technology and law, rather than a theory or a jurisprudential intervention. As a field guide, few of its details will be new to scholars, lawyers, or even students familiar with the technology policy debates of the last 25 years. But the paper collects and organizes those details in a thoughtful, clear way, with priority given to traditional legal forms and to illustrations drawn from a wide variety of technology-animated social problems. Historical problems, including those that long pre-dated the internet, get attention alongside contemporary challenges. The resulting framework is for use by scholars, policymakers, and other decision makers confronted with what Crootof and Ard characterize as a critical problem common to all types of new technology: legal uncertainty in the application and design of relevant rules.

Their broad view requires a broad beginning. “Technology” means devices that extend human capabilities. (P. 3 n.1.) Structuring Techlaw offers the neologism “techlaw” to distinguish solutions to larger-scale social problems created by technology in society from technology-enabled solutions to specific problems in the provision of professional services, the so-called legaltech or lawtech. (Id.)

Techlaw exposes legal uncertainties of three types. The framework consists of those three types, in layers, with some nuances, details, and illustrations added for good measure, together with likely strategies for dealing with each one. Each type of uncertainty is described in terms of familiar debates. Some of those concern the welfare effects of precautionary and permissive regulatory approaches. Some concern choices among updating existing law, imagining new law, and reconceptualizing the legal regime in the context of institutional choices. The full framework is laid out in a single graphic. (P. 11.)

Layer one consists of application uncertainties, in which existing legal rules are deemed to be either too narrow (gaps) or too broad (overlaps) as responses to technology-fostered social problems. Traditional tools of legal interpretation may be used effectively here.

Layer two consists of normative uncertainties, in which technology-fostered problems expose larger concerns about the purposes and functions of the laws in question. Existing law may be revealed to be underinclusive or overinclusive relative to its original aims. This is the space for normative realignment of the law.

Layer three consists of institutional uncertainties, in which the roles and responsibilities of different legal actors are called into question based on concerns about legitimacy, authority, and competence. Are technology-fostered problems best solved by updates supplied by legislatures? By administrative agencies? By courts?

This is not so much a functioning method for reaching a judgment in a particular case as a tool for understanding. Crootof and Ard round out their description with examples at multiple points along the way, but they don’t seek to apply the framework fully either to a real historical case or to an imaginary new one. Instead, the framework is best understood as they describe it (P. 47 n.187): as an idealized template by which observers and participants alike can begin to discern and respond to common patterns in law-making, rather than treat each technology as a shiny new object or, worse, as a distracting but entertaining squirrel. Used over time and across multiple applications, the framework may yet produce an integrated jurisprudence of technology and law.

Will it? If the challenge of resolving uncertainties in legal meaning evokes H.L.A. Hart’s famous “No Vehicles in the Park” illustration of interpretive flexibilities in the law1 – a positivist polestar – that is no accident. Structuring Techlaw is replete with references to Hart (P. 16 n.35) and to Hartian interpretations and extensions. (Pp. 69-70.) But one needs a way to get from what this rule means (per Hart) to how this rule is part of a pattern of multiple rules, some for equivalent instances and some for different ones. Crootof and Ard manage the transition to a pattern of multiple rules via an overview of the critical role of analogical reasoning and framing effects in legal interpretation. (Pp. 52-62.) That move is surely the right one; analogies help us scale from case to case, from case to rule, and from rule to system. But its success depends on any number of empirical claims about how legal reasoning actually works in practice, such as those summarized by Dan Hunter, that are beyond the scope of this work.

Moreover, as Crootof and Ard acknowledge, fully specifying the framework and building the resulting field of law requires exploring a standard set of questions regarding comparative institutional advantage. They don’t do that in Structuring Techlaw. Tantalizingly, they promise that exploration in an additional paper. (P. 9 n.19.)

Even more tantalizing are glimpses of the jurisprudence yet to come. I wondered a bit about Structuring Techlaw’s emphasis on legal uncertainty. The return to positivism is an important move, but some scholars today place significant normative weight on humans and humanity in legal systems precisely because of the lack of predictability, certainty, and consistency that human imaginations entail in practice.2 Some scholars argue that contestability of legal meaning, an attribute akin to uncertainty, is both essential to the rule of law and threatened by some novel technologies.3 Crootof and Ard hint that there is more in store on this point. Understanding humans in technological systems, or “loops,” is the promised subject of an “aspirational” manuscript. (P. 12 n.26.)

I can’t wait.

  1. H.L.A. Hart, Positivism and the Separation of Law and Morals, 71 Harv. L. Rev. 593 (1958). Lon Fuller’s reply was published as Lon L. Fuller, Positivism and Fidelity to Law – A Reply to Professor Hart, 71 Harv. L. Rev. 630 (1958).
  2. Brett M. Frischmann & Evan Selinger, Re-Engineering Humanity (2018); Meg Leta Jones & Karen Levy, Sporting Chances: Robot Referees and the Automation of Enforcement, We Robot 2017.
  3. Mireille Hildebrandt, Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics, 68 U. Toronto L.J. Supp. 1, 12 (2018). DOI: 10.3138/utlj.2017-0044
Cite as: Michael Madison, No Machines in the Garden, JOTWELL (January 13, 2021) (reviewing Rebecca Crootof & BJ Ard, Structuring TechLaw, __ Harv. J.L. & Tech. __ (forthcoming, 2020), available at SSRN), https://cyber.jotwell.com/no-machines-in-the-garden/.