The Journal of Things We Like (Lots)

Privacy Depends

Solon Barocas & Karen Levy, Privacy Dependencies, 95 Wash. L. Rev. 555 (2020).

American law typically treats privacy and its associated rights as atomistic, individual, and personal—even though in many instances, that privacy is actually relational and interdependent in nature. In their seminal article on The Right to Privacy, for instance, Samuel Warren and Louis Brandeis described privacy as a “right to be let alone.” Doctrines of informed consent are generally concerned with “respect[ing] individual autonomy,” even as the information disclosed or withheld by that consent may implicate the privacy of others. Similarly, consumer genetics platforms seek authorization from a single individual before processing or uploading a genetic profile, even though law enforcement now routinely searches those profiles to identify distant relatives who may have committed prior criminal acts.

In their article, Privacy Dependencies, Solon Barocas and Karen Levy move beyond the observation that privacy is relational to provide a typology of the “varied ways in which one person’s privacy is implicated by information others reveal.” They identify three broad types of privacy dependencies: those based on our social or other ties (tie-based dependencies), those drawn from our similarities to others (similarity-based dependencies), and those revealed by our differences from others (difference-based dependencies). While social norms or legal obligations may serve to discipline some of these privacy dependencies, they will be inapplicable or inapposite for many others. Barocas and Levy masterfully survey the wide range of normative values and diverse areas of law that may be affected by privacy dependencies. Taking genetic data as a case study, Barocas and Levy then demonstrate how each form of privacy dependency can arise in this context—and how each has been exploited in criminal investigations. They conclude that a greater attentiveness to privacy dependencies, and when and how they arise, can inform better policymaking and give us greater purchase on the values that privacy serves.

Barocas and Levy devote the bulk of their article to identifying and explaining each of the three forms of privacy dependencies that make up their typology, subdividing each into several subtypes. The first category of privacy dependencies, tie-based dependencies, exploits information gathered about one individual (Alice) to learn about another individual (Bob) by virtue of some relationship between them, whether known or unknown to Alice and Bob themselves. Barocas and Levy further subdivide this category into four types. A “passthrough” is a tie-based dependency in which Alice passes information about Bob on to some observer, or Alice and Bob share information through some third-party intermediary like Facebook or Gmail. A “bycatch” occurs where information about Bob is incidentally, but foreseeably, collected in the process of learning about Alice, as with police body-worn cameras. “Identification” can turn on a tie-based dependency, as where an unknown Bob can be identified due to his connection to a known Alice. Finally, “tie-justified dependencies” exploit social ties between Alice and Bob to justify expanding surveillance from Alice alone to also include Bob.

The government has exploited each of these forms of privacy dependency in national security and criminal investigations, as in the investigative use of consumer genetic data to target genetic relatives as suspects or the National Security Agency (NSA) bulk telephony metadata program. So too have social media entities, as in the Cambridge Analytica scandal at Facebook or Amazon Ring’s surveillance devices. Troublingly, for the most part, the law has not vested individuals whose privacy is affected by a tie-based dependency with protections against these kinds of privacy losses. Indeed, key Fourth Amendment doctrines encourage the government to exploit our interdependent data privacy. Moreover, social norms may be of limited utility in guarding against unwelcome exposure, particularly where the tie being exploited is involuntary or unknown to its subjects.

The second category of privacy dependencies that Barocas and Levy identify is based on similarity, in which information that Alice discloses about herself may be imputed to Bob insofar as Bob “is understood to be similar to Alice.” This form of dependency may turn on three ways in which individuals may be “similar” to others: based on “the company you keep”; on some “socially salient characteristics that you share with others (e.g., gender, race, and age), but with whom you hold no explicit social ties”; or more distantly, on “non-socially-salient” characteristics, as in behavioral advertising.

Insurance is a paradigm example of similarity-based inference at work, but these dependencies may also arise in the context of criminal law (where bail, sentencing, and other decisions may turn in part on statistical risk assessment tools), credit scoring, advertising, and other areas. As Barocas and Levy observe, “[s]imilarity-based dependencies violate the moral intuition that people deserve to be treated as individuals and subject to individualized judgment.” And yet, “there is no way to avoid using generalizations or avoid being subject to them.” Moreover, similarity-based dependencies may be troubling both “when they subject people to coarse generalizations” and “when they allow for overly granular distinctions.” Particularly when they depend on non-socially-salient characteristics, similarity-based dependencies may fail to elicit the social solidarity that might restrain the excesses of this data inference mechanism.

Finally, difference-based dependencies arise when, by revealing some information about herself, Alice enables an observer to learn something about Bob by making herself distinguishable from him. Here, too, this dependency may occur in three ways: by “process of elimination,” in which Alice’s disclosure makes an unknown Bob’s ultimate identification more likely; by “anomaly detection,” in which Bob’s atypicality becomes apparent by comparing his data to that of many “normal” Alices; or by “adverse inference,” in which Bob’s refusal to disclose some information appears more suspect because most Alices disclose. Importantly, unlike tie-based and similarity-based dependencies, none of these forms of difference-based dependency requires a prior connection between Alice and Bob. Moreover, there is little Bob can do to protect his privacy in these cases. As Barocas and Levy observe, “any attempts he might make to do so may, perversely, make him stand out even more.” The difficulty of this kind of dependency is evident in the NSA’s approach to encrypted communications, which has treated the fact of encryption itself as a basis for retention and analysis.

For these difference-based dependencies, collectivity is “essential to privacy preservation here.” Yet collective action may be difficult to muster where individuals may be “unaware of the effects of their disclosures or acting out of requirement or self-interest.” Instead, difference-based dependencies, Barocas and Levy conclude, are best restrained by restricting mass data collection in the first instance, since difference becomes apparent only in comparison to many others.

The payoffs of Barocas and Levy’s detailed typology of privacy dependencies are several. For one thing, as Barocas and Levy explain in a case study of privacy dependencies in genetic data, statutory protections may yield unexpected privacy dividends, where a protection adopted with one type of dependency in mind may come to protect against manipulation of another. Consider the Genetic Information Non-discrimination Act (GINA), which, although enacted as an anti-discrimination statute, has demonstrated value as an employee-privacy statute as well. Barocas and Levy also describe myriad ways in which law enforcement has exploited privacy dependencies in the context of genetic data. In so doing, as Barocas and Levy observe, identifying the various privacy dependencies at work can “help us determine if and when we even recognize Bob as a party with a legitimate privacy claim,” “shed light on the varied normative goals that we expect privacy to serve,” and “suggest possible targets for intervention.”

Perhaps most forcefully, Barocas and Levy provide a further perspective on the inadequacy of notice-and-choice as a paradigm for privacy regulation. As they explain, “[i]f we are scarcely able to make decisions that attend to our own privacy interests, the goal of recognizing shared interests should not be to further burden our individual choices with an expectation that we take into account the interests of others.” And they conclude that “[r]ecognizing the mechanisms that create different forms of dependency does more than demonstrate the shortcomings of privacy individualism; it lays the groundwork for well-tailored policymaking and advocacy.” Ultimately, Barocas and Levy give an irrefutable accounting of the many ways in which individualism fails privacy, and their typology for organizing and understanding these failures makes better privacy law possible.

Cite as: Natalie Ram, Privacy Depends, JOTWELL (August 11, 2022) (reviewing Solon Barocas & Karen Levy, Privacy Dependencies, 95 Wash. L. Rev. 555 (2020)), https://cyber.jotwell.com/privacy-depends/.

The Humble Vending Machine

Gregory Klass, How to Interpret a Vending Machine: Smart Contracts and Contract Law, 7 Geo. L. Tech. Rev. __ (forthcoming, 2022), available at SSRN.

Gregory Klass’s How to Interpret a Vending Machine: Smart Contracts and Contract Law is an extraordinarily incisive legal analysis of smart contracts. While others have written insightfully about the relationship of smart contracts and legal contracts, Klass utterly nails a central conceptual point: When smart contracts are embedded in legal relationships, they stand in need of interpretation.

Nick Szabo introduced smart contracts in the 1990s as contracts “embedded in the world” such that breach is expensive or impossible. Whereas traditional contracts rely on the legal system (backed by threat of force) to enforce their terms, smart contracts use hardware and software to automatically enforce their terms. Szabo gives the example of a “humble vending machine” that takes in coins and dispenses products, and then argues that software and cryptography make it possible to craft much more sophisticated agreements than simple cash sales.

Seen this way, smart contracts are not contracts but mechanisms, and hence the vending-machine analogy is apt. What is important is not what they mean but what they do. Klass shows that even mechanisms need interpretation. Through a sequence of entertaining hypos, he demonstrates that courts confronting cases involving mechanisms embedded in contracts must use the methods of legal interpretation to reason about what those mechanisms are understood to do, just as they reason about what contractual text is understood to do.

His examples, cleverly, also involve the humble vending machine. He starts with a “standard,” “black-box” vending machine: Charles Kingsfield, Jr. inserts two quarters into an Acme Vending Company machine, and a hidden mechanism dispenses a sugary snack. If the machine jams and the snack fails to fall, this is a breach of a contract of sale and Kingsfield is entitled to a refund. How do we know it’s a contract of sale governed by the UCC? From the “shared cultural understanding of what vending machines do.” That shared cultural understanding doesn’t depend on the specifics of the gears and cams in the guts of Acme’s machine; instead, it has to do with how the form of the vending machine itself communicates an offer to engage in cash transactions for the sale of snacks.

Next, Klass invites the reader to consider a “glass-box” vending machine: mutually designed and constructed by two contracting parties to implement their exchange. Orville inserts chocolate truffles into the machine, and Wilbur inserts cash, and the machine duly delivers truffles to Wilbur and cash to Orville in corresponding quantities. Klass persuasively argues that now, the internal workings of the vending machine (and not just its appearance as a vending machine) are relevant. As he puts it, “the design of Orville and Wilbur’s mutually constructed vending machine belongs to the interpretive evidence of their agreement.” If the workings of the machine reveal that it was constructed to dispense thirteen truffles every time Wilbur inserts enough money for twelve, a court could reasonably conclude that Orville and Wilbur’s contract was for baker’s dozens.

This point leads Klass to a well-taken distinction between the design and the operation of the machine. If the bill scanner occasionally reads $20 bills as $10 bills and dispenses only half as many truffles as a result, Orville must deliver the balance of truffles or give Wilbur a refund for the extra $10. But if the machine itself is deliberately constructed to occasionally deliver only half as many truffles as paid for, then this behavior was contemplated by the parties and forms part of their agreement. As a programmer would say, the contractual meaning of a glass-box mutually constructed mechanism is determined by its features, not by its bugs.
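Klass’s feature-versus-bug point can be put in programmers’ terms. The sketch below is my own illustration rather than anything in the article; the truffle price, the baker’s-dozen rule, and the bill-scanner misread are assumed details chosen only to separate what the parties built the machine to do from what it happens to do on a bad day.

```python
# A minimal, illustrative sketch (mine, not Klass's) of Orville and Wilbur's
# glass-box truffle machine. The price, the baker's-dozen rule, and the
# scanner misread are hypothetical assumptions.

PRICE_PER_TRUFFLE = 2  # dollars; an assumed figure


def truffles_owed(amount_paid: int) -> int:
    """The machine's *design*: the rule the parties deliberately built in.

    Every full dozen paid for yields a thirteenth truffle. Because this rule
    is part of the mutually constructed mechanism, it reads as evidence of a
    baker's-dozen agreement rather than as an error.
    """
    truffles_paid_for = amount_paid // PRICE_PER_TRUFFLE
    dozens, remainder = divmod(truffles_paid_for, 12)
    return dozens * 13 + remainder


def truffles_dispensed(amount_paid: int, scanner_misreads: bool = False) -> int:
    """The machine's *operation* on a particular day.

    A faulty bill scanner that credits only half the money inserted is a
    malfunction, not a term of the deal; the resulting shortfall is a breach
    to be remedied, not part of the agreement.
    """
    credited = amount_paid // 2 if scanner_misreads else amount_paid
    return truffles_owed(credited)


if __name__ == "__main__":
    paid = 24  # enough for a dozen truffles at the assumed price
    print(truffles_owed(paid))                               # 13: the deliberate feature
    print(truffles_dispensed(paid, scanner_misreads=True))   # 6: the bug, and so a breach
```

On this toy model, the coded baker’s-dozen rule is interpretive evidence of the bargain, while a shortfall caused by the scanner is a malfunction calling for a remedy.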

Klass’s third example, and the most vexed, involves glass-box vending machines that are unilaterally constructed. We are back to Acme again, except that now there is a large glass panel on the side of the machine that allows users to inspect the details before they insert their coins. Most of the time, this makes no difference. Professor Kingsfield can’t even remember the names of his students; he can hardly be expected to inspect the inner workings of every glass-sided vending machine he encounters. If the machine eats his quarters, Acme owes him a refund regardless of whether the quarter-eating is a deliberate feature or an unintentional bug.

This said, Klass argues that there might be a few circumstances under which a purchaser should be held to have agreed to non-standard vending-machine behavior. Perhaps, for example, the machine is installed in the lounge of the mechanical engineering department and comes with a prominent disclaimer warning users to inspect the mechanism closely to understand what it does. But these cases will be rare. Under most circumstances, the mechanism is not relevant evidence as to the understanding of parties who have not participated in its design.

The genius of the article is that the argument goes through almost perfectly unchanged when the implementation moves from vending machines to blockchains. Interwoven with its discussion of hardware-based vending machines is an equally sophisticated analysis of software-based smart contracts. There are circumstances under which a smart contract’s code — its text and not just its effects — is necessary to understand the terms of a contract involving it. But these circumstances are neither “never” nor “always,” so even the question of whether to interpret the code is context-dependent.

Klass’s thought experiments show that what matters is the relationship between a mechanism and its users: not the technical details themselves but how users understand or should be expected to understand them. This viewpoint, which is characteristic of legal institutions in general and of contract law in particular, connects to Karen Levy’s important work on “book-smart” contracts, in which she shows that smart contracts fail to capture the ways in which “people use contracts as social resources to manage their relations.” Klass’s reflections about “what legal contracts do” build on Levy’s insights to argue that the “techno-utopian dream of governance by code rests on an anemic view of human sociability.”

This perceptive article is of obvious interest to scholars working on blockchain- and Internet-related issues. But its creative application of contract doctrine should also be of interest to contract theorists and legal philosophers. Klass offers important insights about the nature of bargained-for exchange in a world of mechanisms, and about the nature of legal interpretation when the texts to be interpreted are written in a programming language rather than in a natural language. This is the smartest analysis of smart contracts I have read.

Cite as: James Grimmelmann, The Humble Vending Machine, JOTWELL (July 13, 2022) (reviewing Gregory Klass, How to Interpret a Vending Machine: Smart Contracts and Contract Law, 7 Geo. L. Tech. Rev. __ (forthcoming, 2022), available at SSRN), https://cyber.jotwell.com/the-humble-vending-machine/.

Congressional Myopia in Biomedical Innovation Policy

Rachel Sachs, The Accidental Innovation Policymakers, __ Duke L.J. __ (March 27, 2022 draft, forthcoming 2022), available at SSRN.

Innovation policy is hard. Getting it right requires balancing incentives for developers, consumer access, rewards for later innovators, safety concerns, and other factors. This balance is vitally important and wickedly difficult—even when it’s the focus of concerted, careful, informed effort. How well should we expect it to go when innovation policy is made by accident?

Enter The Accidental Innovation Policymakers, an illuminating new project by Professor Rachel Sachs. Sachs persuasively shows how Congress has repeatedly made substantial changes to innovation policy, seemingly without talking about, seriously considering, or even recognizing that it is doing so. There’s an asymmetry to this accident, and it favors industry. When Congress wants to directly promote innovation, it explicitly gives rewards to the biomedical industry. When Congress focuses on other matters such as patient finances and happens to increase the rewards for biomedical innovation by embiggening the market, no one mentions it. But when Congress, focusing on patient finances, tries to rein in prices and thus decrease rewards for industry, drugmakers scream bloody murder and claim that the engines of progress will grind to a halt. This legislative dynamic is off-kilter. It demands understanding and options for fixes. Sachs provides both.

Sachs grounds her analysis in a rich description of four keystone pieces of legislation that changed the landscape of biomedical innovation: the Orphan Drug Act, the Hatch-Waxman Act (occasionally known by its formal title, the Drug Price Competition and Patent Term Restoration Act of 1984), the Affordable Care Act, and the enactment of Medicare Part D (covering outpatient prescription drugs). Each of these Acts created substantial incentives for innovation. More obviously, the Orphan Drug Act and Hatch-Waxman Act each created new forms of exclusivity for drugs. Less obviously, the Affordable Care Act and Medicare Part D vastly expanded insurance coverage for drugs, including for the poor (ACA) and the elderly (Medicare Part D).1 This increased market size, prompting innovation in the relevant areas. As Sachs shows through meticulous legislative history, Congress talked lots about incentives for the first two, and essentially not at all for the latter two.

Aha, you say! Maybe it’s just that Congress doesn’t understand that bigger markets create more incentives to develop products. But no—when Congress considers shrinking markets to protect patient pocketbooks, pharma promptly prognosticates plummeting productivity. And Congress listens. In fact, the Congressional Budget Office’s analyses of drug-pricing-cut proposals explicitly include estimates of how many drugs won’t be invented. So Congress makes biomedical policy accidentally, but only in one direction, when it increases incentives.

That descriptive insight is a valuable contribution on its own; Sachs has slogged through the legislative history (so we don’t have to) to understand why the ACA’s innovation policy implications remain underspecified. But what are we to make of all this? Sachs makes three incisive points.

First, if Congress has been making innovation policy accidentally, that policy probably demands pretty close scrutiny. For instance, did Congress really mean, by changing insurance policy, to increase incentives for new drugs for seniors, regardless of whether those drugs needed additional incentives? It doesn’t seem that it did, and if it did, that seems like a poor call and fodder for scholarly attention, especially when accidental policy results in innovation inequities.

Second, and related, the accidental and asymmetric nature of innovation policy means we should be especially cautious about baseline assumptions. If the status quo is at least in part an accident, there’s no particular reason to think we’ve magically wandered into the right answer. Incentives might be too low, or too high, or just misaligned with the type of innovation we really want. That last possibility seems awfully likely, based on separate work by Sachs, Hemel and Ouellette, Eisenberg, and others.

Third (and frankly the kind of insight that I’m glad someone like Sachs can share because it requires deep Congressional weediness), Congressional dynamics are just poorly set up to get the full innovation picture. Who would think that Congress misses big chunks of innovation policy because different committees have jurisdiction over different acts and see different bits of the elephant? Sachs. (The Judiciary Committee, for instance, sees patent laws, but not most health laws; Ways & Means sees Medicare but not intellectual property or FDA. For more, see…Sachs!)

The piece closes with institutional suggestions to fix Congressional innovation myopia. The Congressional Budget Office could tackle the task (though it doesn’t make recommendations or consider impacts of prior legislation); the sadly-defunct Office of Technology Assessment could evaluate broad pictures (though it, well, doesn’t exist); or the lesser-known Medicare Payment Advisory Commission or Medicaid and CHIP Payment and Access Commission (MedPAC and MACPAC, respectively and delightfully) could weigh in (though their foci are narrower than biomedical innovation writ large). Each has its problems, but any could help. It’s hard to argue with the idea that Congress should better know what it’s doing, and Sachs has identified an apparently substantial hole in that knowledge.

The argument Sachs makes is a compelling one. Congress is making big biomedical innovation policy by accident. One response is for Congress (and others) to actually think it through both prospectively and retrospectively, whatever the institutional mechanism. Another, drawing on Sachs’s other work, is to consider more broadly how creative tools can shape innovation policy more precisely. One reason Congress is making innovation policy by accident is that insurance payment and drug development incentives are so closely connected, often automatically. But they needn’t be; Sachs has argued that delinking reimbursement from development (and particularly approval) could help better align incentives (Hemel and Ouellette, Masur and Buccafusco, and I have also made suggestions in this direction.) Accidents of policy can be fixed or avoided, more so now that Sachs has so clearly delineated the problem.

  1. ACA junkies will ask about the Biologics Price Competition and Innovation Act, buried in title VII of the ACA and the biologics analog of the Hatch-Waxman Act. Sachs mentions the BPCIA but focuses on the ACA’s coverage provisions.
Cite as: Nicholson Price, Congressional Myopia in Biomedical Innovation Policy, JOTWELL (June 13, 2022) (reviewing Rachel Sachs, The Accidental Innovation Policymakers, __ Duke L.J. __ (March 27, 2022 draft, forthcoming 2022), available at SSRN), https://cyber.jotwell.com/congressional-myopia-in-biomedical-innovation-policy/.

Confronting Surveillance

Amanda Levendowski, Resisting Face Surveillance with Copyright Law, 100 N. C. L. Rev. __ (forthcoming, 2022), available at SSRN.

One prevailing feature of technological development is that it is not sui generis. Rather, new technologies often mirror or reflect societal anxieties and prejudices. This is true for surveillance technologies, including those used for facial recognition. Although the practice of facial recognition might be positioned as a type of convincing evidence useful for identifying an individual, the fact remains that racial and gender biases can limit its efficacy. Scholars such as Timnit Gebru and Joy Buolamwini have shown through empirical evidence that facial recognition systems, which are often trained on limited data, display stunningly biased inaccuracy. The two AI researchers reviewed the performance of facial analysis algorithms across four “intersectional subgroups” of males or females featuring lighter or darker skin. They made the startling discoveries that the algorithms performed better when determining the gender of men as opposed to women, and that darker faces were most likely to be misidentified.

In her path-breaking article, Resisting Face Surveillance with Copyright Law, Professor Amanda Levendowski identifies these harms and others, and advocates for the proactive use of copyright infringement suits to curb the use of photographs as part of automated facial surveillance systems. First, Levendowski illustrates why the greater misidentification of darker faces by algorithmic systems is a problem of great concern. Levendowski shares the story of Robert Julian-Borchak Williams, who was placed under arrest in front of his home and in view of his family. A surveillance photograph had been used to algorithmically identify him. However, once the photograph was compared to Mr. Williams in person, it was obvious that he had been misidentified. The only explanation Mr. Williams got was, “The computer must have gotten it wrong.” The sad reality is that Williams’ case is not unique; there are many more stories of Black men being wrongfully arrested based on misidentification by AI systems. Given the glacial creep of federal legislation to regulate face surveillance, Levendowski advocates for turning to the copyright tools she believes we already have.

Facial recognition systems have proliferated in the past few years. For example, in 2020, an individual taking the Bar exam in New York related how he was directed to “sit directly in front of a lighting source such as a lamp” so the face recognition software could recognize him as present. I have written about and against the troubling use of facial recognition by automated hiring programs. Evan Selinger and Woodrow Hartzog have written about the extensive use of facial surveillance in immigration and law enforcement and have called for a total ban. Although some jurisdictions in the United States have heeded the call to ban the use of facial recognition systems by law enforcement, many others have not, and there is currently no federal legislation banning or even regulating the use of facial recognition systems.

Resisting Face Surveillance with Copyright Law is innovative in its approach of deploying copyright law as a sword against the use of automated facial recognition. As Levendowski argues, “Face Surveillance is animated by deep-rooted demographic and deployment biases that endanger marginalized communities and threaten the privacy of all.” Deploying copyright litigation to stem the use of facial recognition holds great potential for success because, as Levendowski notes, corporations like Clearview AI are trawling selfies and profile pictures online to compose a gargantuan face-recognition database for law enforcement and other purposes. Levendowski notes that Clearview AI has copied about three billion photographs without the knowledge or consent of the copyright holders or even the authorization of the social media companies that host those photographs. Levendowski’s article is one answer to what can be done with the laws we have now to curtail the use of face surveillance.

Levendowski notes that one common defense of scraping — to invoke the First Amendment — would not be viable against copyright claims. Levendowski recounts the Court’s statement in Eldred v. Ashcroft that “copyright law contains built-in First Amendment accommodations” which “strike a definitional balance between the First Amendment and copyright law by permitting free communication of facts while still protecting an author’s expression.” Thus, Levendowski concludes, copyright infringement lawsuits could serve as “a significant deterrent to face surveillance,” particularly given the hair-raising statutory damages of up to $150,000 per work for willful infringement.

However, as Levendowski notes, there are several hurdles to the successful use of a copyright infringement lawsuit against face surveillance. For one, there is the affirmative defense of fair use. Levendowski concedes that the Google v. Oracle decision in 2021, which concluded that Google made a fair use when it copied interface definitions from Java for use in Android, has changed the fair use landscape and may make it less likely for copyright infringement suits against face surveillance systems to prevail. Yet, as Levendowski explains, the use of profile pictures may still fall outside of fair use protections because it is more likely to fail the four-factor test. She argues that unlike search engines, which fairly “use” works in order to point the public to them, facial recognition algorithms copy faces in order to identify faces. That is, the “heart” of the copied work — a person’s face — is the part that is copied by the face surveillance systems, and the use is less transformative than a search engine’s use. Levendowski also draws on recent case law to suggest that courts will be less likely to find the for-profit subscription model deployed by many facial recognition companies to be fair use, compared to the free-to-the-public model used by most search engines.

Levendowski deploys Google v. Oracle and other key fair use cases to assess each fair use factor. First, she notes that surveillance companies are not using the pictures for a new purpose; their reason for using the photographs is the same as for profile pictures: particularized identification. Yet, Levendowski argues, even absent a new purpose, such use may still be somewhat transformative, favoring face surveillance companies. She then also concludes that the nature of the work is creative and that the use features the photographs’ faces (the “heart” of profile pictures), creating unfavorable outcomes for these companies under the middle two factors. Analyzing the final factor, Levendowski concludes that using these photographs harms the unique licensing market for profile pictures, and that this dictates a ruling against fair use.

All in all, although some might not agree with her fair use analysis, I find Levendowski’s proposal to be an ingenious approach to lawyering in the digital age. If I have any reservation, it is that this analysis might hand face surveillance corporations a new tactic: to purchase or license the copyrights in the photographs they use. Such a tactic would be facilitated by social media or other platforms that require users to give up the copyrights to any photos they post. This indicates that there might yet be more regulation needed to address face surveillance. But in the meantime, Levendowski’s lawyering represents a creative approach to the problem of face surveillance.

Cite as: Ifeoma Ajunwa, Confronting Surveillance, JOTWELL (May 12, 2022) (reviewing Amanda Levendowski, Resisting Face Surveillance with Copyright Law, 100 N. C. L. Rev. __ (forthcoming, 2022), available at SSRN), https://cyber.jotwell.com/confronting-surveillance/.

The Disconnect Between ‘Upstream’ Automation and Legal Protection Against Automated Decision Making

Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021).

In their brief and astute article Is That Your Final Decision? Multi-stage Profiling, Selective Effects, and Article 22 of the GDPR, Reuben Binns and Michael Veale discuss the arduous issues of the EU GDPR’s prohibition of impactful automated decisions. The seemingly Delphic article 22.1 of the GDPR provides data subjects with a right not to be subject to solely automated decisions with legal effect or similarly significant effect. As the authors indicate, similar default prohibitions (of algorithmic decision-making) can be found in many other jurisdictions, raising similar concerns. The article’s relevance for data protection law lies mainly in its incisive discussion of how multi-level decision-making fares under such prohibitions and what ambiguities affect the law’s effectiveness. The authors convincingly argue that there is a disconnect between the potential impact of ‘upstream’ automation on fundamental rights and freedoms and the scope of article 22. While doing so, they lay the groundwork for a more future-proof legal framework regarding automated decision-making and decision-support.

The European Data Protection Board (EDPB), which advises on the interpretation of the GDPR, has determined that the ‘right not to be subject to’ impactful automated decisions must be understood as a default prohibition that does not depend on data subjects invoking their right. Data controllers (those who determine purpose and means of the processing of personal data) must abide by the prohibition unless one of three exceptions apply. These concern (1) the necessity to engage such decision-making for ‘entering into, or performance of, a contract between the data subject and a data controller’, (2) authorization by ‘Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’ or (3) ‘explicit consent’ for the relevant automated decision-making.

Binns and Veale remind us that irrespective of whether automated decisions fall within the scope of article 22, insofar as they entail the processing of personal data, the GDPR’s data protection principles, transparency obligations and the requirement of a legal basis will apply. However, automated decisions are often made based on patterns or profiles that do not constitute personal data, precisely because they are meant to apply to a number of individuals who share certain (often behavioral) characteristics. Article 22 seeks to address the gap between data protection and the application of non-personal profiles, both where such profiles have been mined from other people’s personal data and where they are applied to individuals singled out because they ‘fit’ a statistical pattern that in itself is not personal data.

Once a decision is qualified as an article 22 decision, a series of dedicated safeguards is put in place, demanding human intervention, some form of explanation, and an even more stringent prohibition on decisions based on article 9 “sensitive” data (‘revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation’).

The authors are interested in the salient question of how different layers of automation create a disconnect between, on the one hand, the impact on the fundamental rights and freedoms of those targeted and, on the other hand, the protection offered by article 22. For instance, algorithmically inferred dynamic pricing (or ‘willingness to pay’) may be used to inform human decisions on insurance, housing, credit and recruitment. However, it escapes the GDPR’s protection against automated decisions because humans make the final decision. Considering ‘automation bias’, the presorting that takes place in the largely invisible backend systems may disenfranchise those targeted from the kind of human judgement and effective contestability that article 22 calls for. (See recently Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI.) The ensuing gap in legal protection is key to the Schufa case now pending before the Court of Justice of the European Union, which raises the question of whether a credit risk score, generated by the scoring algorithm of a credit information agency and used by an insurance company, in itself qualifies as an automated decision (case C-634/21).

The authors distinguish five types of ‘distinct (although in practice, likely interrelated) challenges and complications’ for the scope of article 22. The first (1) is that adding human input at the level of all data subjects, which affects whether article 22 applies, can still leave a subset of data subjects not protected by that human input. The second (2) is the GDPR’s lack of clarity on ‘where to locate the decision itself.’ The third challenge (3) is whether the prohibition concerns potential or only ‘realised’ impact. The fourth (4) is the likelihood that largely invisible automated backend systems have a major impact irrespective of the human input that is available on the frontend. And the fifth (5) and perhaps most significant challenge is the GDPR’s focus on only the final decision in a chain of relevant decisions, which ignores the impact of prior automated decisions on the choice architecture of those making the final decision. This is the “multi-stage” profiling the authors reference in their title.

The abstruse wording of article 22, probably due to compromises made during the legislative process, may inadvertently reduce or obliterate what the European Court of Human Rights would call the ‘practical and effective’ protection that article 22 nevertheless aims to provide. The merit of the points made by Binns and Veale is their resolute escape from the usual distractions that turn discussions of article 22 into a rabbit hole of fruitless speculation, for instance on whether there is a right to explanation, what this could mean in the case of opaque algorithmic decision-making, and whether explanations are due before decisions are made or only after. As they explain, all this will depend on the circumstances and should be decided in light of the kind of protection the GDPR aims to provide (notably enhancing both control over one’s personal data and accountability of data controllers).

Binns and Veale’s precise and incisive assessment of the complexities of upstream automation and the potential impact on those targeted should be taken into account by the upcoming legislative frameworks for AI and by courts and regulators deciding relevant cases. In the US we can think of the Federal Trade Commission’s mandate and the National Artificial Intelligence Initiative Act of 2020. Binns and Veale remind us of the gaps that will occur in practical and effective legal protection if AI legislation restricts itself to the behavior of data-driven systems instead of incorporating decisions of deterministic decision-support systems, which will be the case if AI is defined such that the latter systems fall outside the scope of AI legislation. Both Veale and Binns are prolific writers; anyone interested in the underlying rationale of EU data protection law and the relevant technical background should keep a keen eye on their output.

Cite as: Mireille Hildebrandt, The Disconnect Between ‘Upstream’ Automation and Legal Protection Against Automated Decision Making, JOTWELL (April 7, 2022) (reviewing Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021)), https://cyber.jotwell.com/the-disconnect-between-upstream-automation-and-legal-protection-against-automated-decision-making/.

Shifting the Content Moderation Paradigm

evelyn douek, Content Moderation as Administration (Jan. 12, 2022), available on SSRN.

As law-and-technology scholar evelyn douek explains in her eye-opening, scholarly, and well-written Content Moderation as Administration, the conventional account of content moderation is wrong and its policy implications are off the mark. douek argues that we should toss aside the assumption that content moderation is a series of individual decisions made by people and computers acting as judges. The better way to think about it is as a process of ex ante rights administration and institutional design. Instead of learning lessons from judicial process, we need to learn from administrative law.

A system of immeasurable scale purportedly designed to reflect liberal First Amendment principles, content moderation now includes algorithms and artificial intelligence, armies of third-party moderators from the Global South paid very little to make decisions in seconds, and a lot of money for Silicon Valley executives. Of course, this has led to repeated and repeatedly horrible results. Content moderation rules and practices facilitated genocide, helped swing elections toward fascists, and routinely and systematically censored queer and nonnormative sexual content. Right-wing politicians got in on the act, as well, claiming designed-in and as-applied anti-conservative bias when the evidence proved the opposite. Facebook responded by creating an oversight board with a lot of fanfare, but very little power.

Through it all, the vision of content moderation has remained roughly the same: ex ante automated filtering and ex post judicialish review of whether user-generated content violated platform policies. If this “first wave” of content moderation scholarship is right, then presumably, the best way to protect speech and social media users is to demand procedural due processish protections: transparency and rights to appeal. And that’s precisely what those members of Congress who are legitimately concerned about content moderation have proposed.

The standard picture of content moderation is like an old Roman emperor whose thumbs up or thumbs down decides the fate of a gladiator: some all-powerful person or all-powerful thing is deciding whether a post stays up or comes down. Content moderation, then, happens post-by-post.

douek explains that almost none of that is helpful or correct. As many scholars have argued, content moderation involves an assemblage of people and things. Platforms do more than just decide to keep content up or take it down. And, most importantly, these misguided assumptions contribute to misguided policy.

Case-by-case ex post review misses systemic failures. It also provides inadequate remedies: a moderator could take something down or put something back up, leaving the problems of training and institutional design untouched. And the cycle will continue as long as the structural problem remains. Case-by-case review also lends itself to privacy theatre like the Facebook Oversight Board. By the nature of its design, it may eventually address a few takedown decisions, but has little to no impact on how the whole system works.

In place of this misguided vision, douek proposes a “second wave” of content moderation scholarship, discourse, and solutions. douek deftly argues that content moderation is a product of ex ante system design. It is one result of a larger institutional structure that frames the flow of all sorts of information. Content moderation is also the product of multiple corporate goals, not just the ostensible desire to reflect and perpetuate a liberal vision of free speech. Policy reform should reflect that.

douek suggests that one way to do that is to learn from the literature in collaborative governance, an approach to administrative regulation of corporations that involves public and private entities working together to achieve mutual goals. It benefits from private expertise while using a wide toolkit—audits, impact assessments, transparency reports, ongoing monitoring, and internal organizational structures, among others—to cabin private discretionary decision-making by making firms accountable to the public and to regulatory agencies. Proponents see the multi-stakeholder model of governance as a more effective way of governing fast-changing and technologically complex systems, an argument made in profound and powerful detail by Margot Kaminski.

Collaborative governance is meant to help regulators supervise vast organizational systems ex ante before they do something wrong. Its ex ante approach and process toolkit are supposed to instantiate public values into every phase of organizational function. In that way, it is supposed to influence everyone, create systems up front, and foster the development of organizations more attuned to popular needs and values.

douek makes a compelling argument that collaborative governance is the better way to approach content moderation, both conceptually and as a matter of policy. Instead of an ex post appeal process, the collaborative governance approach means integrating officers whose entire jobs are to advocate for fair content moderation. It means giving those employees the safety and separation they need from departments with contrary motivations in order to do their work. It means transparency about underlying data and systemic audits of how the system works.

What’s so compelling about Content Moderation as Administration is that it changes the paradigm and pushes us to respond. douek has described a new and more compelling way of looking at content moderation. We all have to learn from their work, especially those of us writing or interested in writing about content moderation, collaborative governance, or both. The challenge, of course, will be guarding against managerialism and performative theatre in the content moderation space. Compliance models are at best risky when not subordinated to the rule of law and, in particular, a vision of the rule of law attuned to the unique pressures of informational capitalism. But those questions come next. Content Moderation as Administration does an outstanding job of challenging the conventional account that has been at the core of content moderation scholarship for so long.

Cite as: Ari Waldman, Shifting the Content Moderation Paradigm, JOTWELL (March 1, 2022) (reviewing evelyn douek, Content Moderation as Administration (Jan. 12, 2022), available on SSRN), https://cyber.jotwell.com/shifting-the-content-moderation-paradigm/.

Debunking the Myth that Police Body Cams are Civil Rights Tool
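
Bryce Clayton Newell, Police Visibility: Privacy, Surveillance, and the False Promise of Body-Worn Cameras (2021).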

Body-worn cameras are proliferating with astounding speed in police departments throughout the country. Depending on the conditions under which cameras are used, the spread of this technology has been defended by certain civil liberties organizations as a means of holding police accountable for excessive force used disproportionately against Black, Brown, and queer people. In his new book, Police Visibility, Professor Bryce Clayton Newell musters empirical fieldwork on police deployment of body cameras to slow the rush to implement this potentially pernicious law enforcement surveillance tool.

This book is a careful and in-depth study by a leading scholar of police technology. Specifically, Newell questions whether the prescription (police cameras) will meaningfully treat the illness (structural racism and police violence). As he puts it, “[i]n the absence of broader police reforms, the cameras may offer a Band-Aid … but they do not promise a cure.” (P. 40.) As Newell notes, body-worn cameras “serve the coercive aims of the state” and increase police power because the cameras are evidentiary tools controlled by the police that can be used to surveil and incarcerate more people.

According to Newell, police body cameras may lend police false legitimacy, providing a modicum of visibility without real transparency given that police officers and departments may in many instances limit access to and dissemination of the videos. More broadly, any single instance of police officer accountability may not lead to broader structural reforms. To that end, Newell notes the widespread (though not universal) approval of such cameras by the rank and file police officers he surveyed—one indicator that police cameras may not be the solution civil rights advocates hope for.

All told, body cameras may not be a reform at all, but instead could aggravate our broken and racist carceral system and the surveillance that enables it. (One quibble: borrowing the perspective of those advocating for police cameras, Newell refers to surveillance of civilians as “collateral,” suggesting that the police are the primary targets of the cameras’ lens. Centering the surveillance of civilians as the primary target would have been more accurate and rhetorically powerful.)

In light of these shortcomings, Newell offers a few suggestions for reform. As a background policy norm militating against implementation of police cameras in the first instance, he emphasizes that bystander videos of police conduct are a preferable form of sousveillance against the police because police departments do not serve as gatekeepers of who can and cannot access the videos and under what conditions. This is critically important, though not without drawbacks of its own as a means of police regulation. I’ve argued that such citizen recordings are themselves not without meaningful privacy harms. Safiya Noble has powerfully explained that they may contribute to the commodification of black death through profiteering by social media companies when images of police violence against people of color are viewed online.

If police body cameras are deployed, Newell believes that, to counteract police power over how the cameras are used, departments should not be able to institute body cameras through unregulated procurement policies prior to public deliberation and consent. And to guide that deliberation, Newell offers a few overarching principles to help better ensure that police body cameras are a tool of antipower preventing further state domination: (1) independent oversight (not just for camera policies, but for officer conduct more broadly), (2) a right to access for anyone captured on film, (3) redaction/blurring of all identifying information of both victims and bystanders, and (4) default restrictions on accessing video of people’s private spaces.

These are trenchant suggestions for regulating police body cameras in that they try to maximize the extent to which cameras hold police accountable while minimizing (albeit not eliminating) the extent to which they can be used to invade others’ privacy. However, Newell’s recommendations do less work in preventing the cameras from serving as an evidentiary surveillance tool.

Compelling arguments can be made that attempting to bureaucratize the regulation of surveillance technologies is more cumbersome and less effective than outright banning them (as others have rightly argued in similar contexts such as police use of facial recognition technology). However, Newell’s informed recommendations move the policy conversation in a productive direction. They serve as an important bulwark against the “surveil now, ask questions later” ethos undergirding much of the body camera policies currently in place.

Cite as: Scott Skinner-Thompson, Debunking the Myth that Police Body Cams are Civil Rights Tool, JOTWELL (January 28, 2022) (reviewing Bryce Clayton Newell, Police Visibility: Privacy, Surveillance, and the False Promise of Body-Worn Cameras (2021)), https://cyber.jotwell.com/debunking-the-myth-that-police-body-cams-are-civil-rights-tool/.

How to Regulate Harmful Inferences

Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN.

A decade ago, Charles Duhigg wrote a story for the New York Times that still resonates today, revealing that Target could predict its customers’ pregnancies and delivery dates from changes in their shopping habits. This and similar revelations pose a difficult question: how do we protect vulnerable people from the power of inferences? At the time, I wondered aloud whether we ought to regulate harmful data-driven inferences and how we would do it, which sparked characteristically overheated responses from the libertarian punditry.

A decade on, the ceaseless progress of machine learning (ML) has exacerbated these problems, as advances in the state-of-the-art of prediction make Target’s old algorithm seem like child’s play. ML techniques have become more accessible and more powerful, fueled by advances in algorithms, improvements in hardware, and the collection and distribution of massive datasets chronicling aspects of people’s lives we have never before been able to scrutinize or study. Today, obscure startups can build powerful ML models to predict the behavior and reveal the secrets of millions of people.

This important draft by Alicia Solow-Niederman argues that information privacy law is unequipped to deal with the increasing and sometimes-harmful power of ML-fueled inference. The laws and regulations on the books, with their focus on user control and notice-and-choice, say very little about the harmful inferences of companies like Clearview AI, which notoriously scraped millions of photos from Facebook, LinkedIn, and Venmo, using them as ML training data to build a powerful facial-recognition service it sells exclusively to law enforcement agencies. Unlike Target, which had a contractual relationship with its customers and gathered the data for its algorithm itself, Clearview AI had no connection to the individuals it identified, suggesting that protections cannot lie in laws focused primarily on user consent and control.

The first very useful contribution of this article is its important summary of recent advances in ML, how they raise the possibility of harmful inferences, and how they challenge outdated privacy laws built upon notice-and-choice. This makes Part II of the article an accessible primer on a decade’s worth of ML advances for the non-technical privacy expert.

Solow-Niederman’s most important move, in Part IV of the article, is to ask us to focus on actors beyond the dyad of provider and user. Like Salome Viljoen in her magisterial work on Democratic Data (previously reviewed in these pages), Solow-Niederman deploys geometry. Where Viljoen added the horizontal dimension of people outside the vertical user/service relationship, Solow-Niederman asks us to move beyond the “linear” to the “triangular.” She urges us to look outside the GDPR-style relationship between data subject and data controller, to consider the actions of so-called “information processors.” These are companies like Clearview that amass massive data sets about millions of individuals to train machine learning models to infer the secrets and predict the habits not just of those people but also of others. We cannot protect privacy, Solow-Niederman argues, unless we develop new governance approaches for these actors.

This move — relational and geometric — leads her to focus on actors and relationships that get short shrift in other work. If we worry about the power of inference to harm groups and individuals, we need to scrutinize that which gives power to inference, she argues. Solow-Niederman focuses, for example, on how information processors amass “compute”: the computer-processing infrastructure needed to harness massive data sets. She provocatively suggests that regulators might cast extra scrutiny on mergers and acquisitions that lead companies to increase compute power, citing for inspiration the work of now-FTC-Chair Lina Khan, who has argued for similar shifts in antitrust law.

The triangular view also focuses attention on how companies like Clearview obtain data. Other commentators have been loath to focus on Clearview’s scraping as the source of the problem, because many tend to be wary of aggressive anti-scraping restrictions, such as expansive interpretations of the Computer Fraud and Abuse Act (CFAA). Solow-Niederman suggests, contrary to the conventional wisdom, that the CFAA could have been useful in thwarting Clearview AI, had Facebook detected the massive scraping operation, asserted its Terms of Service, and sued under the CFAA. She even suggests FTC action against companies that purport to prohibit scraping yet fail to detect or stop scrapers.

These are two genuinely novel, even counter-intuitive, prescriptions that flow directly from Solow-Niederman’s triangular intervention. They suggest the power of the approach, and we would be well-advised to see how it might lead us to other prescriptions we might be missing due to our linear mindsets.

To be clear, as I learned a decade ago, protecting people from the power of inference will raise difficult and important questions about the thin line between intellectual exploration and harm production. Inference can be harm, Solow-Niederman suggests, but she acknowledges that inference can also be science. Preventing the former while permitting the latter is a challenging undertaking, and this article defers to later work some of the difficult questions this differentiation will raise. But by focusing attention and energy on the ever-growing power of ML inference, by compellingly exploring how conventional information privacy law and scholarship cannot rise to the challenge of these questions, and by suggesting new means for considering and addressing inferential harm, Solow-Niederman makes an important and overdue contribution.

Cite as: Paul Ohm, How to Regulate Harmful Inferences, JOTWELL (December 22, 2021) (reviewing Alicia Solow-Niederman, Information Privacy and the Inference Economy (Sept. 10, 2021), available at SSRN), https://cyber.jotwell.com/how-to-regulate-harmful-inferences/.

The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K.

Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021).

The tension between the forces of nationalism and globalism has reached its peak with the United Kingdom’s decision to break with the European Union. This dramatic move continues to impact countless economic sectors and, more importantly, the lives of many citizens. Yet all is calm on the data protection front. The U.K. has decided to continue applying the E.U.’s strict GDPR. In this timely and intriguing article, Paul Schwartz strives to explain why this happened, as well as to predict what’s next for data protection and the British Isles.

GDPR is a four-letter word. Its strict rules and heavy fines have changed the world of data protection forever. Ninety-nine articles, one hundred and seventy-three recitals, thousands of pages of commentary, and the many millions of dollars spent preparing for it only tell us part of the story. Now that the U.K. can escape the grasp of this vast and overarching regulatory framework, why hasn’t it “checked out”? Instead, just a few days prior to Brexit, the U.K. adopted a local law that is almost identical to the GDPR. This outcome is especially surprising to me personally, as I have argued that the GDPR substantially encumbers innovation in the age of big data (although it is quite possible I was wrong).

The simple answer to the GDPR’s persistence in the U.K. relates to the business importance of international data transfers from the E.U. For such transfers to continue unfettered, the U.K. must maintain laws that are “adequate.” This is because, post-Brexit, the U.K. is rendered a “third country” in terms of data transfers for all E.U. nations. (P. 128.) “Adequacy,” according to current E.U. jurisprudence, requires a legal regime of “essential equivalence” to that of the E.U. Without such “equivalent” laws, data transfers to the U.K. would be forbidden (or at least rendered very complicated) and economic loss in multiple industries would follow.

But this reason is unsatisfactory. The decision to maintain the GDPR seems to run counter to the explicit political agenda of the U.K.’s ruling Conservative party, which constantly promised to “take back control.” Schwartz even quotes U.K. Prime Minister Boris Johnson as stating (and possibly making an intentional reference to this journal): “We have taken back control of laws and our destiny. We have taken back control of every jot and tittle of our regulation” (emphasis added – T.Z.). (P. 145.) Why spare the many jots making up the GDPR? After all, the U.K. might be able to achieve adequacy without carbon copying the GDPR; several countries currently holding an adequacy status have laws that substantially vary from the E.U.’s harsh regime.

To answer this intriguing legal and political question, Paul Schwartz develops a sophisticated set of models. One of them is the “Brussels Effect” paradigm that Anu Bradford maps out in her recent book: nations worldwide are swayed, both de jure and de facto, to accept the E.U.’s influence, which would explain why the U.K. will hold on to the GDPR. Beyond the Brussels Effect, Schwartz explains that the GDPR might have persisted in the U.K. because of (1) a change in the U.K.’s preferences toward accepting the E.U.’s data protection norms as reflected in the GDPR, manifested either in U.K. public opinion or in the preferences of the legal system (which reflects the preferences of the elite), a model Schwartz builds on the work of his colleague Bob Cooter on individual preferences; (2) data protection preferences in the U.K. that were always aligned with those of the E.U.; (3) a change in the U.K.’s values (rather than its preferences) to align with those of the E.U. through a process of persuasion or acculturation (P. 117); or (4) the easy accessibility of a legal transplant (the E.U. data protection regime), which led the U.K. to opt for this simple and cheap option. In the article’s final segment, Schwartz uses these five models to explore whether the U.K. will remain aligned with the E.U.’s data protection regime. The answer will depend on which of the five models proves most dominant in the years to come.

Beyond Schwartz’s models, the U.K.’s decision regarding the GDPR is unique in that it was somewhat passive; or, as Schwartz notes, a decision not to reject, or “un-transfer,” E.U. data protection law. It is a decision to maintain stability and sidestep the high costs associated with changing the law. (P. 137.) In other words, the U.K. adopted the GDPR when it was part of the E.U. and is now “stuck” with this “sticky” default. Switching a default is far more difficult than accepting an external legal regime. This, in fact, was a theme Schwartz explored almost 20 years ago when considering the privacy rules of the GLB Act. Indeed, this situation is so unique that unless another member state breaks from the E.U., we will probably not witness a similar dynamic involving such migration of data protection norms. As opposed to the “Brussels Effect,” which was influenced by the earlier “California Effect,” the situation at hand might feature a “Hotel California” Effect: even though the U.K. wants to check out of this aggressive regulatory framework, it is finding that it “can never leave,” as its bureaucracy has grown accustomed to it.

Therefore, the GDPR-Brexit dynamic is a unique example of the “Brussels Effect.” Yet as Schwartz has shown in another important article discussing data protection and the “Brussels Effect,” there are many unique examples. In that other work, Schwartz explained that the U.S.’s adoption of the (now defunct) “Privacy Shield” and the E.U.-Japan mutual adequacy agreement did not fit a “cookie cutter” paradigm of E.U. influence. All these examples demonstrate that while Bradford’s description of the “Brussels Effect” is appealing (might I say, brilliant) in its simplicity and elegance, reality is often more complex. Thus, the Brussels Effect is merely one of several explanations for the GDPR’s growing influence.

Schwartz’s taxonomy will prove helpful in understanding what happens next in the U.K. Just recently (on August 26, 2021), the U.K. announced its intent to promote data adequacy partnerships with several nations, including the United States. Specifically, regarding the U.S., the relevant press release noted the U.K.’s disappointment with the Schrems II ruling and the importance of facilitating seamless data transfers to the U.S. It further stated that the U.K. is free to enable such transfers “now it has left the E.U.”

Should these plans move forward (they are currently in their early stages), they would create substantial (though possibly workable) challenges for the U.K.’s “adequacy” status. Such developments possibly indicate that the U.K. did not move to adopt E.U. privacy norms, or even cave to the economic pressures of commercial entities. Rather, it was the ease of remaining within a familiar scheme that led the U.K. to stick with the GDPR, and not check out of this notorious hotel. Yet perhaps this final assertion is too superficial. Time will tell whether Schwartz’s nuanced analysis of changing preferences, Bradford’s hypothesis regarding global influence, or other models best predict and explain what comes next for the U.K. and the GDPR.

Cite as: Tal Zarsky, The Hotel California Effect: The Future of E.U. Data Protection Influence in the U.K., JOTWELL (November 23, 2021) (reviewing Paul M. Schwartz, The Data Privacy Law of Brexit: Theories of Preference Change, 22(2) Theoretical Inquiries in Law 111 (2021)), https://cyber.jotwell.com/the-hotel-california-effect-the-future-of-e-u-data-protection-influence-in-the-u-k/.

The Law of AI

Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021).

The question of whether new technology requires new law is central to the field of law and technology. From Frank Easterbrook’s “law of the horse” to Ryan Calo’s law of robotics, scholars have debated the what, why, and how of technological, social, and legal co-development and construction. Given how rarely lawmakers create new legal regimes around a particular technology, the EU’s proposed “AI Act” (Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts) should put tech-law scholars on high alert. Leaked early this spring and officially released in April 2021, the AI Act aims to establish a comprehensive European approach to AI risk-management and compliance, including bans on some AI systems.

In Demystifying the Draft EU Artificial Intelligence Act, Michael Veale and Frederik Zuiderveen Borgesius provide a helpful and evenhanded entrée into this “world-first attempt at horizontal regulation of AI systems.” On the one hand, they admire the Act’s “sensible” aspects, including its risk-based approach, prohibitions of certain systems, and attempts at establishing public transparency. On the other, they note its “severe weaknesses,” including its reliance on “1980s product safety regulation” and “standardisation bodies with no fundamental rights experience.” For U.S. (and EU!) readers looking for a thoughtful overview and contextualization of a complex and somewhat inscrutable new legal system, this Article brings much to the table at a relatively concise length.

As an initial matter, it’s important to understand that the Draft AI Act is just the beginning of the European legislative process. Much can still change. And the Act must be understood in legal context: it is entwined with other EU Regulations (such as the GDPR), Directives (such as the Law Enforcement Directive and Unfair Commercial Practices Directive), and AI-specific initiatives in progress (such as the draft Data Governance Act and forthcoming product liability revisions).

The AI Act itself focuses on risk management and compliance, looking at threats to physical safety and fundamental rights. At its core, the Act is an attempt to reduce trade barriers while also addressing fundamental rights concerns. According to Veale and Borgesius, by primarily relying on product safety regulations and bodies, the AI Act gets the balance wrong.

Not all is bad, however. Veale and Borgesius appreciate the AI Act’s division of AI practices into four risk levels: unacceptable (Title II), high (Title III), limited (Title IV), and minimal (Title IX). AI systems with unacceptable risks trigger full or partial prohibitions, while high-risk systems are regulated based on the EU approach to product safety (the New Legislative Framework, or NLF). But Veale and Borgesius note that, on closer examination, neither the prohibitions nor the regulations are as robust as they might appear.

For example, take the ban on biometric systems, which at first appears to be precisely what some scholars have called for. The Act bans most “real-time” and “remote” law enforcement uses of biometric systems in publicly accessible spaces (Art. 5(1)(d)). Notably, systems that analyze footage after-the-fact are not included. Nor is live biometric identification online, nor is the use of remote biometric identification for non-law enforcement purposes, which falls under the GDPR. And Member States may create yet more exceptions, by authorizing certain law enforcement uses of real-time biometrics, so long as they include certain safeguards. Veale and Borgesius rightly point out that the ample exceptions to the Act’s limited biometrics ban mean that the infrastructure for biometrics systems will still be installed, leading some to claim that the Act “legitimises rather than prohibits population-scale surveillance.” Moreover, nothing in the Act prevents EU companies from marketing such biometrics systems to oppressive regimes abroad.

The most complex and unfamiliar aspect of the Act is its regulation of high-risk systems. There, according to Veale and Borgesius, the Act collapses the protection of fundamental rights into the EU’s approach to product safety, to its detriment. The NLF is used to regulate toys, elevators, and personal protective equipment, and is completely unfamiliar to most information law scholars (we will have to learn fast!). Under the NLF, manufacturers perform a “conformity assessment” and effectively self-certify that they are in compliance with “essential requirements” under the law. Here, those requirements are listed in Chapter 2 of the Act, and include a quality management system, a risk management system, and data quality criteria, among other things. Manufacturers can mark conforming products with “CE,” which guarantees freedom of movement within the EU.

By contrast, Veale and Borgesius point to the path not taken: EU pharmaceutical regulation requires pre-marketing assessment and licensing by a public authority. Here, the public sector has a much more limited role to play. There are “almost no situations” in which such industry AI self-assessments will require approval by an independent technical organization, and even then, such organizations are usually private sector certification firms accredited by Member States.

Post-marketing, the AI Act again reflects the NLF by giving “market surveillance authorities” (MSAs)—typically existing regulatory agencies—the power to obtain information, apply penalties, withdraw products, etc. While AI providers must inform MSAs if their own post-market monitoring reveals risks, Member States have discretion as to which authorities will be responsible for monitoring and enforcing against standalone high-risk AI systems. In practice, Veale and Borgesius observe that this will put technocratic government agencies ordinarily concerned with product regulation in charge of a range of tasks well outside their usual purview: “to look for synthetic content on social networks, assess manipulative digital practices of any professional user, and scrutinise the functioning of the digital welfare state…[t]his is far from product regulation.”

Moreover, Veale and Borgesius point out that private standards-setting organizations will determine much of the content of the law in practice. The European Commission will likely mandate that several European Standardisation Organizations develop harmonized standards relating to the Act that companies can follow to be in compliance with it. For internet governance buffs, the problems with deciding on fundamental values through privatized processes are familiar, even old hat. But as Veale and Borgesius observe, the Act’s “incorporation of broad fundamental rights topics into the NLF [regime]… spotlight[s] this tension of legitimacy” in the EU products safety context.

This Article contains many other helpful sections, including a summary of the Act’s transparency provisions and approach to human oversight, and a discussion of the potential confusion around, and problems with, the scope of the Act’s harmonization efforts. I do wish the authors had spent more time on the lack of rights, protections, and complaint mechanisms for what they call “AI-systems-subjects”—the individuals and communities impacted by the use of AI. As Veale and Borgesius observe, neither the standards-setting organizations nor the relevant government bodies are required to take input or complaints from impacted persons. They characterize this primarily as bad regulatory design, noting that “the Draft AI Act lacks a bottom-up force to hold regulators to account for weak enforcement.” To those of us steeped in the GDPR’s emphasis on individual rights, the absence of individual rights here is more shocking. I would be curious to learn whether this choice (or oversight) is a real problem, or whether other EU laws nonetheless enable affected individuals to participate in the EU governance of AI.

Overall, this article is a much-needed guide to an immensely significant regulatory effort. For scholars, it raises complex questions about not just when new technology leads to new law, but how the choice of legal regime (here, product safety) establishes path dependencies that construct a technology in particular ways. Veale and Borgesius are to be applauded for their noted expertise in this space, and for doing the work to make this regime more accessible to all.

Cite as: Margot Kaminski, The Law of AI, JOTWELL (October 25, 2021) (reviewing Michael Veale and Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22(4) Computer L. Rev. Int'l 97-112 (2021)), https://cyber.jotwell.com/the-law-of-ai/.