The Journal of Things We Like (Lots)

Bringing Society Back In: How Tech Remakes Social Relations

Marion Fourcade & Kieran Healy, The Ordinal Society (2024).

In a congressional hearing over seven years ago, Senator Orrin Hatch asked CEO Mark Zuckerberg a simple question: how did his company, then known as Facebook, make money if users never paid them a dime? Zuckerberg’s brief, smirking answer immediately went viral: “Senator, we run ads.” The exchange seemed to encapsulate both the generational divide between the 84-year-old Hatch and the 33-year-old Zuckerberg and their fundamentally different understandings of how capitalism worked on the ground. That Hatch needed something as basic as the Facebook business model spelled out for him suggested, to some, that he was out of touch.

But Zuckerberg’s deceptively straightforward reply also warrants unpacking, because—as is by now obvious—Meta does far more than simply sell advertising. In The Ordinal Society, sociologists Marion Fourcade and Kieran Healy argue that firms like Meta have remade society and sociality itself. By transforming social activity into a source of profit, firms have gained the ability to control and manipulate interactions and to rank and sort individuals in increasingly precise ways. An ambitious account in the vein of Julie Cohen’s Between Truth and Power, The Ordinal Society offers a crucial rethinking of how technology has reordered society, focusing on how the data economy enables emerging systems of ranking and classification that not only amplify underlying social stratification, but also produce new and unpredictable forms of inequality. Existing legal approaches fail to address the harms wrought by this reordering.

What can legal scholars gain by reading about the social and political effects of technology? For one thing, looking at informational capitalism through Fourcade and Healy’s lens blows the dust off some of the oldest debates in law and technology. For example, consider the lengthy privacy policies offered by technology platforms that set forth the terms of the bargain between users and the firm. Legal scholars have long argued that such policies fail to actually inform consumers or to secure meaningful user consent to the disclosure of data. But as Fourcade and Healy point out, users don’t simply share existing information when they interact with digital services: they also generate new data. Interaction between users and platforms creates what Shoshana Zuboff describes as the “behavioral surplus,” a new source of inferences and profit.

As scholars of law and tech well know, the behavioral surplus has allowed tech platforms to wreak havoc on privacy and act in anticompetitive ways. But Fourcade and Healy point out that engagement with digital platforms has also transformed the way we relate to each other. The centrality of tech to everyday interaction makes people willing to “surrender” their data and alters our expectations of confidentiality, privacy, and autonomy. These shifts in social relations make it a challenge to map old legal presumptions onto the status quo.

The book contains at least two crucial insights relevant to legal scholars who study the impact of automation and algorithms. First, Fourcade and Healy argue that the use of modern methods of machine learning does not just replicate existing categories such as race, sex, or gender, but also creates new ones: “they identify new classes of people, reformat identities, help control social action, and produce new criteria for truth-telling and ethical judgment.” In Fourcade and Healy’s account, technologies of data collection and automated decision systems aim at making ever more granular, predictive, and individualized assessments of users, based on an ever-increasing set of data points. This system of assessment allows the classification, sorting, and—ultimately—ranking of individuals in ways that shape our access to life chances: credit, housing, work, punishment, benefits, and so forth.

This recognition of the fine-grained ranking and sorting made achievable through technology complicates how legal scholars think about algorithms. As Fourcade and Healy acknowledge, lawyers, computer scientists, and social scientists have argued that automated and algorithmic decisionmaking might “contribute to the reproduction of categorical inequalities around gender and race.” As legal scholar Aziz Huq has argued, public law offers partial responses to this problem through doctrines of equal protection and due process. The former bars intentional discrimination and racial classification; the latter protects interests in the accuracy and appealability of decisions. But as Fourcade and Healy point out, “[d]eep-learning models might classify based on weighted combinations and transformations of hundreds or thousands of features, leaving users with little idea which conventionally identifiable features are really important.” Moreover, even as categorical inequality is reproduced on a grand scale, that reproduction is achieved through increasingly individualized and fine-grained determinations rather than through mechanisms recognizable under doctrines that bar classification on the basis of immutable characteristics or other impermissible criteria.

Fourcade and Healy describe the implications of this increasingly individualized mode of seeing and assessing people for citizenship and rights. In their telling, much of the 20th-century liberal order was oriented toward the recognition of individuals as “nominally equal.” The legal architecture for that order rested on the insistence that people be treated as individuals, not simply as members of groups or classes. The rise of individualization and personalized scoring might therefore appear more consistent with liberal commitments to individualism than the kinds of blunt categorizations that tend to run afoul of antidiscrimination law.

But as Fourcade and Healy point out, the rise of relentless classification and individualized scoring also erodes the very categories of social and political inclusion and exclusion that our legal system was made for. For example, they contend that pervasive ranking and classification undermine solidarity within and among the kinds of groups that made up the liberal “mass politics” of the 20th century. Instead, individuals increasingly are classified on the basis of data points and decisions that represent opportunities to be gained and gamed. To the extent antidiscrimination law responds to a distinctively 20th-century harm—the harm of being lumped together with others, rather than treated as an individual—it is an imperfect fit with the kinds of individualized determinations that Fourcade and Healy are concerned with. Yet, as Fourcade and Healy note, the ranking and classification they study reproduce some of the precise forms of social inequality that supposedly individualized, merit-based decisions ought to avoid.

One of the book’s most significant insights is about the effects of algorithms on what Fourcade and Healy call the “whole social process” itself. Fourcade and Healy tie the emergence of platforms and the monitoring of online activity to the routine surveillance of offline activity, as companies like Google and Amazon launched payment systems, grocery stores, medical services, and other kinds of business. Pervasive surveillance of this kind enables platforms to act in increasingly self-serving ways, by eroding competition and engaging in price discrimination. The appetite for data that Fourcade and Healy document also translates to other kinds of firms and contexts: education, employment, and government, to name a few.

Fourcade and Healy thus describe a wholesale shift in how people, relationships, and interactions are measured and understood. This change means that legal prescriptions geared toward particular firms (e.g., Google) or particular harms (e.g., bias in employment) may end up less potent than legal scholars anticipate, because they fail to address the larger transformations that society has undergone. Fourcade and Healy may not offer concrete legal tools to address the potential harms that arise from an ordinal society. But they develop and advance a generative new critical vocabulary for thinking about the economic, political, and social changes underway.

Cite as: Hannah Bloch-Wehba, Bringing Society Back In: How Tech Remakes Social Relations, JOTWELL (March 10, 2026) (reviewing Marion Fourcade & Kieran Healy, The Ordinal Society (2024)), https://cyber.jotwell.com/bringing-society-back-in-how-tech-remakes-social-relations/.

Crossroads: Privacy Law and Copyright Law in the Age of Artificial Intelligence

Alicia Solow-Niederman, AI and Doctrinal Collapse, 78 Stan. L. Rev. ___ (forthcoming 2026), available at SSRN (Aug. 08, 2025).

Government actors across the globe have responded to the rapid uptake of artificial intelligence by adopting or proposing various forms of legislation. For instance, on September 29, 2025, California adopted the Transparency in Frontier Artificial Intelligence Act, which imposes transparency and safety obligations on artificial intelligence companies in the state. Other states, such as Colorado, have also responded by enacting laws addressing artificial intelligence. At the federal level, the proposed Generative AI Copyright Disclosure Act would impose disclosure requirements on artificial intelligence developers that use copyrighted works to train their systems. In 2024 the European Parliament adopted the Artificial Intelligence Act—a comprehensive framework for the regulation of artificial intelligence in European Union countries. Despite domestic and international legislative responses, the rapid rise of artificial intelligence continues to pose significant challenges for several established areas of law, including privacy law and intellectual property law.

In her article AI and Doctrinal Collapse, Professor Alicia Solow-Niederman offers an impressive contribution to both the privacy law and intellectual property law fields by exposing the various pressures placed on these two legal regimes by artificial intelligence. Solow-Niederman contends that artificial intelligence has blurred the boundaries between privacy law and copyright law—a phenomenon she aptly labels as “inter-regime doctrinal collapse.” She convincingly posits that without sufficient intervention, corporate actors will continue to implement “exploitation tactics” to profit from this doctrinal collapse and further undermine the rule of law.

Solow-Niederman observes that because artificial intelligence is often dependent on the use of data and both privacy law and copyright law regulate data, “there is overlapping coverage of the same regulatory object.” However, both legal regimes have distinct logics. She descriptively notes that while copyright law emphasizes a property regime “and the closely related issue of incentives,” privacy law is based primarily on the concepts of control and autonomy. American privacy law focuses on ensuring that individuals can control their data as opposed to granting property rights in data. America’s largely self-regulatory notice-and-choice model, in which companies provide notice of their privacy practices and individuals then choose whether to consent to those practices, is in keeping with this approach.

Solow-Niederman contends that if “the discrete rules and logics of” privacy law and copyright law do “not remain sufficiently distinct” or “are not legible, then the two domains [will] collapse into one another.” She goes on to identify inherent weaknesses in data privacy law that blur the boundaries between the two legal regimes. She argues that this doctrinal collapse, enabled by artificial intelligence, facilitates corporate exploitation and opportunism. One example of doctrinal collapse that she identifies occurs when corporations “make claims about the public nature of data to justify data acquisition . . . [but] also make subsequent or simultaneous claims that the data is proprietary.” The fair use doctrine may protect corporate use of public data in the development of artificial intelligence models. Solow-Niederman goes on to posit that in the privacy law context, the “same ‘publicly available’ claim removes the material from the reach of information privacy law” because individuals generally do not have a significant privacy interest in publicly available data. This allows companies to exploit ambiguities in the definition of public data, and it enables “companies to switch between legal regimes in ways that further destabilize” both privacy law’s and copyright law’s “doctrinal integrity and normative coherence.” Artificial intelligence companies may also contend that users’ privacy justifies or supports their arguments to avoid discovery and disclosure of artificial intelligence-related data in intellectual property litigation.

She then turns her attention to identifying two distinct corporate tactics—“buy” and “ask”—that she contends are problematic. Under the “buy” approach, companies purchase data via licensing agreements in business-to-business transactions involving an artificial intelligence developer and “an aggregator of content.” Solow-Niederman convincingly argues that a “buy takes advantage of limitations and weaknesses in both privacy law and IP law to reduce overall regulatory costs.” For instance, a licensing deal between two corporations permits the transfer of data after individuals have consented to data disclosure via the notice-and-choice model, even if the subsequent use of the data in the artificial intelligence context violates individuals’ privacy expectations. The individual is also excluded from this business-to-business transaction.

Under the “ask” approach, companies weaponize the notice-and-choice model. For instance, an artificial intelligence developer may directly obtain consent from individuals to use their data to train artificial intelligence models via their privacy policies and terms of service. With respect to intellectual property, companies can also “ask” individuals to grant them “a form of copyright license” to use the data via their terms of service. Thus, the “ask” approach can “both limit future exposure to copyright liability and mitigate copyright adjacent social costs,” while allowing corporate entities to acquire the data they need to train their artificial intelligence systems. Additionally, Solow-Niederman contends that the “ask” approach is available only to the few corporate actors that possess a sufficiently large database of individual users.

This well-written article concludes with recommendations for mitigating concerns associated with doctrinal collapse. Solow-Niederman argues that legal institutions must first acknowledge the presence of doctrinal collapse. Doing so would enable “advocates to pinpoint which regulatory objects are likely to be focal points of contestation.” She also proposes a “conflict of laws inspired” solution. Under this approach, courts faced with a dispute involving competing copyright and privacy law interests could adopt “a rebuttable ‘anti-switching presumption.’” Under this presumption, a party would be prevented from asserting “mutually incompatible claims at different points in a lawsuit, absent a sufficiently compelling reason to defeat the presumption.” She also recommends regulatory reforms in the privacy law regime to close gaps that facilitate corporate exploitation.

Solow-Niederman’s insightful description of doctrinal collapse in the copyright law and privacy law regimes should be of particular interest to courts, legislators, and scholars in the law-and-technology, intellectual property, and privacy law fields.

Cite as: Stacy-Ann Elvy, Crossroads: Privacy Law and Copyright Law in the Age of Artificial Intelligence, JOTWELL (February 9, 2026) (reviewing Alicia Solow-Niederman, AI and Doctrinal Collapse, 78 Stan. L. Rev. ___ (forthcoming 2026), available at SSRN (Aug. 08, 2025)), https://cyber.jotwell.com/crossroads-privacy-law-and-copyright-law-in-the-age-of-artificial-intelligence/.

Tech Elites Don’t Just Evade the State, They Change It

Julie Cohen, Oligarchy, State, and Cryptopia, 94 Fordham L. Rev. 563 (2025).

Julie Cohen’s Oligarchy, State, and Cryptopia is a bracing account of how a handful of technology companies can move beyond regulatory arbitrage to something more ambitious: remaking the rules by which they are governed. The article’s core claim is that some groups of tech elites do more than evade oversight: they reconfigure the administrative state to relocate meaningful rulemaking into private hands.

Cohen’s analysis clarifies a particular form of power and why several familiar legal toolkits, such as antitrust, fail to address it. Of the many explanations that emerge from Cohen’s comprehensive framework, three are worth noting. First, today’s tech elites fit the description of oligarchs not because they are rich, but because they can deploy their wealth to build infrastructures that let them produce private rules (including both self-regulation and private governance) insulated from democratic accountability. Second, programs of AI governance should attend to political economy, because the firms that build and operate the infrastructure also shape the State that might regulate them. Third, and as an extension of the second point, privacy law’s traditional focus on individual consent misses the point: the risk privacy law should be addressing is the structural concentration of informational power.

From evasion to reconfiguration

Drawing on Jeffrey Winters, Cohen treats oligarchy as politics in which extreme personal wealth is deployed to obtain systemic advantage. Oligarchic power can coexist with any constitutional form and shifts along a spectrum of different modes, depending on how it interacts with institutions. Oligarchy, State, and Cryptopia shows that many leading tech executives function as oligarchs through the infrastructure they own and the governance they hardwire into it.

Familiar explanations of noncompliance, therefore, understate the phenomenon. Cohen documents a pattern of defiance of public governance and law that runs from “move fast and break things” to orchestrated reg-neg campaigns that aim to reshape the scope of oversight. Behaviors that seem unrelated, such as blitzscaling and participatory governance, make sense within a framework that shows how they combine to limit accountability. This configuration also helps explain the occasionally fraught relationship between big tech and States: firms’ hybrid position among the modes of oligarchy shows why they are unusually resistant to traditional enforcement.

It is worth noting that this configuration is not simply a return to the Gilded Age. Nineteenth-century industrialists and financiers controlled the economy; the power of today’s tech elite is more multidimensional because everyone else depends on the oligarchs’ infrastructure to speak, transact, and sometimes govern. The upshot is that tech executives embed governance structures (and occasionally bake dependencies) into privately provisioned infrastructure, from social media platforms to satellite systems, as well as into capital arrangements that bypass traditional forms of accountability.

AI governance needs political economy

Because, as Cohen shows, tech elites govern people, markets, and occasionally norms themselves through their infrastructure, treating AI regulation as a narrow technical problem is a mistake. Cohen’s analysis uncovers that the project marketed as deregulation is not about the decentralization of power or decision-making: leading actors lean on the State where it serves them, through contracts, subsidies, and favorable institutional redesign, and they seek to reconfigure it when it does not, most visibly in political efforts pitched as efficiency-enhancing. Cohen explains that the absence of regulation is an invitation for private power to consolidate rule. AI governance, in that context, is largely a question of who controls the levers of how the systems are deployed, who wins and who loses with the exercise of that control, and how that control interacts with political power.

The AI research ecosystem illustrates these points. Compute-intensive science relies on infrastructure and monetary resources that few firms can supply, which shapes the direction of scientific inquiry. Talent flows to the private sector, publication is conditioned by trade secrecy, and debate is shaped by private-sector priorities.

Cohen shows, in sum, why AI governance must engage with the capital structures that entrench founder control, the infrastructure that governs privately, and the ideology that normalizes the decisions made through both. Technical checklists (model evaluations, watermarking, predetermined risk tiers) are popular with regulators, but they cannot substitute for confronting the move that puts essential informational infrastructures in private hands. Longtermism supplies a moral endorsement for the behavior that Oligarchy, State, and Cryptopia explains: safeguard long-term aggregate utility by consolidating control today.

Privacy law misses the structural concentration of informational power

The article also makes visible what privacy law often misses: risks in the information economy do not simply come from data extraction, but from the structural concentration of informational power. Regulatory regimes that center on each individual cannot counter a system whose leverage point is upstream; the law must operate at the same scale as the problem.

Privacy law has long tried to regulate the tech industry through individual consent and control. Cohen’s framework explains why this effort is misaligned with the problem. First, power is exercised at the collective level; it shapes economic and social conditions for populations who cannot reasonably leave its reach. That form of power is poorly addressed by tools designed for discrete harms to individual privacy. Second, when informational infrastructures double as governance mechanisms (e.g., by controlling access to resources or prioritizing speech) then accountability must operate at that level too. Cohen’s diagnosis is trenchant: “there is no particular reason to think” that a toolkit built for ordinary corporate power can remedy tech oligarchy due to its combination of infrastructure control, wealth, and ideological certainty.

This article matters practically

Cohen’s article, finally, is useful. By understanding tech elites as oligarchs (hybrids who exercise personal control and selectively embrace institutionalism in ways that cement their authority), it offers a framework for thinking about a wide set of legal, social, and economic problems. This distinctive form of power relocates governance from public law into private infrastructure; AI intensifies the shift, and individualistic data rights are too small for the job. Oligarchy, State, and Cryptopia gives readers a vocabulary and a map: what oligarchy is, how those who control big tech depart from earlier elites, which institutional levers matter, and why debates that orbit “content moderation,” “individual control over data,” or “AI safety” alone will keep missing the center of gravity.

Cite as: Ignacio Cofone, Tech Elites Don’t Just Evade the State, They Change It, JOTWELL (January 9, 2026) (reviewing Julie Cohen, Oligarchy, State, and Cryptopia, 94 Fordham L. Rev. 563 (2025)), https://cyber.jotwell.com/tech-elites-dont-just-evade-the-state-they-change-it/.

The Edge of Tomorrow

Tejas N. Narechania & Scott Shenker, How to Save the Internet, __ Berkeley Tech. L.J. __ (forthcoming), available at SSRN (Mar. 18, 2025).

Every time I teach Internet Law, I start by lying to my students about how the Internet works. I tell them the finely crafted story of how routing, packet-switching, and layering combine to produce a profoundly modular, decentralized, and standardized worldwide network. The only problem is that the Internet doesn’t work that way anymore, and hasn’t for years. Companies like Akamai, Cloudflare, and Amazon operate such massive networking infrastructure that they have warped Internet spacetime around them. The services they offer, and on which much of the Internet now depends, are integrated, centralized, and proprietary—the very opposite of what I tell my students.

Tejas N. Narechania and Scott Shenker’s How to Save the Internet brings the stories we tell about the Internet back into line with the Internet as it actually is. Narechania is a law professor and Shenker a computer scientist. Their article is a seamless fusion of their expertise—and a cogent guide to the Internet’s new normal and what it means for telecommunications policy and law.

How to Save the Internet begins with an overview of the traditional law-school description of the Internet, which is a model of clarity and economy. (I’m adding the article to the teacher’s manual for my Internet-law casebook as a highly recommended primer.) The Internet knits together networks around the world by layering another virtual network on top of them; the Internet Protocol (or IP) standard that defines this global network relies on these smaller networks to transmit individual packets of data, so that any given message generally passes through several intermediate networks before reaching its final destination. (A toy sketch of this hop-by-hop forwarding appears after the list below.) Narechania and Shenker efficiently review both the technical fundamentals and the business terms on which Internet Service Providers (or ISPs) connect their networks. From this technical overview, they extract three core principles embodied by the traditional Internet design:

  • Neutrality meant that individual ISPs could not effectively discriminate among the traffic passing through their networks: not based on its content, not based on the identity of its sender, and not based on the application or device sending it. “Network neutrality” is the decades-long attempt to turn this technical neutrality principle into a binding legal obligation.
  • Interconnection meant that the Internet is a true network of networks, owned and operated by different entities, rather than a centralized, monolithic, monopolistic network, as the U.S. telephone network under AT&T largely was. The IP standard provided the technical foundation for interconnection; economically, it was based on a system in which smaller ISPs typically paid larger ones to connect, while ISPs of approximately the same size carried traffic to and from each other for free.
  • Generality meant that the Internet is open to any and all applications: email, file downloads, video calls, social media, video games, streaming audio, and more. Unlike an older network, which supported exactly those services that its provider offered (e.g., Comcast offers cable television, telephone, and home-security services as part of its non-Internet packages), the Internet is open to anything offered by anyone. The term “permissionless innovation” is sometimes used to describe this principle, but “generality” is better because it captures why such innovation matters.
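As promised above, here is the hop-by-hop forwarding in miniature: a purely illustrative Python sketch. The network names, topology, and routing tables are invented for illustration; real IP routing is vastly more complicated.

    # Toy model of hop-by-hop packet forwarding across interconnected networks.
    # Each network knows only which neighbor to hand a packet to next; no single
    # network sees or controls the end-to-end path. All names are invented.
    ROUTING_TABLES = {
        "campus-net":   {"dest-host": "regional-isp"},
        "regional-isp": {"dest-host": "backbone"},
        "backbone":     {"dest-host": "dest-isp"},
        "dest-isp":     {"dest-host": "deliver"},  # delivers locally
    }

    def forward(destination: str, start: str) -> list[str]:
        """Trace the sequence of networks a packet traverses."""
        path, current = [start], start
        while ROUTING_TABLES[current][destination] != "deliver":
            current = ROUTING_TABLES[current][destination]
            path.append(current)
        return path

    print(forward("dest-host", "campus-net"))
    # ['campus-net', 'regional-isp', 'backbone', 'dest-isp']

The point of the toy is the one the authors stress: a message crosses several independently owned networks, none of which controls the whole path.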

This is where the traditional story stops. Plenty of legal debates can take place within this framework, including large swathes of the network-neutrality debates. But equally often, these features are taken for granted in legal circles. Scholars and students simply assume that a new application can be an idea today and a startup tomorrow—and show up in police blotters and courtrooms the day after. That’s how the Internet works, after all.

Except that, increasingly, it isn’t. As Narechania and Shenker explain, while the Internet overall had a decentralized peer-to-peer design, many individual applications were built with a centralized client-server architecture. This approach can offer better performance and better security than a peer-to-peer design. If Spotify needs to serve more users, it can add more servers; if those servers are vulnerable to intrusions and attacks, it can hire more security engineers to secure them. All of this might seem like a Spotify problem, not an Internet problem, except that Spotify and other large application providers have increasingly been addressing their performance and security concerns in the network itself, rather than purely on their own systems.

The first big change is the rise of content delivery networks (or CDNs) that cache content on computers geographically closer to the users who need it. To take their example, the NBA could transmit game clips to users across the U.S. from its headquarters in New York. But that would mean the same video clip might need to travel from network to network across the country thousands of times. It would be much more efficient to send the clip once to a server in Los Angeles and serve it to Los Angeles-area users from there (and so on for many other local regions). And thus, CDNs provide caching services: clusters of servers located near users, which are connected by the CDN’s own private network.
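The efficiency gain is easy to see in a toy model. The sketch below (in Python; the city names, clip ID, and one-origin/two-edges setup are all invented) serves each request from a nearby edge cache and touches the distant origin server only on the first miss in each region.

    # Toy CDN: each edge server caches a clip after one fetch from the origin,
    # so the clip crosses the country once per region, not once per viewer.
    origin_fetches = 0

    def fetch_from_origin(clip_id: str) -> str:
        global origin_fetches
        origin_fetches += 1                      # the expensive long-haul transfer
        return f"video-bytes:{clip_id}"

    class EdgeServer:
        def __init__(self) -> None:
            self.cache: dict[str, str] = {}

        def serve(self, clip_id: str) -> str:
            if clip_id not in self.cache:        # first request in this region
                self.cache[clip_id] = fetch_from_origin(clip_id)
            return self.cache[clip_id]           # later requests stay local

    edges = {"LA": EdgeServer(), "NYC": EdgeServer()}
    for viewer_city in ["LA"] * 1000 + ["NYC"] * 1000:
        edges[viewer_city].serve("nba-highlight")

    print(origin_fetches)  # 2: one long-haul fetch per region for 2,000 viewers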

Narechania and Shenker use the term “enhanced service provider” (or ESP, a nice play on ISP) to describe what CDNs have become. In addition to their caching services, they provide significant security benefits. (In particular, it is much harder to launch a denial-of-service attack on a CDN.) Some ESPs are vertically integrated; Google runs an immense private network to support YouTube and its other user-facing sites. Others are public-facing; Akamai provides large-scale services for customers who don’t want to build their own secure CDNs.

ESPs, however, call into question the guiding design principles of the Internet:

  • ESPs are emphatically not neutral. It’s not just that they can pick their customers (often on the basis of willingness to pay, but sometimes on speech-related grounds). More fundamentally, their networks are designed to support different kinds of applications differently, with specific optimizations for streaming video, gaming, or other high-performance applications.
  • ESPs are not generally interconnected with each other. They connect out to the public Internet to deliver content, to be sure, but each of them has its own entirely private network reserved for its own use.
  • ESPs are not general. They offer discrete, integrated services. While they’re built on top of a general-purpose resource—computational power—they don’t sell it on an unbundled basis.

The result is an Internet that is increasingly concentrated among a few immense ESPs. The interconnected public portion of the Internet—what we call “the Internet” on the first day of class—carries comparatively less traffic, has less resiliency, and is more economically marginal. Narechania and Shenker fear that ESPs will impede innovation, both in developing new applications and in improving the Internet itself. It’s a familiar tune of oligopoly and stagnation, played in a surprising new key.

What to do about it? Narechania and Shenker propose creating technical standards for ESPs’ services, and particularly an ambitious “InterEdge” design for modular networking services provided by the ESPs’ server clusters. Regulators could then require ESPs to interconnect (along the lines of the interconnection mandate in the 1996 Telecommunications Act) and to remain neutral as to content and speaker. The result, they argue, would be to catalyze a new generation of innovative applications, just as the Internet itself did back in the day.

I regret to say that I found How to Save the Internet’s recommendations for how to save the Internet less compelling than its diagnosis of the problem. The InterEdge is an appealing vision in some ways, but for now, it is a concept of a plan. The authors’ prior technical work describing it is quite interesting, but much more of a slog for the reader who does not already have a firm command of networking architecture, and even there, the InterEdge remains somewhat abstract. But then again, that is what future work is for.

How to Save the Internet is compelling and highly informative. If you want to learn more on the technical side, computer scientists Pamela Zave and Jennifer Rexford cover similar issues in greater depth in their book The Real Internet Architecture: Past, Present, and Future Evolution. Narechania and Shenker’s particular contribution is to show how these technical developments have significant legal and regulatory consequences. Their article is a must-read for Internet-law policy and Internet-law pedagogy.

Cite as: James Grimmelmann, The Edge of Tomorrow, JOTWELL (November 28, 2025) (reviewing Tejas N. Narechania & Scott Shenker, How to Save the Internet, __ Berkeley Tech. L.J. __ (forthcoming), available at SSRN (Mar. 18, 2025)), https://cyber.jotwell.com/the-edge-of-tomorrow/.

Distinguishing Marks

Dustin Marlan, Servicing Trade Dress: Demystifying the Tertium Quid, 58 U.C. Davis L. Rev. 1513 (2025).

It is rare in legal scholarship to be both novel and clearly correct at the same time. In Servicing Trade Dress: Demystifying the Tertium Quid, Professor Marlan pulls it off. He starts with a puzzling carveout in a key Supreme Court case from 2000, Wal-Mart Stores, Inc. v. Samara Brothers, Inc. That case drew a significant line between product packaging (which is eligible for immediate trademark protection if it is “inherently distinctive”—that is, if the court thinks that consumers would immediately recognize it as indicating the source of a product, rather than as mere description or decoration) and product design (which may be protected by trademark law only if it has gained “secondary meaning” through recognition in the marketplace over time). Product design isn’t as likely to signal source to consumers as words are, the Court reasoned, and, separately, it’s also very likely to provide non-source-related benefits. Most of the time, a cigar is just a cigar. Thus, trademark protection should be given sparingly, only when the claimant proves that the claimed design is actually serving a source-identifying function.

Wal-Mart’s rule makes it harder to bring anticompetitive lawsuits against competitors based on product design—the burden will be on the claimant to prove both that its claimed design is nonfunctional and that consumers are likely to perceive it as an indicator of source. This is particularly useful because competitors are likely to want to use similar product designs, not to confuse consumers, but to provide consumers with the same benefits.

But, in order to avoid overruling a previous case (Two Pesos, Inc. v. Taco Cabana, Inc.), the Wal-Mart Court distinguished the layout/décor of a restaurant as a “tertium quid,” a new and indefinite third category. The Court told us that the layout and décor of a restaurant were enough like product packaging to be eligible for inherent distinctiveness. Specifically, the Two Pesos Court adopted the description that the claimed trade dress was

[A] festive eating atmosphere having interior dining and patio areas decorated with artifacts, bright colors, paintings and murals. The patio includes interior and exterior areas with the interior patio capable of being sealed off from the outside patio by overhead garage doors. The stepped exterior of the building is a festive and vivid color scheme using top border paint and neon stripes. Bright awnings and umbrellas continue the theme.

(If you think that sounds like a bog-standard Mexican restaurant, you’re not wrong.) Restaurant trade dress (and perhaps any store décor) thus could receive trademark protection immediately upon use by a business, without building a reputation among consumers.

In further complicating the doctrine by distinguishing instead of overruling Two Pesos, the Court created a mess. This “tertium quid” has long bedeviled trademark scholars, not least because the Wal-Mart Court also said that, if there was doubt as to whether the claimed trademark matter was product design or product packaging, courts should err on the side of “design,” thus requiring secondary meaning. They should do this to protect legitimate competition and avoid overprotecting features that might be desirable to consumers for non-source-designating reasons, like the general attractiveness of a design. A layout of a store may have some features that are more like “packaging” for the services the store delivers, but at least some features are likely to be part of the services offered by the store—the “design.” So it would seem as if we should err on the side of requiring secondary meaning in cases of doubt. What, then, is the “tertium quid” of a restaurant layout that the Court said could be inherently distinctive?

Professor Marlan shows that there is an answer and that the Court should have overruled Two Pesos outright for coherence. He canvasses post-Wal-Mart cases to show that essentially all “tertium quid” cases involve services, which is useful to know, especially given that other parts of trademark law distinguish products from services for practical reasons. He also successfully uses the marketing literature, which discusses how companies should design services such as serving meals (restaurants) and providing airplane flights (airlines). That is, the marketing literature treats service design as analogous to product design. Given that service design, like product design, regularly attempts to provide actual benefits to consumers beyond source indication, such as an eating environment that makes them feel happy and also encourages them to eat quickly, it should be treated by trademark law like product design and thus not protected under trademark theories in the absence of secondary meaning. The last part of the article before the conclusion is a bit more speculative, but appropriately modest in its suggestions about the relationship between trade dress protection for services and the overcommodification of modern life.

As I was reading the article, I thought “of course!” but the issue hasn’t previously been explained in such a conceptually helpful way. The fact that the non-legal literature supports the claims he’s making is further evidence that Professor Marlan has hit on a compelling argument grounded in reality as well as in legal concepts.

In addition, the tertium quid problem rears its head in other contexts to which Professor Marlan’s analysis could usefully apply. For example, courts have long struggled to fit celebrity false endorsement claims into the trademark framework. The celebrity is usually well-known for being someone, and even if that celebrity came from a particular endeavor like football or music, that doesn’t constrain the fields in which they could plausibly endorse products or services. But that makes trademark logic hard to apply. Is the trademark any image of the celebrity whatsoever? Courts don’t like that, because trademark law prefers more tangible definitions. The appearance of deepfakes has increased these anxieties.

As one court in a voice-cloning case recently wrote, “[b]ecause marks can take essentially any form, courts must therefore be careful to ensure that they receive protection only when used as contemplated by the statute—that is, as marks.” The court found celebrity endorsement cases to be an “uneas[y]” fit with this principle, “as celebrities’ personas are also their products,” though at least for advertising cases the service of endorsement seems like the trademark use. Citing Jennifer Rothman, the court emphasized that “personal marks” are treated differently from other marks—they’re harder to register, and the law “is highly skeptical of efforts to restrict individuals from using their own identities in trade.” Basically, celebrities, because they signify attitudes, trends, and points of contention, are good to think with, which means that many uses of their identities won’t be source-identifying uses. Connecting the anticompetitive risks of overprotecting product and service design with the anticompetitive and anti-speech risks of overprotecting celebrity identity helps us understand both the justifications for and the limits of trademark protection.

Just as the layout and features of a store are regularly part of the services being sold, voices can be part of a product or a service: an audiobook narrator is not signifying source, but telling a story. Professor Marlan’s work, like Mark Lemley and Mark McKenna’s work on “the trademark spot” on products, fits into a larger story about the further development of the “use as a mark” doctrine to cabin trademark’s nearly unchecked expansion. It would be nice for courts to admit more often that “use as a mark” is, in significant part, a normative inquiry. To the extent that it is empirical, it is empirical as a rule of thumb—we presume, often irrebuttably, that many things do or do not automatically function as trademarks, or perform non-trademark functions for consumers that therefore preclude trademark protection. Professor Marlan’s use of the marketing literature on service design persuasively makes the case that we should include the design of services in our rule of thumb, “this probably isn’t a trademark.”

Cite as: Rebecca Tushnet, Distinguishing Marks, JOTWELL (October 31, 2025) (reviewing Dustin Marlan, Servicing Trade Dress: Demystifying the Tertium Quid, 58 U.C. Davis L. Rev. 1513 (2025)), https://cyber.jotwell.com/distinguishing-marks/.

Rummaging Rebooted

  • Andrew G. Ferguson, Digital Rummaging, 101 Wash. U. L. Rev. 1473 (2024).
  • Andrew G. Ferguson, Everything-Everywhere Searches, _ G.W. J. of L. & Tech. _ (forthcoming), available at SSRN (Feb. 17, 2025).

Advances in digital surveillance technologies have posed difficult questions for Fourth Amendment doctrine. For instance, does the government need a warrant to install cameras on poles along a street to monitor who enters and exits homes? What if the government wants a list of all cell phones near a robbery scene at the time of the crime? Is the answer different if the government wants several days of data, but only about one person? What if the data comes from an app developer like Waze (or your flashlight app) or a smart home device like an Alexa, rather than a cell phone provider?

The Supreme Court has begun to address these issues in cases like Riley (barring warrantless cell phone searches during arrest) and Carpenter (requiring warrants for long-term cell phone location data). But as Andrew G. Ferguson argues in two recent articles—Digital Rummaging and Everything-Everywhere Searches—Fourth Amendment doctrine has nonetheless not kept pace with the scale of digital surveillance. In a turn to history that may prove particularly persuasive to constitutional originalists, Ferguson argues that the Founding generation’s objections to “rummaging” through general warrants provide an appropriate guiding principle for constraining surveillance in the digital age.

Ferguson warns that existing doctrinal focus on “reasonable expectations of privacy” and “trespass” may perversely encourage mass surveillance: “by searching everyone and everything at the same time, police can elide the traditional threshold search and seizure questions because it is not clear what expectations anyone has under such continuous surveillance or even when the search occurs.” As an antidote, Ferguson revives the Founding Era’s concern with “rummaging.” In Digital Rummaging, he introduces the “rummaging principle,” traces its historical roots, defines a “rummaging test” for courts, and applies that test to smart home data and long-term digital pole camera surveillance. In Everything-Everywhere Searches, he extends the rummaging principle to geofence warrants. Both articles merit close reading.

Ferguson traces the “rummaging principle” to the Founding generation’s deep mistrust of “government agents rummaging around homes, property, and papers.” Rooted in opposition to general warrants and writs of assistance, this principle has long served as a background constraint in existing Fourth Amendment doctrine. As the Supreme Court has observed, the Fourth Amendment was a direct response to these colonial-era abuses, “which allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity” (emphasis in Ferguson). Drawing on early sources, like Wilkes v. Wood and Entick v. Carrington, Ferguson shows how terms like “rummage,” “rifle,” and “ransack” captured the Founders’ fear of unchecked searches. While the Supreme Court has generally overlooked rummaging in defining what counts as a search, recent cases involving digital technology in policing, like Carpenter, have begun to resuscitate interest in this inquiry.

Building on this history, Ferguson distills the rummaging principle into a modern “rummaging test” constraining digital policing under the Fourth Amendment. He argues that courts should ask whether a contested search involves “(1) arbitrary enforcement of police power; (2) overreaching exploratory expansions of initially justified searches; (3) intrusions into constitutionally secured interests (e.g., homes, persons, papers, effects, location); or (4) exposure of private details as a form of political or social control.” These inquiries, Ferguson explains, align with the core harms that the Fourth Amendment was meant to prevent. Arbitrary enforcement occurs when unchecked police power leads to unreasonable interference with individuals or communities. Overreach happens when searches are too broad, such as using “probable cause pretext about one crime to search for other[s]” or sweeping innocent conduct or people up in investigations.

Intrusion refers to government efforts to access constitutionally protected spaces, people, or information. As Ferguson observes, protecting the home means safeguarding “the things that happen inside those four walls, not the walls themselves.” So too for people and, among other things, the information in their DNA. Finally, exposure involves the risk of revealing private information, recalling early privacy law concerns about the “privacies of life.” Government searches can create stigma that signals guilt to others and can become a powerful tool for social or political control.

Ferguson argues that this rummaging test can help determine both whether a “search” has occurred and whether a warrant, or other procedural or legal safeguards, makes that search reasonable. Ferguson also suggests that if a search causes significant enough rummaging harms, it may violate the Fourth Amendment even with a warrant.

The rummaging test clarifies decisions like Riley and Carpenter, which limited warrantless government conduct and “embraced—without necessarily acknowledging it—the principles behind the rummaging test.” It also casts doubt on older decisions, like Greenwood, in which the Supreme Court held that there is no Fourth Amendment protection for trash—an outcome Ferguson suggests fails to account for the harms of rummaging.

Turning to new forms of digital policing, Ferguson applies the rummaging test to smart home data and long-term pole cameras. Police typically use these tools based on mere hunches, hoping that rummaging through the data might turn up something useful. But if police can use these tools without a warrant, as prosecutors argue, it opens the door to arbitrary, overbroad, and deeply intrusive searches. Most of the information gathered would be “innocent, embarrassing, or irrelevant.” Smart home data may reveal details not otherwise “obtainable absent an entry into the home (if then),” while pole cameras could be deployed against disfavored individuals and entangle anyone with whom they socialize. Nonetheless, Ferguson suggests that with carefully crafted warrants—including “minimization requirements, time limits, or other considerations”—these tools might yet pass constitutional muster.

Finally, in Everything-Everywhere Searches, Ferguson expands the rummaging test to digital surveillance that targets everyone’s data in hopes of generating a suspect list, or even just clues to the start of one. Ferguson focuses on geofencing, but similar mass queries arise in law enforcement use of consumer genetics data to generate leads by identifying genetic relatives of an unknown suspect, persistent aerial surveillance that records and stores data about everything that happens on city streets, facial recognition tools that can track or identify persons of interest, and tools like ShotSpotter that are always listening and direct police to possible crime scenes where everyone present comes under suspicion. Ferguson identifies three characteristics these technologies share: they are pervasive, capturing information in “widespread, comprehensive, and voluminous” ways; they are digital, enabling investigators to “search back in time, aggregate the data, and connect personal data points for new insights”; and they are indiscriminate, “collect[ing] information constantly against everyone, innocent, guilty, or anywhere in between.” These features often frustrate Fourth Amendment protection, especially when courts treat third-party data as beyond its scope.

Ferguson applies the rummaging test to geofence queries, where police ask companies like Google to identify all devices present in a specific area during a specific time. Police often use a geofence query when they have no suspect in mind, hoping that someone in the data will fit. Ferguson argues that warrantless geofence queries are classic rummaging—arbitrary, overbroad, and deeply intrusive. In this location data panopticon, “even just the potential of collection” could be chilling. Even with a warrant, problems remain, because of the inevitable involvement of private intermediaries.

Ferguson analyzes geofence warrants as currently conducted: authorized by courts, but mediated in a three-step process by Google. At Step One, Google scans its entire location database and returns anonymized data on all devices in the geofence—an overbroad search that sweeps in innocent people. Steps Two and Three narrow the pool and eventually identify individuals, but the initial dragnet remains constitutionally troubling. Ferguson warns that, even with these limits, “in terms of a grant of power, it is hard not to see the rhetorical parallels between geofence warrants and the general warrants that gave rise to the Fourth Amendment.” To be lawful, courts would need to impose a far more rigorous definition of particularity, and even then, such warrants might only “slightly alleviate” our concerns.
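To make the three-step structure concrete, here is a schematic sketch in Python. The records, field names, fence coordinates, and thresholds are all invented, and a provider’s actual process is far more involved; the point is only that Step One scans everyone before anyone is a suspect.

    # Schematic sketch of the three-step geofence process described above.
    # All data, names, and criteria are invented for illustration.
    LOCATION_DB = [  # provider's full location history: every device, everywhere
        {"device": "anon-17", "lat": 38.900, "lon": -77.030, "t": 1200},
        {"device": "anon-42", "lat": 38.901, "lon": -77.031, "t": 1210},
        {"device": "anon-88", "lat": 40.713, "lon": -74.006, "t": 1205},
    ]
    IDENTITIES = {"anon-42": "subscriber: J. Doe"}  # withheld until Step Three

    def step_one(lat, lon, t0, t1, radius=0.01):
        """Scan the entire database; return anonymized IDs inside the fence."""
        return {rec["device"] for rec in LOCATION_DB
                if abs(rec["lat"] - lat) < radius
                and abs(rec["lon"] - lon) < radius
                and t0 <= rec["t"] <= t1}

    def step_two(devices):
        """Expanded, still-anonymized location history for a narrowed subset."""
        return [rec for rec in LOCATION_DB if rec["device"] in devices]

    def step_three(devices):
        """Deanonymize the final, narrower subset."""
        return {d: IDENTITIES.get(d, "unknown") for d in devices}

    candidates = step_one(38.90, -77.03, 1195, 1215)  # sweeps in everyone present
    history = step_two(candidates)                    # police narrow from here
    print(step_three({"anon-42"}))                    # {'anon-42': 'subscriber: J. Doe'}

Even in this toy version, the Step One scan touches every record the provider holds, which is precisely the feature Ferguson identifies as rummaging.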

Ferguson’s expansive work on digital rummaging skillfully shows us one more way in which the doctrinal myopia on “expectations of privacy” or “trespass” can miss the real harms the Fourth Amendment was intended to prevent. His rummaging test offers a historically grounded lens for explaining the harms of big data searches that “invert the traditional investigative model.” This work invites fresh debate and legal challenges across a host of investigative methods, both well-established and new. Perhaps most controversially, Ferguson suggests that some surveillance practices may be so invasive that they should simply be off limits—warrant or not.

Cite as: Natalie Ram, Rummaging Rebooted, JOTWELL (September 3, 2025) (reviewing Andrew G. Ferguson, Digital Rummaging, 101 Wash. U. L. Rev. 1473 (2024); Andrew G. Ferguson, Everything-Everywhere Searches, _ G.W. J. of L. & Tech. _ (forthcoming), available at SSRN (Feb. 17, 2025)), https://cyber.jotwell.com/rummaging-rebooted/.

AI Disgorgement or AI Recalls: A Trip down Remedy Lane

  • Daniel Wilf-Townsend, The Deletion Remedy, 103 N.C. L. Rev. __ (forthcoming 2025), available at SSRN (Sept. 20, 2024).
  • Christina Lee, Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms, 16 U.C. Irvine L. Rev. ___ (forthcoming 2026), available at SSRN (Apr. 10, 2025).

In 2019 the Federal Trade Commission (FTC) created a new remedy in data privacy and AI law: algorithmic disgorgement, also known as model deletion. The FTC required that Cambridge Analytica “delete all Covered Information collected from consumers… and any information or work product, including any algorithms or equations, that originated, in whole or in part, from this Covered Information.” The idea behind model deletion is that companies should not be able to profit from models trained on wrongfully obtained personal data.

Algorithmic disgorgement has by now received its fair share of praise, including from FTC Commissioner Rebecca Kelly Slaughter, who called it “an innovative and promising remedy.” The remedy’s boosters, however, have largely lauded how algorithmic disgorgement/model deletion can mitigate data privacy and algorithmic governance laws’ struggles to identify, quantify, and deter legally cognizable harms.

Two excellent forthcoming articles—Daniel Wilf-Townsend’s The Deletion Remedy and Christina Lee’s Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms—bring both more caution and more depth to the conversation. Both articles offer nuanced framings of algorithmic disgorgement as a remedy, and guiding thoughts on when and how it might most appropriately be deployed.

Wilf-Townsend acknowledges some of the benefits of model deletion (his preferred term, because he claims it really isn’t “disgorgement” at all in the traditional sense) while also criticizing its potentially disproportionate consequences. The article begins with a detailed account of the remedy’s rise. Since Cambridge Analytica (2019), the FTC, particularly under the Biden administration, has regularly deployed model deletion as a remedy: in its orders in Everalbum (2021), Weight Watchers (2022), Ring (2023), Edmodo (2023), Rite Aid (2024), and Avast (2024). (He and Lee cover much the same list of enforcement actions.) In litigation, however, model deletion has only barely entered the picture.

Wilf-Townsend calls the remedy “model deletion” because he argues that, despite use of the term “disgorgement” by former FTC Commissioner Chopra and FTC Commissioner Slaughter, the remedy really isn’t disgorgement. Fascinatingly, he argues that the fact that “algorithmic disgorgement” is a misnomer may preserve the remedy for the FTC’s use. The Supreme Court held in 2021 in AMG Capital Management that the FTC is not authorized under Section 13(b) to order retroactive monetary disgorgement (i.e., disgorgement of profits). But Wilf-Townsend points out that model deletion is prospective, not retrospective; and it’s not monetary, but behavioral. Thus, the FTC could still properly order model deletion as injunctive relief. In copyright law, the source of authority is clearer: 17 U.S.C. § 503 provides that courts “may order the destruction or other reasonable disposition of all . . . articles by means of which” unlawful copies “may be reproduced.”

Wilf-Townsend recognizes that model deletion can prevent ongoing harms caused by a model, such as the continued disclosure of private personal information or the direct reproduction of images in its training data. Model deletion also avoids the “difficulty of putting a dollar value on a harm” that is so prevalent in U.S. privacy law. Unlike damages, “model deletion… does not inherently need to be pegged to any sort of quantified harm.”

However, Wilf-Townsend is deeply concerned about the potential for throwing the baby out with the bathwater. He describes model deletion as it has been implemented thus far as amounting to a “no bad bytes” rule: if even some of the training data was obtained illegally, then the whole model goes down, regardless of where the model’s value originates, and regardless of potential social costs.

The problem per Wilf-Townsend is that model deletion as currently practiced does not require a showing that the unlawfully gathered or unlawfully processed data be the cause of a model’s value. He argues that for models trained on immense databases, like leading LLMs, “neither the law nor the logic of disgorgement would support the remedy of model deletion” because too little of the overall model’s function and value derives from what might be a relatively minuscule portion of its training data.

Wilf-Townsend closes by proposing “a test for determining whether to use model deletion in a given case.” That test assesses how much of the value of a model is derived from unlawful data, which, in my view, would lead to valuation challenges that could undo some of the central benefits of resorting to algorithmic disgorgement in the first place.

Even if a model’s value is not primarily attributable to unlawful data, Wilf-Townsend suggests that model deletion might still be appropriate when considering the defendant’s degree of culpability, a balance of the hardships (similar to equity frameworks), and the availability of alternative remedies (including fine-tuning, unlearning, and filtering).

Where Wilf-Townsend’s article largely compares and contrasts model deletion with traditional monetary disgorgement, Christina Lee does further conceptual heavy lifting. Lee finds that what regulators have been calling “algorithmic disgorgement” (the term she uses throughout) in fact involves two different scenarios of harms and related remedies, tracing to two different underlying principles. This is fascinating work. Lee’s article does what the best articles do: sifts through some complex and sometimes nonintuitive sources to argue that bigger, hard-to-initially-see patterns are at play.

Lee begins by highlighting that the FTC’s use of the disgorgement remedy in Rite Aid marked a decided shift: the order required Rite Aid “to instruct any third parties that received the tainted data from Rite Aid to delete… any models or algorithms trained on that data.” Importantly, in Rite Aid the FTC went after the company not just for using unlawfully gathered data, but for using the facial recognition software unfairly.

This leads Lee to argue that the FTC has really been deploying not one but two distinct remedies: the first, data-based disgorgement that focuses on the provenance of the model (its unlawful training data); and the second, something more like a product recall, which focuses on the harms the use of a model is causing in the world. Lee convincingly argues that “[t]hey are two distinct remedies that happen to share the same mechanics.”

Lee argues that the data- and use-based remedies stem from two different principles: disgorgement and consumer protection. Where disgorgement attempts to undo wrongful profits stemming from unlawful conduct, consumer protection is driven by “the desire to avoid having in the market something that is likely to cause harms to a lot of people,” regardless of wrongdoing. The two principles also address issues at different stages of the AI lifecycle: true disgorgement focuses on training data, while consumer protection focuses on a model’s use in the world. In effect, consumer protection principles lead to a “disgorgement” that really is more like a postmarket AI recall.

The product recall work here is a must-read. As Lee notes, the EU AI Act empowers European authorities to order “recall” and “withdrawal” of AI systems. Product recalls in other fields stem from a product defect that is repeatedly observable during normal operation or reasonably foreseeable use. What is required is not a showing of scienter or even wrongful behavior, but “a pattern of hazardous defect.”

Lee explains that product recalls may be mandated by regulators, but are also often voluntary, or the result of regulatory nudging. Lee points out that unlike algorithmic disgorgement, recalls in practice occur as an escalating toolkit of remedies: from warning labels and minor repairs, to a requirement that a seller cease production and offer refunds. These escalating levels of recall, she argues, “balance the need to protect consumers from mass harm and the value of having a useful tool available, even if the tool poses some risks.” This is market-level consumer protection reasoning, consistent with the underlying principle she identifies.

The second half of Lee’s article shifts to a more practical critique of the remedies. Lee draws on Katherine Lee et al.’s and Jennifer Cobbe et al.’s important work on AI supply chains to argue that the disgorgement remedy often misses the mark. This is both because many distinct actors may be involved in the creation and fine-tuning of an AI model, and because foundation models may serve as a sort of AI infrastructure (my term, not hers) on which other AI systems are built.

Two of Lee’s astute criticisms stem from these observations: that algorithmic disgorgement often has little impact on the actual wrongdoer, who might be elsewhere in the supply chain; and that algorithmic disgorgement may disproportionately affect innocent third parties, especially those using foundation models in different ways, for different purposes. Lee does, however, acknowledge that a consumer-protection-motivated disgorgement/recall might “be justified in certain circumstances…[i]f the offense is egregious, or the magnitude of the potential harm great.” But “in many instances, this will not be the case.”

I came away from these two great articles knowing a lot more about the substantive law and feeling able to situate it within helpful theoretical framings. I do think, however, that both undersold the unique institutional story of the remedy. The FTC’s accelerated use of the disgorgement remedy occurred against the backdrop of its loss of monetary remedies in 2021. Both former Commissioner Chopra and Commissioner Slaughter appear to have served as norm entrepreneurs within the FTC, advocating for algorithmic disgorgement as deterrence. While each article covers their advocacy, neither makes a clear argument that the commissioners may have been constructing a replacement enforcement tool as other tools were taken away. Further, the institutional story entails looking at the FTC as a consumer protection agency. I would have liked to see both authors, but especially Lee, discuss the effect the FTC’s institutional values may have had on the development and subsequent extension of the remedy.

These articles in my view represent crucial readings in large part because I suspect that, unlike in the European Union, the U.S. approach to AI will largely end up being (or perhaps already is?) primarily postmarket. As the backdrop to settlement negotiations, a motivator for creating AI safe harbors through legislation, or the site of a significant pain point for AI companies, AI disgorgement represents a central regulatory tool in efforts to come.

Cite as: Margot Kaminski, AI Disgorgement or AI Recalls: A Trip down Remedy Lane, JOTWELL (September 3, 2025) (reviewing Daniel Wilf-Townsend, The Deletion Remedy, 103 N.C. L. Rev. __ (forthcoming 2025), available at SSRN (Sept. 20, 2024); Christina Lee, Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms, 16 U.C. Irvine L. Rev. ___ (forthcoming 2026), available at SSRN (Apr. 10, 2025)), https://cyber.jotwell.com/ai-disgorgement-or-ai-recalls-a-trip-down-remedy-lane/.

Deepfakes Deconstructed

Benjamin Sobel, A Real Account of Deep Fakes, available at SSRN (May 16, 2024).

With the rapid advancement of photorealistic generative AI technology, the problem of sexually explicit deepfakes has grown more urgent than ever. Thanks to widely available AI systems, users can now easily create images that appear to depict real people engaging in sexual acts. Not only have Taylor Swift and other celebrities been targeted, but deepfakes are also now alarmingly prevalent in American schools.

The government has already started to address the problem. At least 26 states now penalize the creation or distribution of nonconsensual sexually explicit deepfake imagery. And the federal Take It Down Act, which creates criminal penalties and a takedown regime for both real and AI-generated nonconsensual intimate imagery (NCII), was recently signed into law by President Trump. But, as Ben Sobel argues in his excellent (and award-winning) new article, A Real Account of Deep Fakes, many of these bans have been passed without first articulating the precise harms posed by sexually explicit deepfakes, leaving the statutes open to free expression challenges. Sobel’s article aims to fill this gap. Through painstaking comparisons between deepfake bans and other areas of law that regulate deception, abuse, privacy invasions, and obscenity, the article crystallizes the normative arguments for deepfake regulation and the First Amendment stakes.

Beginning with a comprehensive survey of all recently passed or proposed state and federal laws, Sobel identifies several features common to many bans of sexually explicit deepfakes. In particular, these laws typically require the deepfake image to be a photorealistic depiction of an identifiable person, they prohibit distribution, and they do not require intent to deceive or harm. Most importantly, they do not allow the use of a disclaimer to avoid liability.

The fact that these bans hold distributors strictly liable, even if the deepfake images are clearly stated to be fictional, means that we cannot understand sexually explicit deepfakes as purely a defamation problem. Defamation requires a false statement that purports to be fact, meaning a disclaimer can generally be used to avoid liability. Sobel instead turns to privacy law to see if it offers a better fit. Building on recent work by Danielle Citron, Benjamin Zipursky, and John Goldberg, Sobel notes that the common law privacy torts are also mismatched with deepfake regulation. Some require the disclosure of true information, which deepfakes by definition are not. The tort of false light polices “offensive” distribution of false information but, like defamation, requires a statement that purports to be factual. Privacy law does have ways of preventing the use of another’s likeness without permission, but these too fit deepfake bans unevenly. Claims under the right of publicity are generally limited to commercial uses. And “appropriation”—which Sobel treats as a cousin to the right of publicity that focuses specifically on dignitary harms—generally requires that the appropriation “advantage” the defendant.

Sobel ultimately concludes that deepfake bans are a kind of appropriation regime, but with a different normative core: “Today’s anti-deepfakes statutes redress the injury that appropriation redresses, subject. . . to the offensiveness limitation that appears in the false light tort.” That is, they focus on the “most offensive uses of identity—those that are (a) pornographic and (b) involve the manipulation of persons’ realistic visual likenesses rather than merely the invocation of their names.” The normative basis for deepfake regulation is thus “offensiveness” or “outrageousness,” of the kind that the law recognizes in a variety of areas, but one fraught with First Amendment uncertainty.

The article unpacks the normative and First Amendment stakes of this “offensive appropriation” rationale by turning to an unusual place: semiotic theory, and in particular the work of Charles Sanders Peirce. Semiotics is the study of signs—defined broadly as words, images, sounds, gestures—looking especially at how a sign’s meaning is created and communicated. Scholars have used semiotics in sophisticated ways to illuminate a variety of legal regimes, and Sobel’s work seeks to continue this tradition.

Semiotics distinguishes between two key types of signs: “indices” are signs that point to real-world phenomena (like a photograph) and “icons” are signs that resemble something but do not record reality (like a drawing). Deepfakes, as depictions that do not purport to document reality, are icons—they are closer to drawings than to something like documentary footage. This distinction is not merely semantic: recognizing that the law of deepfakes is fundamentally about the regulation of offensive icons yields interesting comparisons that illustrate the constitutional precariousness of these bans. Sobel’s comparisons include the prohibition on trademark dilution by tarnishment, bans on “morphed” child sexual abuse materials (materials where the image of a child is doctored to appear sexually explicit), and bans on flag and effigy destruction.

Rather than addressing each comparison, I will focus on one example that I think illustrates the value of Sobel’s turn to semiotics: written sexual fantasies. As cases like that of the notorious “cannibal cop” show, the First Amendment generally bars criminalizing written sexual fantasies that involve real people, no matter how disturbing or obscene. But, as Sobel asks, what is the real difference between written sexual content involving a real person and a non-misleading deepfake? Neither is an index: both describe or depict identifiable people without necessarily purporting to document actual events, and both are offensive. Perhaps the visually realistic nature of a deepfake renders it so harmful that a categorical ban would not offend the First Amendment, similar to the way courts have seemed to accept that morphed child sexual abuse materials (also categorizable as icons) fall categorically outside the First Amendment.

Sobel does not claim to offer a doctrinal solution, but his analysis shows that a blanket deepfake ban is, in essence, a content-based ban on expressive speech. States should be prepared to defend these bans as such, rather than hiding behind the inaccurate framing of defamation.

This analysis is subtle, and my one quibble is that Sobel could do a bit more to explicitly defend the need for semiotic analysis to make his main points, preemptively addressing those who might dismiss it as conceptual flair. More engagement with the rich literature on law and semiotics might help sway such skeptics. That said, I personally found the use of semiotic theory effective. The article rewards close reading, and Sobel is adept at threading complex social theory through many different areas of law.

Ultimately, Sobel’s work counsels us that even dire problems like sexually explicit deepfakes must be addressed judiciously to avoid undermining free expression and other constitutional protections. This is a lesson that we would be wise to apply to other problems posed by generative AI, which have led to a wave of new or proposed legislation. Many of these problems are serious, but their seriousness should not obviate the need for thoughtful analysis of AI’s precise harms and carefully tailored regulatory solutions.

Cite as: Jacob Noti-Victor, Deepfakes Deconstructed, JOTWELL (July 18, 2025) (reviewing Benjamin Sobel, A Real Account of Deep Fakes, available at SSRN (May 16, 2024)), https://cyber.jotwell.com/deepfakes-deconstructed/.

The Problem of Insincere, Post-Hoc AI Explanations

Boris Babic & I. Glenn Cohen, The Algorithmic Explainability “Bait and Switch,” available at SSRN (August 20, 2023).

AI is mysterious and important. It’s important because it’s showing up everywhere and doing lots of things. It’s mysterious because we very often don’t know how it works or why it comes to the conclusions it does. Whether AI should be important is hotly debated, but its mystery is widely regarded as a problem, particularly when AI is making inscrutable decisions that matter to people’s lives. And so there are widespread calls in law, policy, and scholarship for explainable AI—that is, ways to explain just why an AI system came to the conclusion it did. In The Algorithmic Explainability “Bait and Switch,” Boris Babic and Glenn Cohen add to the literature on explainable AI by clearly and convincingly arguing that explainable AI is “fool’s gold”—shiny and exciting on the surface, but not what we need, because it’s post hoc, insincere, tough to judge, and can’t be used to effectively guide actions.

So what is explainable AI, and why does it matter? Essentially, the problem is that it’s too hard to understand how AI systems make decisions; the models are too complicated and don’t make sense to us, so they’re opaque. Explainable AI tries to use another, simpler algorithm to approximate a plausible reason the AI might have come to its conclusion; that explanation is typically specific to the conclusion being questioned. And it happens after the initial system does its thing: it’s a post-hoc approximation, not a true accounting of why the initial system actually did what it did. Babic and Cohen illustrate this using an extended example, an admissions model for a hypothetical law school, which shows the pitfalls and why they matter.

(As an aside, this bit demonstrates a real strength of the piece: its comprehensibility on complex topics. There’s a tension in law review articles: They need to speak to generalist readers (including the law students who do selection and editing, as well as scholars in adjacent fields), but they also need to move the ball forward for expert readers who are already in the conversation. It’s tough to do this well; typical approaches include neglecting one task or writing very long pieces with lots of detailed background to get the nonexpert up to speed. Both can be frustrating. Babic and Cohen smoothly walk this dual path, in part by using a sort of Choose-Your-Own-Adventure structure in the Background. ‘Here’s the math,’ they say, ‘but if you’d like, feel free to skip ahead to the intuitive example where we make it easy to understand.’)
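To make the mechanics concrete, here is a minimal sketch of the kind of post-hoc surrogate explanation the authors have in mind (my own illustration, not from the article; the features, numbers, and model choices are all hypothetical). A complex “black box” model is probed near one decision, and a simple linear model is fit to its behavior there; the simple model’s coefficients become the “explanation”:

# A post-hoc local surrogate explanation, in miniature. The "black box"
# is a random forest; the "explanation" is a simple linear model fit to
# the forest's behavior near one applicant's case.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features: [GPA, LSAT percentile, essay score]
X = rng.uniform(0, 1, size=(5000, 3))
y = (0.4 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * np.sin(8 * X[:, 2]) > 0.45).astype(int)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# One borderline applicant whose rejection we want "explained."
applicant = np.array([[0.7, 0.6, 0.44]])

# Sample points near the applicant, ask the black box to label them...
neighbors = applicant + rng.normal(scale=0.1, size=(500, 3))
labels = black_box.predict(neighbors)

# ...then fit a simple, human-readable surrogate to those labels.
surrogate = LogisticRegression(max_iter=1000).fit(neighbors, labels)
print("surrogate coefficients (the 'explanation'):", surrogate.coef_)
# The surrogate describes how the black box behaves nearby; it is not a
# record of why the black box actually decided. That gap is the
# "insincerity" Babic and Cohen critique.

This is, roughly, the recipe behind local-surrogate tools such as LIME (minus the sample weighting): the tidy coefficients are a story about the model, not the model’s own reasons.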

Because AI explanations are simplified post-hoc approximations, they’ve got some real problems. They’re “insincere,” Babic and Cohen argue, in that they offer plausible reasons that the system might have used to make a particular decision (in the example, admitting a prospective student or not). But there’s no guarantee that they’re the actual reason. Indeed, there couldn’t be such a guarantee: the whole point of post-hoc explanations is that they’re simple enough to be understood, while the whole reason we need them in the first place is that the actual AI system being used isn’t. There’s a gap by definition. And so these answers aren’t sincere.

That post-hoc insincerity is a real problem for AI explanations for three big reasons. First, if an AI explanation doesn’t give the actual reason for a decision, the affected party can’t know what to change to alter the outcome next time (as an alternative, some have suggested systems for playing around with lots of possibilities to try to figure that out). It’s not an “action guiding” explanation if it can’t reliably guide action, something it’s often hoped explanations will do. If you’re trying to find out why your date to the movies is late and you only get a plausible explanation rather than the actual explanation, it’s hard to know whether to bail, buy a ticket for a later show, or get snacks because they’re on their way. (The article is spangled with delightful, intuitive examples that make tough concepts easier to understand, from Maverick and Goose piloting fighter jets to too-short dates to unethical test-ordering doctors; it’s a real strength.) More seriously, if someone is denied parole and told a plausible reason that might or might not be the real reason, it’s tough to know what to do to improve their chances next time.

The second big problem with insincerity is trust. One touted benefit of explainability is that if people affected by AI systems understand their reasons, they’ll trust the AI systems (and the human systems in which they’re embedded) more. Transparency matters, and that includes knowing how decisions were reached. But if explanations are insincere and inaccurate, that’s likely to destroy trust in the system, not build it. This, Babic and Cohen point out, is especially likely because explainable AI comes up with different explanations for different individual decisions—and if the subjects of those decisions can share stories, they might find pretty quickly that they were given different decision rules.

Third and finally, it’s important to evaluate AI systems’ decision rules, because many rules aren’t OK. If a post-hoc, insincere explanation doesn’t reliably reflect the actual decision rule, it’s not a useful path to evaluate whether that rule is racist or sexist or otherwise unacceptable (which is disturbingly often the case).

The problems Babic and Cohen highlight matter all the more as AI is incorporated into an ever-broader range of contexts and decisions. When they wrote this piece in the hoary days of 2023, generative AI was still relatively new, and they focused accordingly on classification algorithms. But the problems of explanation remain, not only with those older systems but also with generative AI. Indeed, users can ask a chatbot why it said what it said; trusting the answer is another matter. These issues aren’t going away.

So what’s to be done? There’s always the hope for a technological deus ex machina that makes all the black boxes transparent and explicable; that’d be lovely, but it seems unlikely, at least in the near term, whether because it’s computationally very expensive to peer inside even simple black boxes or because some black-box mechanics simply aren’t explicable. Instead, Babic and Cohen argue, we need to face up to the reality that explainable AI can’t really do all that’s asked of it. In some circumstances, that means we need to rely on interpretable AI or algorithms instead (simpler models we can actually understand); the Fair Credit Reporting Act takes this approach, for instance. Where procedural justice or democratic freedom are at stake, we truly need to understand why decisions are reached. In other contexts, we might be willing to sacrifice understanding in service of better performance; many medical AI systems might fall into this bucket. In any case, we should be clear-eyed about what we’re doing. With Babic and Cohen’s sharp and cogent explanation of explainability, that’s an easier task to undertake.
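For contrast with the post-hoc surrogate sketched above, here is what the interpretable-by-design alternative can look like (again my own hypothetical illustration, not the authors’; the credit-flavored feature names are invented). A shallow decision tree’s printed rules are not an approximation of the decision procedure; they are the decision procedure:

# An interpretable-by-design model: the printed rules ARE how decisions
# get made, so no post-hoc surrogate is needed. Feature names here are
# invented, credit-flavored placeholders on synthetic data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

features = ["income", "utilization", "delinquencies", "age_of_file"]
print(export_text(model, feature_names=features))
# Every prediction the model will ever make can be traced, sincerely,
# through these printed branches.

The trade-off, as the authors note, is that such simple models may perform worse than their opaque counterparts, which is the choice regimes like the Fair Credit Reporting Act implicitly make.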

Cite as: Nicholson Price, The Problem of Insincere, Post-Hoc AI Explanations, JOTWELL (June 20, 2025) (reviewing Boris Babic & I. Glenn Cohen, The Algorithmic Explainability “Bait and Switch,” available at SSRN (August 20, 2023)), https://cyber.jotwell.com/the-problem-of-insincere-post-hoc-ai-explanations/.

Centering the Vulnerable through Data Protection

Gianclaudio Malgieri, Vulnerability and Data Protection Law (2023).

For American lawyers, the concept of data protection can seem overly bureaucratic and even a bit obtuse. American legal scholars, in general, prefer to think in terms of privacy, with its manifold methods of potential protection of the liberal individual subject via tort causes of action, criminal law, consumer protection, and, occasionally, some actual command-and-control regulation. In other words, the concept of data protection can—again, particularly for American audiences—seem question-begging: protection of what data, whose data, and from whom? (Clearly the same questions can be and are asked about privacy protections.)

In his recent book, Professor Gianclaudio Malgieri explains why data protection laws matter. The GDPR isn’t merely an annoying consent regime for internet browsing; it can be mustered to protect people along several axes of vulnerability—including their demographics, yes, but also any power imbalance relative to data controllers. The GDPR isn’t ideal for guarding against vulnerability, because it lacks clear and explicit protections for the precarious, and, according to Malgieri, new regimes must be imagined and implemented. But the book’s critically optimistic view helps us see how data protection can be used, here and now, to guard against vulnerability: in essence, as a form of harm reduction. It is a rigorous book that deftly applies often ethereal (but important) philosophical concepts to a turgid regulatory regime in order to unpack that regime’s anti-subordination potential.

How so? To begin, Malgieri explains that while, on its face, the GDPR seems geared toward protecting an “average” data subject, there is room for consideration of contextual factors that might make the law more attentive to the needs of vulnerable subjects. Drawing from the work of Professor Martha Fineman and others, Malgieri recognizes that vulnerability is not a static concept tied to specific demographic identities, but a dynamic one that captures various kinds of power imbalances and intersectional identities. He then documents how European law makes room for the concept of a dynamic vulnerable subject in contexts ranging from human rights to consumer protection. He believes there is support for incorporating this approach into the interpretation of the GDPR, in part because of the GDPR’s solicitude for certain kinds of individuals, particularly children, and for particular kinds of information, including so-called special category or sensitive data.

Assuming that is true, Malgieri explains how the GDPR can be interpreted to consider vulnerability both when evaluating whether data processors are complying with their duties toward those individuals and when determining whether individuals have the capacity to take advantage of the GDPR’s consent- and objection-based safeguards. In other words, there may be some hard and fast limits on what data can be processed with respect to vulnerable individuals. In particular, Malgieri sees the data protection impact assessments (DPIAs) required by the GDPR as a fertile space where vulnerability concepts can be implemented with alacrity.

Make no mistake, Malgieri is clear-eyed that the GDPR is no magic wand for protecting vulnerable data subjects. And he recognizes both that his reading of the GDPR’s obligations with respect to vulnerability is aggressive (albeit textually strong), and that the GDPR could be amended to more explicitly capture the plastic concept of vulnerability without making it so flexible that it loses force and meaning. But Malgieri’s book does a truly commendable job of doing what lawyers ought to do: lawyer. It makes strong textual and normative arguments to advance the law toward justice and it does so in a methodical, disciplined, and yet accessible way. It’s a tremendous intervention for all those concerned about anti-subordination in the digital and physical spheres.

Cite as: Scott Skinner-Thompson, Centering the Vulnerable through Data Protection, JOTWELL (May 22, 2025) (reviewing Gianclaudio Malgieri, Vulnerability and Data Protection Law (2023)), https://cyber.jotwell.com/centering-the-vulnerable-through-data-protection/.