The Journal of Things We Like (Lots)
  • Daniel Wilf-Townsend, The Deletion Remedy, 103 N. Car. L. Rev. __ (forthcoming 2025), available at SSRN (Sept. 20, 2024).
  • Christina Lee, Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms, 16 U.C. Irvine L. Rev. ___ (forthcoming 2026), available at SSRN (Apr. 10, 2025).

In 2019 the Federal Trade Commission (FTC) created a new remedy in data privacy and AI law: algorithmic disgorgement, also known as model deletion. The FTC required that Cambridge Analytica “delete all Covered Information collected from consumers… and any information or work product, including any algorithms or equations, that originated, in whole or in part, from this Covered Information.” The idea behind model deletion is that companies should not be able to profit from models trained on wrongfully obtained personal data.

Algorithmic disgorgement has by now received its fair share of praise, including from FTC Commissioner Rebecca Kelly Slaughter, who called it “an innovative and promising remedy.” The remedy’s boosters, however, have largely lauded how algorithmic disgorgement/model deletion can mitigate data privacy and algorithmic governance laws’ struggles to identify, quantify, and deter legally cognizable harms.

Two excellent forthcoming articles—Daniel Wilf-Townsend’s The Deletion Remedy and Christina Lee’s Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms—bring both more caution and more depth to the conversation. Both articles offer nuanced framings of algorithmic disgorgement as a remedy, and guiding thoughts on when and how it might most appropriately be deployed.

Wilf-Townsend acknowledges some of the benefits of model deletion (his preferred term, because he claims it really isn’t “disgorgement” at all in the traditional sense) while also criticizing its potentially disproportionate consequences. The article begins with a detailed account of the remedy’s rise. Since Cambridge Analytica (2019), the Biden-era FTC regularly deployed model deletion as a remedy in its orders in Everalbum (2021), Weight Watchers (2022), Ring (2023), Edmodo (2023), Rite Aid (2024), and Avast (2024). (He and Lee cover much the same list of enforcement actions.) In litigation, however, model deletion has only barely entered the picture.

Wilf-Townsend calls the remedy “model deletion” because he argues that, despite use of the term “disgorgement” by former FTC Commissioner Chopra and FTC Commissioner Slaughter, the remedy really isn’t disgorgement. Fascinatingly, he argues that the fact that “algorithmic disgorgement” is a misnomer may preserve the remedy for the FTC’s use. The Supreme Court held in 2021 in AMG Capital Management that the FTC is not authorized under Section 13(b) to order retroactive monetary disgorgement (i.e., disgorgement of profits). But Wilf-Townsend points out that model deletion is prospective, not retrospective; and it’s not monetary, but behavioral. Thus, the FTC could still properly order model deletion as injunctive relief. In copyright law, the source of authority is clearer: 17 U.S.C. § 503 provides that courts “may order the destruction or other reasonable disposition of all . . . articles by means of which” unlawful copies “may be reproduced.”

Wilf-Townsend recognizes that model deletion can prevent ongoing harms caused by a model, such as the continued disclosure of private personal information or the direct reproduction of images in its training data. Model deletion also avoids the “difficulty of putting a dollar value on a harm” that is so prevalent in U.S. privacy law. Unlike damages, “model deletion… does not inherently need to be pegged to any sort of quantified harm.”

However, Wilf-Townsend is deeply concerned about the potential for throwing the baby out with the bathwater. He describes model deletion as it has been implemented thus far as amounting to a “no bad bytes” rule: if even some of the training data was obtained illegally, then the whole model goes down, regardless of where the model’s value originates, and regardless of potential social costs.

The problem, per Wilf-Townsend, is that model deletion as currently practiced does not require a showing that the unlawfully gathered or unlawfully processed data is the cause of a model’s value. He argues that for models trained on immense databases, like leading LLMs, “neither the law nor the logic of disgorgement would support the remedy of model deletion” because too little of the overall model’s function and value derives from what might be a relatively minuscule portion of its training data.

Wilf-Townsend closes by proposing “a test for determining whether to use model deletion in a given case.” That test assesses how much of the value of a model is derived from unlawful data, which, in my view, would lead to valuation challenges that could undo some of the central benefits of resorting to algorithmic disgorgement in the first place.

Even if a model’s value is not primarily attributable to unlawful data, Wilf-Townsend suggests that model deletion might still be appropriate when considering the defendant’s degree of culpability, a balance of the hardships (similar to equity frameworks), and the availability of alternative remedies (including fine-tuning, unlearning, and filtering).

Where Wilf-Townsend’s article largely compares and contrasts model deletion with traditional monetary disgorgement, Christina Lee does further conceptual heavy lifting. Lee finds that what regulators have been calling “algorithmic disgorgement” (the term she uses throughout) in fact involves two different scenarios of harms and related remedies, tracing to two different underlying principles. This is fascinating work. Lee’s article does what the best articles do: sifts through some complex and sometimes nonintuitive sources to argue that bigger, hard-to-initially-see patterns are at play.

Lee begins by highlighting that the FTC’s use of the disgorgement remedy in Rite Aid marked a decided shift by ordering Rite Aid “to instruct any third parties that received the tainted data from Rite Aid to delete… any models or algorithms trained on that data.” Importantly, in Rite Aid the FTC went after the company not just for using unlawfully gathered data, but for using the facial recognition software unfairly.

This leads Lee to argue that the FTC has really been deploying not one but two distinct remedies: the first, data-based disgorgement that focuses on the provenance of the model (its unlawful training data); and the second, something more like a product recall, which focuses on the harms the use of a model is causing in the world. Lee convincingly argues that “[t]hey are two distinct remedies that happen to share the same mechanics.”

Lee argues that the data- and use-based remedies stem from two different principles: disgorgement and consumer protection. Where disgorgement attempts to undo wrongful profits stemming from lawless actions, consumer protection is driven by “the desire to avoid having in the market something that is likely to cause harms to a lot of people,” regardless of wrongdoing. They also address issues at different stages of the AI lifecycle. True disgorgement focuses on training data. In effect, consumer protection principles lead to a “disgorgement” that really is more like a postmarket AI recall.

The product recall work here is a must-read. As Lee notes, the EU AI Act empowers European authorities to order “recall” and “withdrawal” of AI systems. Product recalls in other fields stem from a product defect that is repeatedly observable during normal operation or reasonably foreseeable use. What is required is not a showing of scienter or even wrongful behavior, but “a pattern of hazardous defect.”

Lee explains that product recalls may be mandated by regulators, but are also often voluntary, or the result of regulatory nudging. Lee points out that unlike algorithmic disgorgement, recalls in practice occur as an escalating toolkit of remedies: from warning labels and minor repairs, to a requirement that a seller cease production and offer refunds. These escalating levels of recall, she argues, “balance the need to protect consumers from mass harm and the value of having a useful tool available, even if the tool poses some risks.” This is market-level consumer protection reasoning, consistent with the underlying principle she identifies.

The second half of Lee’s article shifts to a more practical critique of the remedies. Lee draws on Katherine Lee et al.’s and Jennifer Cobbe et al.’s important work on AI supply chains to argue that the disgorgement remedy often misses the mark. This is both because many distinct actors may be involved in the creation and fine-tuning of an AI model, and because foundation models may serve as a sort of AI infrastructure (my term, not hers) on which other AI systems are built.

Two of Lee’s astute criticisms stem from these observations: that algorithmic disgorgement often has little impact on the actual wrongdoer, who might be elsewhere in the supply chain; and that algorithmic disgorgement may disproportionately affect innocent third parties, especially those using foundation models in different ways, for different purposes. Lee does, however, acknowledge that a consumer-protection-motivated disgorgement/recall might “be justified in certain circumstances…[i]f the offense is egregious, or the magnitude of the potential harm great.” But “in many instances, this will not be the case.”

I came away from these two great articles knowing a lot more about the substantive law and feeling able to situate it within helpful theoretical framings. I do think both, however, undersold the unique institutional story of the remedy. The FTC’s accelerated use of the disgorgement remedy occurred against the backdrop of its loss of monetary remedies in 2021. Both former Commissioner Chopra and Commissioner Slaughter appear to have served as norm entrepreneurs within the FTC, advocating for algorithmic disgorgement as deterrence. While each article covers their advocacy, neither makes a clear argument that the commissioners may have been constructing a replacement enforcement tool as other tools were taken away. Further, the institutional story entails looking at the FTC as a consumer protection agency. I would have liked to see both authors, but especially Lee, discuss the effect the FTC’s institutional values may have had on the development of and subsequent extension of the remedy.

These articles in my view represent crucial reading, in large part because I suspect that unlike in the European Union, the U.S. approach to AI will largely end up being (or perhaps already is?) primarily postmarket. As the backdrop to settlement negotiations, a motivator for creating AI safe harbors through legislation, or the site of a significant pain point for AI companies, AI disgorgement represents a central regulatory tool in efforts to come.

Cite as: Margot Kaminski, AI Disgorgement or AI Recalls: A Trip down Remedy Lane, JOTWELL (September 3, 2025) (reviewing Daniel Wilf-Townsend, The Deletion Remedy, 103 N. Car. L. Rev. __ (forthcoming 2025), available at SSRN (Sept. 20, 2024); Christina Lee, Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms, 16 U.C. Irvine L. Rev. ___ (forthcoming 2026), available at SSRN (Apr. 10, 2025)), https://cyber.jotwell.com/ai-disgorgement-or-ai-recalls-a-trip-down-remedy-lane