Benjamin Sobel, A Real Account of Deep Fakes, available at SSRN (May 16, 2024).

With the rapid advancement of photorealistic generative AI technology, the problem of sexually explicit deepfakes has grown more urgent than ever. Thanks to widely available AI systems, users can now easily create images that appear to depict real people engaging in sexual acts. Not only have Taylor Swift and other celebrities been targeted, but deepfakes are also now alarmingly prevalent in American schools.

The government has already started to address the problem. At least 26 states now penalize the creation or distribution of nonconsensual sexually explicit deepfake imagery. And the federal Take It Down Act, which creates criminal penalties and a takedown regime for both real and AI-generated nonconsensual intimate imagery (NCII), was recently signed into law by President Trump. But, as Ben Sobel argues in his excellent (and award-winning) new article, A Real Account of Deep Fakes, many of these bans have been passed without first articulating the precise harms posed by sexually explicit deepfakes, leaving the statutes open to free-expression challenges. Sobel’s article aims to fill this gap. Through painstaking comparisons between deepfake bans and other areas of law that regulate deception, abuse, privacy invasions, and obscenity, the article crystallizes the normative arguments for deepfake regulation and the First Amendment stakes.

Beginning with a comprehensive survey of all recently passed or proposed state and federal laws, Sobel identifies several features common to many bans of sexually explicit deepfakes. In particular, these laws typically require that the deepfake be a photorealistic depiction of an identifiable person, prohibit distribution, and do not require intent to deceive or harm. Most importantly, they do not allow a disclaimer to be used to avoid liability.

The fact that these bans hold distributors strictly liable, even if the deepfake images are clearly stated to be fictional, means that we cannot understand sexually explicit deepfakes as purely a defamation problem. Defamation requires a false statement that purports to be fact, meaning a disclaimer can generally be used to avoid liability. Sobel instead turns to privacy law to see whether it offers a better fit. Building on recent work by Danielle Citron, Benjamin Zipursky, and John Goldberg, Sobel notes that the common law privacy torts are also mismatched with deepfake regulation. Some require the disclosure of true information, which deepfakes obviously are not. The tort of false light polices “offensive” distribution of false information, but, like defamation, it requires a falsehood that purports to be factual, so a disclaimer again defeats liability. Privacy law does have ways of preventing the use of another’s likeness without permission, but these too fit deepfake bans unevenly. Claims under the right of publicity are generally limited to commercial uses. And “appropriation”—which Sobel treats as a cousin to the right of publicity that focuses specifically on dignitary harms—generally requires that the appropriation “advantage” the defendant.

Sobel ultimately concludes that deepfake bans are a kind of appropriation regime, but with a different normative core: “Today’s anti-deepfakes statutes redress the injury that appropriation redresses, subject . . . to the offensiveness limitation that appears in the false light tort.” That is, they focus on the “most offensive uses of identity—those that are (a) pornographic and (b) involve the manipulation of persons’ realistic visual likenesses rather than merely the invocation of their names.” The normative basis for deepfake regulation is thus “offensiveness” or “outrageousness,” of the kind that the law recognizes in a variety of areas, but one fraught with First Amendment uncertainty.

The article unpacks the normative and First Amendment stakes of this “offensive appropriation” rationale by turning to an unusual place: semiotic theory, and in particular the work of Charles Sanders Peirce. Semiotics is the study of signs—defined broadly to include words, images, sounds, and gestures—with particular attention to how a sign’s meaning is created and communicated. Scholars have used semiotics in sophisticated ways to illuminate a variety of legal regimes, and Sobel’s work seeks to continue this tradition.

Semiotics distinguishes between two key types of signs: “indices” are signs that point to real-world phenomena (like a photograph) and “icons” are signs that resemble something but do not record reality (like a drawing). Deepfakes, as depictions that do not purport to document reality, are icons—they are closer to drawings than to something like documentary footage. This distinction is not merely semantic: recognizing that the law of deepfakes is fundamentally about the regulation of offensive icons yields interesting comparisons that illustrate the constitutional precariousness of these bans. Sobel’s comparisons include the prohibition on trademark dilution by tarnishment, bans on “morphed” child sexual abuse materials (materials where the image of a child is doctored to appear sexually explicit), and bans on flag and effigy destruction.

Rather than addressing each comparison, I will focus on one example that I think illustrates the value of Sobel’s turn to semiotics: written sexual fantasies. As cases like the notorious “cannibal cop” prosecution demonstrate, the First Amendment generally bars criminalizing written sexual fantasies that involve real people, no matter how disturbing or obscene. But, as Sobel asks, what is the real difference between written sexual content involving a real person and a non-misleading deepfake? Neither is an index: both describe or depict identifiable people without necessarily purporting to document actual events, and both are offensive. Perhaps the visually realistic nature of a deepfake renders it so harmful that a categorical ban would not offend the First Amendment, much as courts have seemed to accept that morphed child sexual abuse materials (also categorizable as icons) fall categorically outside First Amendment protection.

Sobel does not claim to offer a doctrinal solution, but his analysis shows that blanket deepfake bans are, in essence, content-based restrictions on expressive speech. States should be prepared to defend them as such, rather than hiding behind the inaccurate framing of defamation.

This analysis is subtle, and my one quibble is that Sobel could do a bit more to explicitly defend the need for semiotic analysis to make his main points, preemptively addressing those who might dismiss it as conceptual flair. More engagement with the rich literature on law and semiotics might help sway such skeptics. That said, I personally found the use of semiotic theory effective. The article rewards close reading, and Sobel is adept at threading complex social theory through many different areas of law.

Ultimately, Sobel’s work counsels that even dire problems like sexually explicit deepfakes must be addressed judiciously to avoid undermining free expression and other constitutional protections. This is a lesson we would be wise to apply to the other problems posed by generative AI, which have prompted a wave of new and proposed legislation. Many of these problems are serious, but their seriousness should not excuse us from thoughtful analysis of AI’s precise harms or from carefully tailored regulatory solutions.

Cite as: Jacob Noti-Victor, Deepfakes Deconstructed, JOTWELL (July 18, 2025) (reviewing Benjamin Sobel, A Real Account of Deep Fakes, available at SSRN (May 16, 2024)), https://cyber.jotwell.com/deepfakes-deconstructed/.