Government actors across the globe have responded to the rapid uptake of artificial intelligence by adopting or proposing various forms of legislation. For instance, on September 29, 2025, California adopted the Transparency in Frontier Artificial Intelligence Act, which imposes transparency and safety obligations on artificial intelligence companies in the state. Other states, such as Colorado, have also responded by enacting laws addressing artificial intelligence. At the federal level, the proposed Generative AI Copyright Disclosure Act would impose disclosure requirements on artificial intelligence developers that use copyrighted works to train their systems. In 2024 the European Parliament adopted the Artificial Intelligence Act—a comprehensive framework for the regulation of artificial intelligence in European Union countries. Despite domestic and international legislative responses, the rapid rise of artificial intelligence continues to pose significant challenges for several established areas of law, including privacy law and intellectual property law.
In her article AI and Doctrinal Collapse, Professor Alicia Solow-Niederman offers an impressive contribution to both the privacy law and intellectual property law fields by exposing the various pressures placed on these two legal regimes by artificial intelligence. Solow-Niederman contends that artificial intelligence has blurred the boundaries between privacy law and copyright law—a phenomenon she aptly labels as “inter-regime doctrinal collapse.” She convincingly posits that without sufficient intervention, corporate actors will continue to implement “exploitation tactics” to profit from this doctrinal collapse and further undermine the rule of law.
Solow-Niederman observes that because artificial intelligence is often dependent on the use of data and both privacy law and copyright law regulate data, “there is overlapping coverage of the same regulatory object.” The two legal regimes, however, rest on distinct logics. She notes that while copyright law emphasizes a property regime “and the closely related issue of incentives,” privacy law is based primarily on the concepts of control and autonomy. American privacy law focuses on ensuring that individuals can control their data rather than granting property rights in data. America’s largely self-regulatory notice-and-choice model, in which companies provide notice of their privacy practices and individuals then choose whether to consent to those practices, is in keeping with this approach.
Solow-Niederman contends that if “the discrete rules and logics of” privacy law and copyright law do “not remain sufficiently distinct” or “are not legible, then the two domains [will] collapse into one another.” She goes on to identify inherent weaknesses in data privacy law that blur the boundaries between the two legal regimes, and she argues that this doctrinal collapse, enabled by artificial intelligence, facilitates corporate exploitation and opportunism. One example of doctrinal collapse that she identifies occurs when corporations “make claims about the public nature of data to justify data acquisition . . . [but] also make subsequent or simultaneous claims that the data is proprietary.” The fair use doctrine may protect corporate use of public data in the development of artificial intelligence models. In the privacy law context, Solow-Niederman posits, the “same ‘publicly available’ claim removes the material from the reach of information privacy law” because individuals generally do not have a significant privacy interest in publicly available data. This allows companies to exploit ambiguities in the definition of public data and enables “companies to switch between legal regimes in ways that further destabilize” the “doctrinal integrity and normative coherence” of both privacy law and copyright law. Artificial intelligence companies may also invoke users’ privacy interests to resist discovery and disclosure of artificial intelligence-related data in intellectual property litigation.
She then turns her attention to identifying two distinct corporate tactics—“buy” and “ask”—that she contends are problematic. Under the “buy” approach, companies purchase data via licensing agreements in business-to-business transactions involving an artificial intelligence developer and “an aggregator of content.” Solow-Niederman convincingly argues that a “buy takes advantage of limitations and weaknesses in both privacy law and IP law to reduce overall regulatory costs.” For instance, a licensing deal between two corporations permits the transfer of data after individuals have consented to data disclosure via the notice-and-choice model, even if the subsequent use of the data in the artificial intelligence context violates individuals’ privacy expectations. The individual is also excluded from this business-to-business transaction.
Under the “ask” approach, companies weaponize the notice-and-choice model. For instance, an artificial intelligence developer may directly obtain consent from individuals to use their data to train artificial intelligence models via its privacy policies and terms of service. With respect to intellectual property, companies can also “ask” individuals to grant them “a form of copyright license” to use the data via their terms of service. Thus, the “ask” approach can “both limit future exposure to copyright liability and mitigate copyright adjacent social costs,” while allowing corporate entities to acquire the data they need to train their artificial intelligence systems. Additionally, Solow-Niederman contends that the “ask” approach is available to only the few corporate actors that possess a significantly large database of individual users.
This well-written article concludes with recommendations for mitigating concerns associated with doctrinal collapse. Solow-Niederman argues that legal institutions must first acknowledge the presence of doctrinal collapse. Doing so would enable “advocates to pinpoint which regulatory objects are likely to be focal points of contestation.” She also proposes a “conflict of laws inspired” solution. Under this approach, courts that are faced with a dispute involving competing copyright and privacy law interests could adopt “a rebuttable ‘anti-switching presumption.’” Under this presumption, a party would be prevented from asserting “mutually incompatible claims at different points in a lawsuit, absent a sufficiently compelling reason to defeat the presumption.” She also recommends the adoption of regulatory reforms in the privacy law regime to close gaps that facilitate corporate exploitation.
Solow-Niederman’s insightful description of doctrinal collapse in the copyright law and privacy law regimes should be of particular interest to courts, legislators, and scholars in the law-and-technology, intellectual property, and privacy law fields.