May 22, 2025 Scott Skinner-Thompson
For American lawyers, the concept of data protection can seem overly bureaucratic and even a bit obtuse. American legal scholars, in general, prefer to think in terms of privacy, with its manifold methods of potential protection of the liberal individual subject via tort causes of action, criminal law, consumer protection, and, occasionally, some actual command-and-control regulation. In other words, the concept of data protection can—again, particularly for American audiences—seem question-begging: protection of what data, whose data, and from whom? (Clearly the same questions can be and are asked about privacy protections.)
In his recent book, Professor Gianclaudio Malgieri explains why data protection laws matter. The GDPR isn’t merely an annoying consent regime for internet browsing; it can be mustered to protect people along several axes of vulnerability—including their demographics, yes, but also any power imbalance relative to the data controllers. The GDPR isn’t ideal for guarding against vulnerability because it lacks clear and explicit protections for the precarious, and, according to Malgieri, new regimes must be imagined and implemented. But the book’s critically optimistic view helps us see how data protection can be used here and now to guard against vulnerability; in essence, as a form of harm reduction. It is a rigorous book that deftly applies often ethereal (but important) philosophical concepts to a turgid regulatory regime in order to unpack that regime’s anti-subordination potential.
How so? To begin, Malgieri explains that while, on its face, the GDPR seems geared toward protecting an “average” data subject, there is room for consideration of contextual factors that might make the law more attentive to the needs of vulnerable subjects. Drawing from the work of Professor Martha Fineman and others, Malgieri recognizes that vulnerability is not a static concept tied to any specific demographic identities, but is a dynamic one that captures various kinds of power imbalances and intersectional identities. He then documents how European law makes room for the concept of a dynamic vulnerable subject in various contexts ranging from human rights to consumer protection. He believes there is support for incorporating this approach into the interpretation of the GDPR in part because of the GDPR’s solicitude for certain kinds of individuals, particularly children, and particular kinds of information, including so-called special category or sensitive data.
Assuming that is true, Malgieri explains how the GDPR can be interpreted to consider vulnerability both when evaluating whether data processors are complying with their duties as to those individuals and in determining whether individuals have the capacity to take advantage of the GDPR’s consent- and objection-based safeguards. In other words, there may be some hard and fast limits on what data can be processed with respect to vulnerable individuals. In particular, Malgieri sees the data protection impact assessments (DPIAs) required by the GDPR as a fertile space where vulnerability concepts can be implemented with alacrity.
Make no mistake, Malgieri is clear-eyed that the GDPR is no magic wand for protecting vulnerable data subjects. And he recognizes both that his reading of the GDPR’s obligations with respect to vulnerability is aggressive (albeit textually strong), and that the GDPR could be amended to more explicitly capture the plastic concept of vulnerability without making it so flexible that it loses force and meaning. But Malgieri’s book does a truly commendable job of doing what lawyers ought to do: lawyer. It makes strong textual and normative arguments to advance the law toward justice and it does so in a methodical, disciplined, and yet accessible way. It’s a tremendous intervention for all those concerned about anti-subordination in the digital and physical spheres.
Apr 23, 2025 Ifeoma Ajunwa
With her recent article, A Products Liability Framework for A.I., Professor Catherine Sharkey may have silenced at least some critics of artificial intelligence (A.I.) regulation. At the very least, the article stands as a sharp retort to anti-regulation advocates who often crow: “But how can we regulate A.I. when we don’t even yet know the full extent of what it can do or how it will be used?” Sharkey’s proposed regulatory framework, which eschews ex-ante pre-approval strategies in favor of post-market regulatory monitoring, may just be the answer to one of the critics’ favorite regulatory dodges.
Sharkey has the savoir faire to be afforded credence on any A.I. regulation proposal. As both an A.I./ML (machine learning) law scholar and a tort law scholar, Sharkey stands out for having gained enviable access to observe how A.I./ML systems are deployed in the government, and for deploying her admirable analytical skills to dissect those workings. For example, in Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Sharkey, along with other scholars, conducted a rigorous canvass of A.I. use at 142 federal departments, agencies, and sub-agencies. Sharkey et al.’s work in Government by Algorithm has been an inspiration for other scholars taking up the mantle to advocate for guardrails for automated governance.
I found reading A Products Liability Framework for A.I. to be similarly generative for my own thinking about regulatory legal mechanisms, and I believe this article will become canonical for A.I. legal scholars grappling with the challenges of regulating emerging A.I. technologies. First, the Article notes the peculiar regulatory challenges posed by A.I./ML given their adaptive nature. Sharkey observes, “[c]ritics suggest that regulating A.I./ML demands a unique regulatory approach because, as A.I./ML technologies are sent out into the world and encounter new situations, they learn and change in real time.”
The first helpful contribution of the Article is that Sharkey handily demonstrates why A.I. technologies could be considered “products.” She takes as a lodestar the FDA’s stance of governing A.I./ML medical devices as products. Ultimately, she argues for a functional approach, advocating that A.I. technologies should be considered products due to their mass-market distribution and potential for widespread harm, since these are the same underlying public policy concerns that animate products liability law. Sharkey contends that classifying A.I. as a product ensures that liability frameworks remain effective in protecting consumers.
After establishing that A.I. should be considered a product, Sharkey builds her article around the idea that the uncertainty produced by the ever-changing nature of A.I. development and use is neither peculiar to that technology nor an insurmountable challenge to regulation. Rather, other emerging technologies presented the same uncertainty in their nascent years, and they still proved governable.
To Sharkey, the key to those early governance problems was products liability. As she notes in this Article and in previous writings: “Products liability…is a microcosm of how the common law evolves over time to respond to new societal risks—historically, those posed by the automobile, mass-produced goods, digital e-commerce…” Therefore, for Sharkey, it follows that products liability frameworks may also work well for regulating emerging technologies like A.I. She argues that products liability law affords several legal mechanisms: an information-forcing function for safety-related information while more proactive regulatory frameworks are being developed, a liability insurance regime, and the added efficiency of applying the cheapest cost avoider theory.
First, Sharkey argues: “We can draw lessons from historical examples where society faced new and uncertain risks to demonstrate that, even when risks are uncertain or not entirely understood, tort liability can serve an information-production function during a “transitional period” before an ex ante regulatory scheme is in place.” Second, Sharkey notes the role of liability insurance, especially its capacity to produce information and enforce standards to mitigate or prevent harms from A.I. She writes, “Liability insurers can aggregate risk-related information obtained about the expanding universe of policyholders as part of the process of underwriting and premium-setting.” Third, Sharkey believes that the “cheapest cost avoider” theory serves as an effective deterrent. As applied to A.I., the “cheapest cost avoider” framework is less concerned with A.I.’s “Black Box” problem because it is concerned only with reducing the societal cost of accidents. According to Sharkey, “Instead of attempting to attribute each A.I. output to a single party, courts would focus on whether the interactive user or the A.I. developer is in the best position to mitigate or prevent harms.”
The cheapest cost avoider rationale is firmly grounded in the torts literature and was proposed by Professor Guido Calabresi in his groundbreaking book, The Costs of Accidents. Yet Calabresi and Smith also provide something of a warning: “But what is ‘cheap’ and what is ‘costly’ itself derives from the tastes and values of society, which can be influenced by the current set of civil wrongs. This reverse link, which is sometimes missed, may well represent the future of tort law.” This quote demonstrates how what is allowed by law (i.e., the parameters of civil wrongs) may then come to determine the values of society, i.e., what is socially acceptable. In the context of A.I. regulation, we should be attentive to how products liability, as a method of regulation, may come to define what A.I. technologies corporations will develop for society.
Thus, as admirable as I find Sharkey’s intellectually nimble analyses comparing emerging A.I. technologies to prior emerging technologies regulated by products liability, I must note one concern. Sharkey argues that her proposal aims to balance innovation with consumer protection. I understand her instinct. However, some scholars take issue with regulation being posited as adversarial to innovation and consider the foregrounding of innovation in the governance conversation to be a regulatory dodge in disguise. Given that, as Calabresi and Smith note, what is cheap and what is costly depends on the “tastes of society,” we should question what an innovation-centric paradigm means for A.I. regulation. As Andrew Selbst concluded in Negligence and AI’s Human Users, “[w]here society decides that A.I. is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of A.I.’s use.”
Is product liability law malleable enough to identify and quantify the harm to all victims of A.I. use? Tort law at its base relies on quantification. There is no recovery for damages if a plaintiff cannot quantify the harm. Thus, products liability may not compensate for reputational and representational harms, which are often future or speculative in nature. Consider that privacy law scholars are still valiantly attempting to quantify the harms of privacy violations and that A.I. technologies introduce new opportunities for privacy violations. Even if the harm can be quantified, given that A.I. is being developed by multinational corporations with a deep bench of lawyers and even deeper pockets, is the financial asymmetry too great for any consumer of A.I. to be protected by products liability? My deep worry here is that although Sharkey has presented a noble effort to start to corral the dangers of A.I. innovation, A.I. developers may seize on it as carte blanche to push what they consider A.I. innovation at high cost to human life – what I would term a “break things and pay damages later” approach.
But what is the alternative? I underscore here that Sharkey has positioned her proposed legal framework as a stopgap rather than the end goal of A.I. regulation. I find her proposal, then, to be a highly creative and ultimately useful temporary solution. Turning to the question of what should be the ultimate objective of regulation, I would argue for a reimagining of our legal principles vis-à-vis the responsibility of corporations. To be more precise, effective regulation of A.I. technologies will hinge on finding a definitive answer to a longstanding jurisprudential question: How can we expect corporations to evince true corporate responsibility towards society at large while holding on to the shareholder primacy principle?
Mar 26, 2025 Tal Zarsky
There seems to be a budding consensus among tech pundits and stakeholders: The EU has solidified its role as a leader in one ICT sector—regulation. EU regulation is a growing industry in itself. However, such regulation may not necessarily be beneficial for business and technological progress. Professor Bradford, a leading expert on EU law and its international influence, agrees with the first two statements, but not necessarily with the third. She challenges (and ultimately rejects) the intuitive argument that excessive ICT regulation is responsible for the EU’s innovation lag in this sector. In making her claim, she maps out the many impediments to ICT innovation in Europe, identifying numerous factors beyond the content of regulation, such as its complexity, as well as underdeveloped capital markets, ill-suited insolvency laws, and the inability to attract and retain talent. Or, to paraphrase J.F.K.: Bradford explains that the EU’s ICT innovation failure has many fathers. Bradford thus argues that the link between regulation and the lack of innovation is weak and that there is no real lesson here for U.S. regulators and lawmakers contemplating tech-related policy.
To illustrate the weak connection between innovation and regulation, Bradford begins the article by outlining the U.S.’s centrality in the ICT sector. She highlights the dominant brands like Google, Meta, Microsoft, Amazon, and Apple that shape contemporary life and discourse, as well as the extraordinary wealth these firms have amassed. The article then examines the U.S.’s tech-friendly regulatory environment, particularly the relative immunity provided by Section 230 of the Communications Decency Act (as part of a broader notion of promoting free speech) and the absence of comprehensive federal privacy legislation.
Bradford argues that this regulatory landscape was shaped by persistent lobbying efforts and an overarching U.S. policy commitment to “free market ideals.” She then systematically reviews key European laws and regulations affecting the tech industry—the General Data Protection Regulation (GDPR), Digital Markets Act (DMA), and Digital Services Act (DSA), among others. Beyond providing a thorough analysis of lobbying positions on these issues (and their influence on political discourse), Bradford acknowledges scholarly perspectives (including my own) that have suggested a possible connection between the EU/U.S. regulatory divide and the U.S.’s undisputed leadership in the tech market. The article subsequently engages with broader scholarly discussions on the relationship between innovation and regulation, while striving to prove that heavy regulation in the EU is not the key reason for the continent’s innovation lag.
Bradford then delves into specific regulatory domains and their relationship with innovation: privacy, antitrust, and AI. I will set aside the antitrust discussion, as Bradford’s argument that antitrust enforcement is crucial for innovation is fairly well-established. The relationship between privacy regulation and innovation, however, is more complex. Here, Bradford focuses on how the GDPR may hinder innovation by imposing high compliance costs. But she also explains how privacy laws “have the potential to alter innovation pathways” in different directions, some of which lead to the introduction of privacy-enhancing tools. At the same time, they might also “increase social innovation” (i.e., enhance social welfare as opposed to mere corporate wealth). She applies a similar analysis to the tensions between AI innovation and AI regulation. Subsequent research might consider linking these discussions of technological and social innovation and examine the impact of privacy laws on AI innovation (for a recent exploration of this connection, see Dan Solove’s recent article). The effect of the EU’s privacy laws (in the form of data usage restrictions) on AI development within the continent is likely to be significant.
If tech-related regulation is not the primary reason for the EU’s ICT lag, what is? Bradford identifies several alternative explanations for Europe’s limited technological leadership. Yet before doing so, the analysis presents three particularly insightful arguments, all aimed at weakening the assumed correlation between regulatory intensity and innovation constraints. First, she argues that EU regulation was not significantly different from that of the U.S. until 2010. Yet even during this period, Europe failed to produce ICT leaders, suggesting that other factors are at play. Second, she emphasizes that EU regulation serves dual objectives—protecting rights and fostering the internal EU market—and that the latter goal should, in principle, promote innovation and counterbalance regulatory burdens. Third, she points out that GDPR enforcement has predominantly targeted U.S.-based companies, with little evidence that innovation in these firms has been stifled as a result. While each of these claims can be countered, they are intriguing as part of the ongoing debate.
The final section of the paper examines structural impediments to innovation in the European tech sector, including:
- Regulatory complexity rather than regulatory stringency – Bradford notes that the key issue is not necessarily the severity of regulation but rather the complexity arising from fragmentation across member states and the absence of a true “Digital Single Market.”
- Limited capital markets – Europe faces a shortage of venture capital funding for startups, as well as limited government investment in military-driven technological innovation (more on this later).
- Punitive insolvency laws and a risk-averse culture – European legal frameworks discourage entrepreneurial risk-taking.
- Inability to attract and retain global talent – This challenge is compounded by higher salaries in the U.S. and stronger institutional connections between academia and industry.
Summarizing these points, Bradford states: “Identifying these alternative explanations does not support an argument that all European tech regulation would enhance welfare and that digital regulations could never adversely affect innovation and slow down technological progress…”. She ultimately calls for a more nuanced discussion on the benefits and drawbacks of tech regulation.
This is a particularly timely article and discussion. Since its (very recent) publication, much has already changed. In the U.S., the new administration appears to be shifting even further toward a pro-business trajectory, particularly regarding the tech industry. Thus, substantial regulation of the tech sector seems unlikely, except for some rules prohibiting certain forms of content moderation and censorship. Meanwhile, Bradford’s discussion of U.S.-China tensions has become increasingly relevant given the recent success of Chinese AI ventures like DeepSeek. Bradford’s intervention serves as an important reminder that competition in the ICT space is also coming from China. Consequently, the importance of the discussion Bradford chooses to promote has significantly grown. I recommend keeping the potential AI- and tech-related competition coming out of China constantly in mind when considering the arguments noted above. I also call attention to Bradford’s latest book, “Digital Empires,” as an important source to acknowledge when reviewing the central approaches to regulating the digital economy, and the differences between them.
Bradford’s work is currently being supplemented by a growing body of scholarship examining the impact of the GDPR on innovation. Even if one accepts Bradford’s argument that the EU’s lack of ICT leadership is not directly attributable to privacy regulation, the GDPR’s enactment presents a valuable opportunity for empirical analysis: a natural experiment (though the absence of a “control group” limits the robustness of such findings). The GDPR serves as an instance in which an additional regulatory burden was introduced, allowing researchers to compare innovation trends before and after its implementation. Scholars have already started researching this question, but their findings remain inconclusive: some studies indicate lower levels of innovation in cutting-edge projects but increased innovation in more compliance-oriented sectors. Yet others show a “variety of partly countervailing effects” (see also a review of these discussions in Esra Damir’s dissertation, Chs. 3 and 7).
Looking forward, the introduction of strict EU regulations will allow for closer examination of the potential causal links between regulation and innovation (or the absence thereof). These developments may also facilitate a more precise assessment of the actual costs borne by EU citizens because of stricter data protection and information privacy laws, as well as the broader economic implications of such policies. Bradford’s paper provides a great start, with lots of thoughtful ideas and important facts for those striving to gain a deeper understanding of the reasons for the EU’s ICT innovation standing, including those related to various regulatory realms. Following its lead, questions regarding the connection between innovation and regulation will surely generate additional academic interest in the years to come.
Mar 3, 2025 Stacy-Ann Elvy
Increasingly, attorneys use various generative artificial intelligence (AI) tools in the practice of law. These tools purport to provide targeted answers to specific legal questions, and they can be used to facilitate review and drafting of legal documents as well as aid in due diligence assignments, along with various other legal tasks. In response to the rapid rise of generative AI tools in the legal profession, state bar associations have published recommendations on the issue. For instance, in 2023, the California State Bar Association issued practical guidance to attorneys on generative AI in the legal profession. Florida followed suit by issuing an advisory opinion on the topic. Similarly, the American Bar Association also released a formal opinion on generative AI tools in 2024.
In her article, Rule 11 is No Match for Generative AI, Professor Jessica R. Gunder offers an impressive contribution to both the law-and-technology and civil procedure fields by exposing the limits of Federal Rule of Civil Procedure 11 in addressing “fictitious cases and false statements of law” that arise from attorneys’ use of generative AI. Gunder convincingly argues that although courts have used Rule 11 to sanction attorneys who fail to conduct sufficient legal research, Rule 11 cannot adequately regulate this behavior in the generative AI context. She goes on to contend that Rule 11’s inadequacies have likely led a growing number of courts to issue standing orders to directly address attorneys’ misuse of generative AI in legal proceedings.
Gunder begins the article with a valuable description of the features associated with generative AI in the legal profession. She documents and critiques well-known cases in which attorneys improperly used generative AI tools. Gunder offers possible explanations for lawyers’ unprofessional use of generative AI, including their failure to understand the technology and “evaluate the work product” produced by the technology.
Gunder then turns her attention to Rule 11. After providing a brief overview of Rule 11’s history, scope, and objective, Gunder contends that attorneys and litigants can violate Rule 11 by filing legal documents that contain an inaccurate representation of law or that “do[] not contain key cases.” However, she argues that even before generative AI, courts already encountered significant hurdles in attempting to determine whether to impose sanctions for failure to perform sufficient legal research, including the problem of identifying “how much research is enough?” Due to these difficulties, she posits that courts are reluctant to sanction attorneys for inadequate research. Given attorneys’ ethical obligations, courts are more likely to impose sanctions when there is “an intentional failure to disclose controlling legal authority” or if the conduct “is repeated, particularly after a court has informed the attorney of their error” or “involves misrepresenting or changing the holding of a case.”
Gunder goes on to argue that due to sanction requirements, Rule 11 will largely be ineffective when a litigant or attorney erroneously relies on generative AI and submits legal documents to a federal district court that contain “fictitious cases and false statements of law.” Gunder posits that Rule 11 is not intended to cover all bad faith conduct in a lawsuit and that the rule “cannot be used to sanction oral misrepresentation and testimony.” She argues that cases involving litigant or attorney misuse of generative AI often stem from “lack of knowledge of how generative AI works and its propensity to hallucinate” and while such conduct is perhaps negligent, it does not rise to the level of “contempt or subjective bad faith” for purposes of imposing Rule 11 sanctions.
The well-written article concludes with an examination of standing orders dealing with attorney misuse of generative AI and recommendations for courts moving forward. Gunder argues that standing orders may encourage litigants to refrain from filing legal documents in court that contain inaccurate statements of law or fabricated cases. Moreover, “they may make it easier for a court to find that a litigant violated Rule 11 and impose sanctions.” Despite these potential benefits, she contends that poorly drafted standing orders may discourage litigants and attorneys from adopting and implementing new technology and that “a patchwork of standing orders” issued by different courts may lead to inconsistencies.
Gunder suggests that courts should effectively balance the benefits and risks she associates with generative AI standing orders. She also posits that courts should be reluctant to adopt “an anti-technology tone” in standing orders, to avoid deterring parties from adopting generative AI and the appearance of “judicial bias.” She advocates for the use of the federal district court local rules process authorized by Federal Rule of Civil Procedure 83 to remedy concerns associated with inconsistent standing orders. Gunder’s insightful description of the current use of generative AI in civil litigation in federal courts should be of particular interest to courts, practitioners, and scholars in both the law-and-technology and civil procedure fields.
Feb 6, 2025 Ari Waldman
Over a year before the Supreme Court’s conservative supermajority overturned Roe v. Wade, the Texas legislature passed SB 8, which banned all abortions after six weeks. At the time, fetal heartbeat laws like SB 8 were invalid because Roe and its progeny barred the use of state power to restrict access to abortion services pre-viability. So Republicans in the Texas legislature, supported by the work of anti-abortion movement lawyers, came up with a workaround. SB 8 deputized private citizens to surveil on the state’s behalf and authorized them to bring private civil lawsuits against anyone who provided or facilitated an abortion after six weeks. SB 8 is a perfect and heinous example of what Sarah Brayne, Sarah Lageson, and Karen Levy call “surveillance deputies.”
In Surveillance Deputies: When Ordinary People Surveil for the State, Brayne, Lageson, and Levy define surveillance deputies as “ordinary people us[ing] their labor and economic resources to engage in surveillance activities on behalf of the state.” From one perspective, surveillance deputies are paradigmatic of the engaged citizen: “If you see something, say something” is not, in this understanding, a McCarthyite or totalitarian slogan encouraging tattling and ratting on neighbors. Instead, it’s a message about what constitutes good citizenship. Good citizens speak up and keep everyone safe. From another perspective, however, surveillance deputies are decidedly sinister. The connection between speaking up and keeping everyone safe implies that those listening to surveillance deputies have the best interests of citizens in mind. That is far from a sure thing. Surveillance deputies expand the power of the state and sometimes do so for the mere sociopathic reward of seeing someone else harmed.
At a minimum, surveillance deputies are a conundrum. How are we to understand the people who surveil and the institutional alliance between state power, the surveillance-industrial complex, and ordinary citizens? Brayne, Lageson, and Levy, who contributed equally to this outstanding and insightful article, propose four hypotheses for describing the functions and implications of surveillance deputization: interest convergence, legal institutionalization, technological mediation, and social stratification. Let’s break those down.
By interest convergence, the authors mean that surveillance deputization works best when states and citizens have aligned interests and benefits. For instance, under SB 8, someone could report an abortion provider in Texas for the chance to win $10,000 per incident, or because they hate abortion, or because they have a grudge against a doctor. For private deputies and the state, which wanted to end abortion in Texas, it was a win-win (a lose-lose for just about everyone else, but that’s a different JOT).
Surveillance deputization can also be catalyzed by law and its loopholes. Fourth Amendment case law holds that deputizing an individual to do surveillance work means that the state can avoid many of the constraints typically imposed on state surveillance. Given the access we have to vast amounts of surveillance content, this kind of exception to the Fourth Amendment’s warrant requirement may soon make the provision practically meaningless.
In addition to aligned interests and legal loopholes, profit lies at the foundation of much surveillance deputization. Big tech companies like Amazon and myriad small startups develop and market surveillance technologies to capitalize on people’s fears: fears about the “other”, about what will happen to their children, about anything. The information industry has pushed the notion that ordinary citizens should be monitoring everything, keeping an eye on what’s going on outside their door, by creating the very tools that give people those capabilities. Then they can sell advertisements on their surveillance apps. Having molded citizens into both consumers and avid spies, industry takes the resulting massive treasure trove of data and enters into lucrative contracts with state bureaucracies of violence to provide that data for the state’s use. As one former employee of the company that makes the Citizen App admitted, “The whole idea behind [the Citizen app subscription service] is that you could convince people to pay for the product once you’ve gotten them to the highest point of anxiety you can possibly get them to.” Create surveillance, stoke fear, cash in.
Finally, surveillance deputization may increase or disrupt social inequalities. The former is typified by the Victims of Immigrant Crime Engagement (VOICE) hotline. VOICE was set up by Donald Trump to let people call in and report what they thought were crimes being committed by immigrants. Since no one knows anyone else’s immigration status from afar, this hotline was basically an opportunity to report people of color to ICE. At the same time, technologically mediated surveillance allows citizens to surveil the state and its agents when they engage in racist or discriminatory behavior. In fact, the deputization of surveillance in a technologically driven world opens up a natural path for resistance: mess with the tech. Instead of leaving SB 8’s reporting website to anti-abortion fanatics, someone created a bot that submitted false reports every 10 seconds, overloading the platform and undermining the entire reporting structure. A similar thing happened to VOICE.
Surveillance Deputies highlights underappreciated aspects of the deeply symbiotic relationships between technology and state power. The authors give many examples—some good, some bad, and some ugly—of surveillance deputization beyond SB 8. AMBER alerts engage communities to assist in searching for missing children. The Amazon Ring doorbell camera may be user-installed, and the associated Neighbors app gives individual customers the chance to upload videos of what they see as “suspicious” activity, but the state regularly accesses Neighbors app data. And, of course, there’s VOICE, a way to turn every person who doesn’t look like you, whom you’re scared of, or whom you don’t like into an alleged criminal.
But the litany of examples raises one lingering question: When are we as citizens not surveillance deputies? Our labor and economic resources power almost every tool that is data driven: Google Maps, social media, targeted advertisements, and more. As it functions today, much digital infrastructure would collapse if users stopped contributing their own labor (for free) to multibillion dollar technology companies. Perhaps it is time for our own brand of resistance.
Dec 11, 2024 Orla Lynskey
The role of private digital infrastructure providers in shaping the exercise of civil liberties in the digital sphere, and the role the law plays in facilitating this power, have been the subject of debate in recent years. Relatively less attention has been paid to the impact these ‘new governors’ have on the delivery of public services. As the State becomes increasingly dependent on privately provided AI systems, there is a real risk that public values (such as participation, transparency, and accountability) will be weakened. Historically, procurement rules have been used to ensure that public-private partnerships align with public objectives and values. Many lawyers, myself included, therefore surmise that when the State buys AI systems to assist with the delivery of public services, public procurement law will act as a constraint on the power granted to private operators by the arrangement.
In Responsibly Buying Artificial Intelligence: a ‘Regulatory Hallucination,’ Albert Sanchez-Graells clinically dispels such misplaced faith in procurement law, labelling it a ‘regulatory hallucination.’ Like AI hallucinations, this type of regulatory hallucination is ostensibly plausible but ultimately incorrect, leading to immediate tangible consequences (such as the mass harm that resulted from the Australian government’s wrongful demand that welfare recipients pay back benefits based on the Robodebt system). While Sanchez-Graells’ primary analytical focus is the UK, where under the National AI Strategy, public buyers are expected to ‘confidently and responsibly procure AI technologies for the benefit of citizens’, the logic of the argument applies also to other jurisdictions.
The main thrust of the article, and of the book which further expands on its claims, is that the public buyer is badly placed to act as a public sector digital gatekeeper and self-regulator. According to the UK Government’s AI policy, AI should conform to high-level principles including fairness, accountability, contestability, and safety (amongst others). Responsible AI procurement therefore requires public buyers of AI to translate these substantive requirements into tractable contractual terms, which Sanchez-Graells terms AI ‘regulation by contract’. While there has been some positive experience in the UK of using procurement to achieve societal goals (such as environmental protection), this has not been an unmitigated success. More importantly, Sanchez-Graells illustrates how two assumptions underpinning the presumed effectiveness of regulation-by-contract simply do not hold true in the digital context.
The first assumption is that AI regulation-by-contract can act as a two-sided gatekeeper disciplining the behaviour of both the tech provider and the public sector user of AI (for instance, a Welfare Department). However, as Sanchez-Graells illustrates, agency theory assumes the opposite: that a procurement arm of government acts as the agent of public users such as Welfare Departments, rather than as a constraint on them. A role reversal where the public buyer (the procurement arm) must act as gatekeeper of the public user (the Department) rather than its agent leads to internal governance challenges that procurement law is not equipped to resolve. If, for instance, the procurement arm is institutionally embedded within the organisation that will use the AI, it is unrealistic to think that the principal-agent relationship will be reversed to enable oversight. Furthermore, the ‘decentred interactions’ between the public sector AI user (the Department) and the tech provider may mean that they can jointly shape the effective deployment of AI systems in a way that escapes the influence of procurement law. This may be because of timing (procurement law primarily bites prior to the entry into force of contracts) and the tools available to public procurers (in terms of their technical expertise, for example).
The second assumption that the article challenges is that, where there is AI regulation-by-contract, the public sector acts as the rule-maker with the tech provider as a rule-taker. Sanchez-Graells emphasises how, in the absence of detailed public guidance on how to implement AI principles, the public buyer is funnelled towards private standards. Dependence on such private standards to substantiate fundamental rights has been criticised in the context of the EU AI Act. The risk is one of regulatory tunnelling, where decision-making power is displaced from the public buyer to the tech provider. The tech provider has the capacity to translate the contract’s requirements into technical and organisational measures based on industry standards (where they exist) or its own preferences (where they do not). Sanchez-Graells also points to the risk of industry shaping standards for commercial gain where regulatory goals are difficult to define or incommensurable.
Following this bleak assessment of the potential for procurement to shape the public use of AI systems, Sanchez-Graells makes the case for institutional reform and the creation of an independent regulator for public sector AI use. This regulator would prevent the public sector from deploying technological solutions that breach fundamental rights and digital regulation principles and would also be tasked with avoiding regulatory capture and commercial determination. To achieve these aims, it would require independence and digital capability. The new regulator would also set mandatory requirements for public sector digitalisation through standard certification and deployment authorisation.
Irrespective of the political feasibility of this institutional reform, through the preceding analysis Sanchez-Graells leaves the reader in no doubt that lawyers concerned with public values and private power should be attentive to procurement law. Procurement law may well be the next legal framework to legitimise the expansion and entrenchment of private power in the digital environment, albeit this time at the direct expense of public power.
Nov 11, 2024 Rebecca Tushnet
Sometimes, it’s the small details that hobble even the most easily explained policies. When California decided to expunge felony records for marijuana offenses, relief for former felons was hampered by a lack of comprehensive recordkeeping and reliance on proactive individual action (the expungement wasn’t automatic; you had to ask for it). These and similar stumbling blocks can be weaponized by opponents, as occurred with the restoration of voting rights to felons in Florida. It’s a technological spin on the well-known legislator’s warning, “If I let you write the substance and you let me write the procedure, I’ll screw you every time.”
In Recoding America, Jennifer Pahlka argues that there doesn’t even have to be a bad guy on the procedure side for this to happen. This is a book by a technocrat with a persuasive argument for a measure of technocracy: America’s ways of lawmaking could be greatly improved by borrowing from the project management concept of agile development, which allows people lower in the hierarchy to make consequential decisions rather than requiring that all the rules be specified in advance. The latter approach, “waterfall” development, can lead to (sometimes literally) deadly complexity and policy failure. When policy is too rococo and reticulated, such as having nine different definitions of a “group” of doctors for Medicare purposes, throwing money at the problem rarely helps. Neither do outsourcing and oversight, both of which Pahlka believes can help when properly deployed but often end up generating more layers of bureaucracy.
Pahlka argues that teams building tech to implement a government policy should have the authority to alter it as they go. They should build something that works at least a bit as soon as possible, shift edge cases to human review, and automate the easy stuff rather than building software that’s supposed to accommodate all possible situations. The worst part of directives from above, from her view, is that “nowhere in government documents will you find a requirement that the service actually works for the people who are supposed to use it. The systems are designed instead to meet the needs of the bureaucracies that create them—they are risk-mitigation strategies for dozens of internal stakeholders”—even though they also fail at that regularly.
The failure of service and benefit systems is a special problem for government because people interpret their experiences with bureaucracies as evidence of how government works more generally. Involvement with the criminal system, getting a construction permit, or filing taxes can be unpleasant enough that it erodes faith in government and deters political participation. (This observation suggests that making voting complex, as when absentee voters are required to use two envelopes and sign and date one of the envelopes, doesn’t just disenfranchise individual voters—the ultimate effect is to deter citizens from even trying.) Unfortunately, the problem is also worse in government because obsolete tech is paired with obsolete policies—not just obsolete, but accreted over rounds and layers of attempted reform, which is how you get those nine definitions of a “group” of doctors in Medicare.
Pahlka compares current policymaking frameworks to waterfall development in software, where directives come from above. In waterfall development, new data is used only to grade performance after the fact. “For people stuck in waterfall frameworks, data is not a tool in their hands. It’s something other people use as a stick to beat them with.” Naturally, they aren’t that interested in collecting ammunition against themselves. In addition, “[e]ven when legislators and policymakers try to give implementers the flexibility to exercise judgment, the words they write take on an entirely different meaning, and have an entirely different effect, as they descend through the hierarchy, becoming more rigid with every step.” She gives numerous examples.
One thing that Pahlka suggests might be fixable is policymakers’ cultural contempt for implementation. They think/hope/expect/imagine that if they write the right rules, everything will be fine. But it isn’t and won’t be. Pahlka criticizes the Administrative Procedure Act rulemaking process that most of government uses, because it essentially invites and requires interest group lobbying for every rule. The required process is more like a jury trial than an expert evaluation. Leftists, she argues, got really good at suing the government to stop bad stuff, but that contributed to an environment of risk aversion at agencies (and didn’t stop the Supreme Court from harming agency power anyway).
At points, Pahlka is pretty clear that there are some no-win scenarios here: Equity usually requires data, which requires paperwork, which favors the powerful. So, what is to be done? One very concrete recommendation from Pahlka is to focus on making things simpler for most people and devote human resources to the tougher situations. She argues that new programs should be launched when they’re ready to handle 85% of the cases, though the edge cases should be addressed technologically eventually. In reality, she points out, policies are launched incrementally anyway, because the systems built under current processes don’t work for a lot of people. Waterfall policymaking merely ensures that rollouts are incremental in the worst possible way.
As one person quoted in the book says of welfare applications, “Every time you add a question to a form, I want you to imagine the user filling it out with one hand while using the other to break up a brawl between toddlers.” Documentation should be required only when needed and responsive to actual circumstances. Sadly, Pahlka gives short shrift to the idea of simply removing eligibility constraints for services and benefits, whether that’s sending money to people with kids or implementing universal health care. More universal safety nets could lead to lots less waste and failure in the administration of exceptions.
For an example of successful agile development in government, Pahlka points to free COVID tests—not for nothing, a universal policy. The rule was that each unique address could order only a certain number of free tests. Initially, the postal service just asked for a requester’s address. But it turned out that, occasionally, “one apartment dweller requesting tests would blacklist other units in the same building.” This wasn’t a programming error. It was a problem with the post office’s records, which hadn’t been updated to reflect division of a building into apartments, even though individual mail carriers were compensating for that when they walked their routes. To compensate, the team added a process involving human review of edge cases, whereby an individual could fill out a short form appealing a denial as error. Pahlka acknowledges that this process disproportionately burdened lower-income individuals. But it also cleaned up around two-thirds of the residential address database as a result.
Even though it sounds scary and even undemocratic to have a random technologist embedded deep in the hierarchy making important distinctions, Pahlka argues that it’s necessary for the success of the actual intended outcomes decided upon by elected representatives. When no one can go ahead and make decisions about how a program should work, but lots of people have the power to add requirements to it—as is now the case—you get lots of paperwork and few good outcomes. Good product management can “reimagine representation and voice so as to honor the values our government is supposed to be founded on.”
To further improve things, Pahlka argues that the government should spend money improving its human resources, especially at the levels of program management/operating expenses. Oversight should ask less about whether a team stuck to a plan, and more about what the team learned in implementation and what user tests are showing now.
There are some things this book misses. Pahlka, who’s not a lawyer, doesn’t suggest that new laws should explicitly allow regulators to easily simplify and even eliminate earlier categories and rules. There are certainly reasons why we don’t do that. For example, lots of systems rely on past categories and rules, and changing them could cause a cascade of incongruities. But the accretion of legal complexity keeps making things worse. In my view, it would be a worthwhile exercise to try to write a law that actually allowed agile development of implementation policies—and then probably a painful exercise to see what happened when courts got their hands on it.
Give people leeway to implement the intent of the overall policy, Pahlka suggests, and you can avoid the layers of bureaucracy that stymie well-intentioned attempts at reform. While there’s merit in the argument, she doesn’t give a lot of weight to the reasons that policymakers try to be comprehensive. Although the perfect shouldn’t be the enemy of the good, it’s also the case that if you get a policy running that works for 90% of people, the 10% excluded are likely to share some demographic characteristics, and historically the policy is unlikely to be revisited to fix it for them. That result is usually worse when it comes from government than when it comes in private software. But if the perfect is not to be the enemy of the good, then oversight that focuses on implementation success, and flexibility to keep working for that 10%, are the proper solutions.
Oct 8, 2024 Natalie Ram
Following the Supreme Court’s decision in Dobbs v. Jackson Women’s Health, pregnant women seeking to terminate a pregnancy, and medical providers who care for them, have found themselves increasingly subject to invasive law enforcement scrutiny in many states. For instance, while many states’ anti-abortion laws permit abortion if the pregnant person was the victim of a sexual assault, many of these laws require that physicians verify that the sexual assault was reported to law enforcement. The exception thus compels physicians to serve as handmaidens to the police.
Yet the abortion context is hardly the first or only one where policing has thrust itself into medical practice. As Teneille R. Brown observes in her new article, When Doctors Become Cops, from gender-affirming care, to prescription drug monitoring programs, to law enforcement demands for DNA samples from hospital staff, policing often encroaches on patient privacy. These intrusions generate medical mistrust that undermines both individual and public health. Moreover, this medical mistrust is likely to exacerbate inequities in population health, as police mistrust is at “record highs” and structural inequities are present in “virtually all aspects of the criminal legal system.” Brown persuasively argues that “[t]o respect patient autonomy, repair medical mistrust, and promote individual and public health,” “law enforcement and health care need to be more completely divorced from one another.”
Brown begins by demonstrating that medical mistrust is a social determinant of health that can only be worsened by injecting policing further into medical relationships. Observing that trust is “vital” to clinical care and that mistrust is a “major barrier to a strong patient-clinician relationship,” Brown cautions that medical mistrust is already a persistent concern across the modern American medical system. Under the common American fee-for-service medical model, “patients and physicians have precious little time to build trust,” as hospitals and physicians are paid “for doing things, but not for talking about whether and how to do things.”
Brown then explains how medical mistrust negatively affects health, both individual and population-wide. Medical mistrust “leads patients to refuse prescribed medications, to miss cancer screenings, to not see their doctor for regular visits, to discourage others from seeking treatment, to not share sensitive medical information with their providers, and to be less likely to comply with the prescribed treatment or health care plan.” Moreover, medical mistrust and its consequences are even worse for already marginalized communities. Medical history is replete with injustice and mistreatment based on race, sex, and other characteristics. Modern medical practice is often no better. As Brown observes, “[i]nfant mortality for Black babies is higher now than it was during the antebellum period.”
Turning to policing practices, Brown explains that there are “few legal hurdles” preventing law enforcement from accessing or using confidential patient data. Fourth Amendment law, which could act as a robust barrier to access, has instead often bent to law enforcement demands. Even if the Fourth Amendment were a hardier guardrail, it is practically triggered only by introducing medical data at trial. Prosecutors might simply avoid doing so, where possible, leaving many privacy violations unremedied. In the case of DNA identifications, police might use medical data from a suspect’s genetic relatives, rather than that from the suspect himself, in an effort to immunize their investigation from Fourth Amendment scrutiny. (In my own work, I have argued that this kind of end-run should not negate a Fourth Amendment claim.) Even where courts conclude that police conduct has violated the Fourth Amendment, moreover, courts often deny exclusion of tainted evidence anyway, based on officers’ “good faith” misunderstanding of the law.
Statutory protections for medical data, including both state privacy laws and the federal HIPAA Privacy Rule, are equally unavailing. HIPAA permits medical providers to share otherwise-protected health information with law enforcement in response to as little as an “administrative subpoena”—a statement police write themselves, without any judicial oversight. Medical providers receiving such requests are likely to cooperate with them, given messy Fourth Amendment law and the power dynamics exerted by (often armed) police.
Brown then argues that it is essential to protect the “culture of medicine” from the “culture of policing” because the two have very different norms regarding self-regulation, privacy, accountability, efficacy, honesty, autonomy, and trust.
With respect to self-regulation and accountability, while physicians “extensively self-regulate through governing bodies and professional associations” and “are frequently civilly sued and held accountable for malpractice,” police “rarely hold themselves accountable for the violence that they perpetrate, which is often not just careless, but intentional.” Legal doctrines like the “public duty doctrine” and qualified immunity also often shield law enforcement from liability for violence they could have prevented. By contrast, legal accountability for medical providers has expanded, following the famous Tarasoff case, often to require medical professionals to breach patient confidentiality to warn of “imminent risks to third parties.”
Similarly, with respect to privacy, while every state has codified a physician-patient privilege, there are no confidentiality norms or legal requirements for information shared with law enforcement. Police have tapped clinical laboratories, biobanks, and newborn screening programs for biological samples used to identify or confirm the identity of a criminal suspect.
With respect to efficacy, law enforcement doles out pseudo-medical interventions that are largely ineffective, substandard, and not evidence based, like administering ketamine for “excited delirium” (a highly contested diagnosis) or acting as first responders more broadly. “Treatment courts,” which divert offenders to addiction or other treatment programs overseen by courts, often rely on underregulated programs and clinics and blend punishment with disease treatment. By contrast, medical norms require evidence of safety and efficacy before treatments should be offered to patients.
Regarding honesty, modern medical ethics “universally condemn” deception, while police routinely use deception to get witnesses to cooperate or obtain evidence. The two professions’ approaches to autonomy diverge in a similar way. Medical providers seek to equip patients to make decisions consistent with the patient’s values. Meanwhile, legislatures charge law enforcement to criminally enforce moral judgments across the population, as in the context of abortion regulation or the banning of gender-affirming care.
Finally, Brown returns to trust itself, showing that prescription drug monitoring programs (PDMPs), which track prescriptions and patient requests for controlled substances like opiates, “place law enforcement between a patient and their physician and can violate the trust between them.” Physicians, fearing law enforcement oversight, may under-prescribe needed pain medications, and patients may sensibly view their doctor as an extension of law enforcement when the doctor “check[s] a police database to see if the patient is telling the truth.”
Brown concludes by identifying five strategies to protect medical care from cooptation by law enforcement. First, the federal government should amend HIPAA to make it more difficult for law enforcement to obtain medical data. Second, courts should reconsider Tarasoff-style duties to warn third parties, which have had the perverse effect of “tak[ing] the very thing that makes health care special—confidentiality and patient trust—and exploit[ing] it in a way that harms not only public health, but also medical ethics.” Third, physicians must have autonomy to practice medicine in an ethical manner, something the criminalization of abortion care, for instance, has undermined even where the standard of care is clear. Fourth, medical providers would benefit from training about which disclosures the HIPAA Privacy Rule makes permissive, rather than mandatory. Finally, we all must “reimagine health care as being off-limits from police.” Many social ills, currently funneled through the carceral system, would benefit from more nuanced, sensitive, and effective responses.
Brown’s article skillfully brings together three strands of law enforcement encroachment that have largely been analyzed separately to date. The first comprises recent state laws that functionally compel medical providers to comply with new prosecutorial demands or face criminal penalties or loss of licensure. The second consists of law enforcement efforts to tap the enormous “reservoir of evidence” held in individual medical files and biological samples. The third includes instances in which substandard medical care is dispensed by the criminal legal system directly. By weaving these entanglements into a single story, Brown broadens the narrative on medical mistrust and lends urgency to her call to build stronger, higher barriers between medicine and policing.
Sep 10, 2024 Rebecca Crootof
When a digital financial or medical advisor gives bad advice, when ChatGPT confabulates that a law professor committed sexual assault, when an autonomous weapon system takes action that looks like a war crime—who should be held liable?
Bryan Choi’s excellent AI Malpractice makes an important but often overlooked point: the answer isn’t as simple as choosing between negligence and various other potential regimes (strict liability, products liability, enterprise liability, etc.). That’s an important first step, and for a host of reasons, I share Choi’s conclusion that strict liability is the preferable near-term standard. But as AI agents and decisionmaking technologies proliferate and judges consider the applicability of negligence, there is a critical second-order question: In a negligence regime, what standard should be applied for evaluating whether a duty was breached? Should AI developers’ choices be evaluated according to the default reasonable person standard? Or, like doctors and lawyers, should their acts be evaluated under a professional standard of care? Under the former, a jury evaluates whether a defendant’s act was reasonable; under the latter, the profession sets the bar.
The choice has far-reaching implications. As Choi notes, if the professional standard is applied, the “law will enforce the customary practices among the AI community.” This would empower those pushing for better design and development policies, as AI ethics principles would have new legal weight and enforcement mechanisms. Meanwhile, if the reasonable care standard is applied, AI modelers may face far greater liability risk, as those harmed by AI systems might be better able to obtain civil recourse.
Consider OpenAI, the company which developed and launched ChatGPT. What is its potential tort liability for the myriad types of harm its generated output facilitates? Cybercriminals have already leveraged ChatGPT’s capabilities to improve social engineering attacks, resulting in malicious phishing emails increasing by 1,265% in the year following its release. Fact checkers are scrambling to address the deluge of LLM-created misinformation. ChatGPT can produce dangerous content, like recommending self-harm or providing instructions on how to commit crimes: A man committed suicide after allegedly being encouraged to do so by ChatGPT, and a Vice reporter learned how to make crack cocaine and smuggle it into Europe. And it can enable the discovery and deployment of new chemical and biological weapons, as when ChatGPT gave amateurs step-by-step instructions on how to cause a pandemic.
If a suit based on any of these harms is evaluated under the reasonable person standard, OpenAI may face a lot of liability. These risks were all foreseeable. In fact, a March 2023 OpenAI publication details its identification of and attempts to mitigate these categories of harms. The question would be whether those efforts were reasonable—and a jury could find that the mitigations were insufficient or that the technology shouldn’t have been released as widely as it was. If, however, a claim is evaluated under a professional standard of care, OpenAI has a credible argument that it is doing far more than most similarly situated companies to mitigate harms—certainly more than custom requires, given the lack of responsible release norms—and therefore cannot be held liable for resulting unintended harms.
Talk about incentives.
(This is often where some folks default to talking points about the need to promote innovation and protect innovators from liability—and that is a valid consideration, but let’s say the silent corollary out loud. Harms have been created. If AI producers are not liable for the harms they cause and there are no alternative means of redress, the full costs of those harms are borne by the public, individually and collectively.)
To answer the question of which standard should govern, Choi employs a framework he previously developed for assessing when the professional standard should be applied to a particular industry. One might think that certain professions set their own standard of care because they have particular traits—required degrees, licensing requirements, and codes of ethics. But Choi argues that this is historically inaccurate, as doctors were evaluated under the professional standard long before they were “professionals” as we think of them today. Rather, he suggests that the professional standard is applied in situations where judges have good reason not to trust jury instincts and sentiments, as things can go terribly wrong even when a defendant doctor or lawyer does everything right.
Choi’s professional standard framework requires evaluating three factors: (1) “whether the core elements of the work involve substantial uncertainties in knowledge, and therefore require latitude for discretionary judgment”; (2) “whether there are serious harms that are statistically unavoidable because of the lack of scientific precision or control”; and (3) whether “[the defendants] perform an essential societal service even when their customary practices cause harm.”
One thing I love about this piece is that Choi’s exploration of the first two factors provides an impressively detailed and nuanced yet succinct description of what AI development entails. In describing the core elements of the work and where judgment is exercised, Choi clarifies where the relevant design decisions occur, the benefits and risks associated with different choices, and common sources of bias and error.
After reviewing the AI development process in his evaluation of the first factor, Choi concludes that its status as more of an ‘art’ than a ‘science’ weighs in favor of judges applying a professional standard. He notes that, while “[m]uch of the work involved in training neural networks is either menial or guided by well-established mathematical principles,” there is still “an important component [that] involves subjective judgements that are guided by customary practices derived from trial-and-error.”
In his analysis of the second factor, Choi creates a useful 3×2 typology matrix of potential harms. Along one axis, he distinguishes accidental harms, intended harms, and foreseeable misuses; along the other, he distinguishes harms resulting from inaccurate or incomplete data from harms resulting from the accurate perpetuation of historical bias in data sets. He determines that “harmful outcomes are an expected feature even of competent AI modeling work,” which in his view also weighs in favor of a professional standard.
Compared with the first two factors, Choi’s analysis of the third is slightly perfunctory: He quickly states that AI developers do not (yet) perform an essential societal service, which favors the reasonable care standard. He concludes that, given the nascency of the field, strict liability is the appropriate current standard; however, once “AI becomes an ordinary fixture of everyday society”—at which point AI modelers may be providing more of an essential service—courts will need to wrestle with what negligence standard should be applied.
Jack Balkin has observed that legal analysis has a fractal nature, as the resolution of one question raises a host of others. Deciding to evaluate AI liability under negligence raises the “which standard” question; Choi’s exploration of the “which standard” question raises further ones.
First, each of Choi’s factors includes major wiggle words. When is an uncertainty sufficiently “substantial”? When are harms sufficiently “serious”? At what point does a tool become a “societal service”—and when does that service become “essential”? As any despairing 1L will tell you, wiggle words pervade the negligence analysis—what, after all, constitutes “reasonable” care?—but resolving the squishy questions is usually left to the jury and can vary dramatically based on the facts of a case. In contrast, the selection of a standard is the judge’s role, which more easily becomes precedential—and precedent, once set, can be difficult to shift. How should law approach the risk of inapt legal lock-in in this context?
Second, need this be an all-or-nothing determination? Just as AI companies might have certain acts evaluated under strict liability or negligence regimes, it is possible to apply a professional standard to some types of AI development and deployment choices and an ordinary care standard to others. Even doctors’ acts are sometimes evaluated under the reasonable person standard, such as when a pharmacist inadvertently misfills a prescription.
As a teacher, I would recommend the piece for the introduction-to-AI-development inherent in Choi’s analyses alone. It would make a great opener for any AI and the Law course. As a scholar, I’m grateful to Choi for his thought-provoking exploration of a critical second-order question in the AI liability conversation.
Jul 31, 2024 James Grimmelmann
Sarah B. Lawsky’s Coding the Code: Catala and Computationally Accessible Tax Law offers an exceptionally thoughtful perspective on the automation of legal rules. It provides not just a nuanced analysis of the consequences of translating legal doctrines into computer programs (something many other scholars have done), but also a tutorial in how to do so effectively, with fidelity to the internal structure of law and humility about what computers do and don’t do well.
Coding the Code builds on Lawsky’s previous work on formal logic and its advantages for statutory interpretation. (Formal logic, sometimes called “symbolic” or “mathematical” logic, involves the precise and rigorous analysis of symbolic expressions representing arguments, such as “p & ¬q” to mean “p is true and q is not true”.) In her 2017 A Logic for Statutes, she observed that many statutory provisions have a characteristic structure: rules subject to exceptions. A typical rule says that WHEN certain conditions are satisfied, THEN certain consequences follow, UNLESS one of several exceptions applies. Exceptions have exceptions of their own: interest payments are deductible, unless they are personal, unless they are mortgage payments.
Lawsky’s great insight about law and logic is that this characteristic structure of nested exceptions is most naturally modeled using a branch of formal logic called “default logic.” Default logic, unlike standard “monotonic logic,” allows for tentative conclusions. On the basis of what I know now, this is a nondeductible personal interest payment, but let me investigate further, and oh, I see that this is qualified residence interest, so I am withdrawing my tentative conclusion and replacing it with another tentative conclusion that the payment is deductible. And so on, until there are no more clauses of the statute to check, no more exceptions to explore, and the most recent tentative conclusion becomes a definitive one. It is a process of successive refinement, converging on certainty. Monotonic logic, by sharp contrast, requires ruling out all possibilities before drawing a conclusion, which remains valid for all time once drawn.
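To make that back-and-forth concrete, here is a minimal sketch in Python (my own illustration, not Catala and not Lawsky’s formalism) of the interest-payment example: the general rule supplies a tentative conclusion, and each exception that applies defeats whatever conclusion has been reached so far.

```python
# A toy rendering of default-logic reasoning about interest deductions.
# The general rule yields a tentative conclusion; each applicable
# exception (and exception to the exception) defeats and replaces it.

def interest_deductible(is_personal: bool, is_qualified_residence: bool) -> bool:
    # Default rule: interest payments are deductible.
    conclusion = True

    # Exception: personal interest is not deductible.
    if is_personal:
        conclusion = False

        # Exception to the exception: qualified residence (mortgage)
        # interest is deductible even though it is personal.
        if is_qualified_residence:
            conclusion = True

    # No further exceptions to check: the last tentative conclusion
    # becomes the definitive one.
    return conclusion

print(interest_deductible(is_personal=True, is_qualified_residence=True))   # True
print(interest_deductible(is_personal=True, is_qualified_residence=False))  # False
```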
Default logic is not more powerful than standard logic, but for some kinds of reasoning it is cleaner, and Lawsky’s point is that the back-and-forth of exceptions and subexceptions in statutory analysis maps naturally onto default logic’s structure of defaults and defeats. A formal logician applying default logic’s inference rules follows a reasoning process that naturally corresponds to the one a lawyer follows in working through a statute.
Default logic is also a good tool for programming. (It is a formal logic, after all.) Once a human has translated a natural-language statute into a formal-logic representation, it becomes possible to reason automatically and algorithmically about the statute and how it treats various fact patterns. In 2021, a trio of computer scientists—Denis Merigoux, Nicolas Chataing, and Jonathan Protzenko—published Catala: A Programming Language for the Law, which turned Lawsky’s default-logic analysis of statutes from an abstract formalism useful for pencil-and-paper analysis into a concrete implementation useful for programming. (Lawsky herself is now a co-designer of the Catala language.)
As someone who sank years of his life into programming a body of law, I can say that Catala is the cleanest and most broadly useful advance towards making law programmable I have ever seen. A 29-page paper filled with equations and code blocks may be quite daunting (our research group took several weeks to read through the formalisms together in detail), but the basic idea of what it does is beautifully simple and clear. Catala allows a programmer to write the way that lawyers think: by laying out rules that trigger consequences, together with the exceptions that can prevent the consequences from happening.
Tax law in particular has two advantages that make it well-suited for this kind of formalization. First, it depends on—and attempts to produce—clear and determinate answers. Everything comes down to, or should, a specific amount due. And second, much of tax law is what a programmer would call “declarative” rather than “imperative”; instead of telling people what to do, it describes the consequences of what they have already done. The Internal Revenue Code, as Lawsky has shown, comprises declarative provisions that are particularly clean to implement in a Catala-style language that uses defaults and exceptions. Taking advantage of this affinity between tax law and programming languages, Merigoux and Protzenko, along with Raphaël Monat, have been developing a toolchain to help the French tax authority modernize its antiquated systems. Their work puts directly into practice Lawsky-ian ideas about the value of clean formal reasoning to improve the application of tax statutes.
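To illustrate the declarative flavor (a hypothetical sketch only; the rate and formula below are invented for illustration and correspond to no actual provision), a provision of this kind simply defines an amount due as a function of facts about what the taxpayer has already done:

```python
# A hypothetical, declarative-style provision: the tax due is defined
# purely in terms of completed facts; nothing here instructs anyone
# to take further action. (Rate and formula are invented.)

def tax_due(gross_income: float, deductible_interest: float, rate: float = 0.20) -> float:
    taxable_income = max(gross_income - deductible_interest, 0.0)
    return taxable_income * rate

print(tax_due(gross_income=50_000.0, deductible_interest=10_000.0))  # 8000.0
```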
Coding the Code is in many ways the summa of Lawsky’s project over the last decade. After an accessible introduction to default logic, other portions of Coding the Code draw on what Lawsky has been up to lately: actually using Catala to code up tax law. She and her collaborators have approached the task with humility and care—virtues whose importance for interdisciplinary collaborations Lawsky describes in the recently published Computational Law and Epistemic Trespassing. One approach they use is “pair programming”: a lawyer and a computer scientist sit side by side at one computer, discussing a statutory section and making sure that they agree on its translation into code. Another is “literate programming,” in which code is interwoven with comments that document what each part of it is doing. For statutory translations, these comments can include the statutory text itself, making the isomorphism between specification (the statute) and implementation (the code) wholly explicit. Neither pair programming nor literate programming directly affects what the code-ified version of the law does; instead, they are tools to make sure that the people who do the translation do so faithfully, in a way that others who come later can recognize as correct. (Lawrence Lessig, patron saint of code-as-law, would approve.)
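To give a flavor of what that interleaving might look like (again a rough sketch: Catala has its own literate syntax, and the statutory language here is paraphrased rather than quoted), the comments carry the rule and the adjacent code carries its translation:

```python
# 26 U.S.C. § 163(a) (paraphrased): interest paid or accrued on
# indebtedness during the taxable year is allowed as a deduction.
#
# § 163(h) (paraphrased): for a taxpayer other than a corporation, no
# deduction is allowed for personal interest -- but qualified residence
# interest is carved out of the definition of "personal interest."

def deductible_interest(amount: float, is_personal: bool,
                        is_qualified_residence: bool) -> float:
    # § 163(a): default -- the full amount is deductible.
    deduction = amount
    # § 163(h): exception -- personal interest is not deductible,
    # unless it is qualified residence interest.
    if is_personal and not is_qualified_residence:
        deduction = 0.0
    return deduction
```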
Coding the Code, like the rest of Lawsky’s work, stands out in two ways. First, she is actively making it happen, using her insights as a legal scholar and logician to push forward the state of the art. Her Lawsky Practice Problems site—a hand-coded open source app that can generate as many tax exercises as students have the patience to work through—is a pedagogical gem, because it matches the computer science under the hood to the structure of the legal problem. (Her Teaching Algorithms and Algorithms for Teaching documents the app and why it works the way it does.)
Second, Lawsky’s claims about the broader consequences of formal approaches are grounded in a nuanced understanding of what these formal approaches do well and what they do not. Sometimes formalization leads to insight; her recent Reasoning with Formalized Statutes shows how coding up a statute section can reveal unexpected edge cases and drafting mistakes. At other times, formalization is hiding in plain sight. As she observes in 2020’s Form as Formalization, the IRS already walks taxpayers through tax algorithms; its forms provide step-by-step instructions for making tax computations. In every case, Lawsky carefully links her systemic claims to specific doctrinal examples. She shows not that computational law will change everything, but rather that it is already changing some things, in ways large and small.
It is unusual for an established law professor to go back to school for a PhD. In philosophy. With a dissertation on formal logic. But Coding the Code, published five years after Lawsky submitted her (highly technical) thesis, shows the great value for legal scholars of the approach she developed in her PhD. It refines her distinctive approach to statutory analysis—which mixes careful legal reading with technical tools from formal logic and computer science—in a way that has great potential to help other lawyers and legal scholars be more precise about what tax laws say. All they need to do is talk to computer scientists, and Lawsky provides a roadmap for how. There is no epistemic trespassing in Sarah Lawsky’s work. Everywhere she goes, she is a welcomed guest.
Cite as: James Grimmelmann, When Law is Code, JOTWELL (July 31, 2024) (reviewing Sarah B. Lawsky, Coding the Code: Catala and Computationally Accessible Tax Law, 75 SMU L. Rev. 535 (2022)), https://cyber.jotwell.com/when-law-is-code/.