With her recent article, A Products Liability Framework for A.I., Professor Catherine Sharkey may have silenced at least some critics of artificial intelligence (A.I.) regulation. At the very least, the article stands as a sharp retort to anti-regulation advocates who often crow: “But how can we regulate A.I. when we don’t even yet know the full extent of what it can do or how it will be used?” Sharkey’s proposed regulatory framework, which eschews ex ante pre-approval strategies in favor of post-market regulatory monitoring, may just be the answer to one of the critics’ favorite regulatory dodges.
Sharkey has the savoir faire to be afforded credence on any A.I. regulation proposal. As both an A.I./ML (machine learning) law scholar and a tort law scholar, she has gained enviable access to observe how A.I./ML systems are deployed in government, and what most stands out about her oeuvre is the admirable analytical skill she brings to dissecting those workings. For example, in Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Sharkey (along with other scholars) conducted a rigorous canvass of A.I. use at 142 federal departments, agencies, and sub-agencies. Sharkey et al.’s work in Government by Algorithm has been an inspiration for other scholars taking up the mantle to advocate for guardrails on automated governance.
I found reading A Products Liability Framework for A.I. to be similarly generative for my thinking about regulatory legal mechanisms, and I believe this article will become canonical for A.I. legal scholars grappling with the challenges of regulating emerging A.I. technologies. At the outset, the Article notes the peculiar regulatory challenges posed by A.I./ML systems given their adaptive nature. Sharkey observes, “[c]ritics suggest that regulating A.I./ML demands a unique regulatory approach because, as A.I./ML technologies are sent out into the world and encounter new situations, they learn and change in real time.”
The first helpful contribution of the Article is that Sharkey handily demonstrates why A.I. technologies could be considered “products.” She takes as her lodestar the FDA’s stance of governing A.I./ML medical devices as products. Ultimately, she argues for a functional approach: A.I. technologies should be considered products because of their mass-market distribution and potential for widespread harm, the same public policy concerns that underlie products liability law. Sharkey contends that classifying A.I. as a product ensures that liability frameworks remain effective in protecting consumers.
After establishing that A.I. should be considered a product, Sharkey builds her article around the idea that the uncertainty produced by the ever-changing nature of A.I. development and use is neither peculiar to that technology nor an insurmountable challenge to regulation. Rather, other emerging technologies presented the same uncertainty in their nascent years, and those technologies nevertheless proved governable.
To Sharkey, the key to those early governance problems was products liability. As she notes in this Article and in previous writings: “Products liability…is a microcosm of how the common law evolves over time to respond to new societal risks—historically, those posed by the automobile, mass-produced goods, digital e-commerce…” For Sharkey, it follows that products liability frameworks may also work well for regulating emerging technologies like A.I. She argues that products liability law affords several legal mechanisms: an information-forcing function for safety-related information while more proactive regulatory frameworks are being developed, a liability insurance regime, and the added efficiency of the cheapest cost avoider theory.
First, Sharkey argues: “We can draw lessons from historical examples where society faced new and uncertain risks to demonstrate that, even when risks are uncertain or not entirely understood, tort liability can serve an information-production function during a ‘transitional period’ before an ex ante regulatory scheme is in place.” Second, Sharkey notes the role of liability insurance, especially in producing information and enforcing standards to mitigate or prevent harms from A.I. She writes, “Liability insurers can aggregate risk-related information obtained about the expanding universe of policyholders as part of the process of underwriting and premium-setting.” Third, Sharkey believes that the “cheapest cost avoider” theory serves as an effective deterrent. As applied to A.I., the “cheapest cost avoider” framework is less troubled by A.I.’s “Black Box” problem because it is concerned only with reducing the societal cost of accidents. According to Sharkey, “Instead of attempting to attribute each A.I. output to a single party, courts would focus on whether the interactive user or the A.I. developer is in the best position to mitigate or prevent harms.”
The cheapest cost avoider rationale is firmly grounded in the torts literature; it was proposed by Professor Guido Calabresi in his groundbreaking book, The Costs of Accidents. Yet Calabresi and Smith also offer something of a warning: “But what is ‘cheap’ and what is ‘costly’ itself derives from the tastes and values of society, which can be influenced by the current set of civil wrongs. This reverse link, which is sometimes missed, may well represent the future of tort law.” This quote demonstrates how what is allowed by law (i.e., the parameters of civil wrongs) may come to determine society’s values, that is, what is socially acceptable. In the context of A.I. regulation, we should be attentive to how products liability, as a method of regulation, may come to define which A.I. technologies corporations will develop for society.
Thus, as admirable as I find Sharkey’s intellectually nimble analyses comparing emerging A.I. technologies to prior emerging technologies regulated by products liability, I must note one concern. Sharkey argues that her proposal aims to balance innovation with consumer protection. I understand her instinct. However, some scholars take issue with regulation being posited as adversarial to innovation and consider the foregrounding of innovation in the governance conversation to be a regulatory dodge in disguise. Given that, as Calabresi and Smith note, what is cheap and what is costly depends on the “tastes of society,” we should question what an innovation-centric paradigm means for A.I. regulation. As Andrew Selbst concluded in Negligence and AI’s Human Users, “[w]here society decides that A.I. is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of A.I.’s use.”
Is products liability law malleable enough to identify and quantify the harm to all victims of A.I. use? Tort law at its base relies on quantification: there is no recovery of damages if a plaintiff cannot quantify the harm. Thus, products liability may not compensate for reputational and representational harms, which are often future or speculative in nature. Consider that privacy law scholars are still valiantly attempting to quantify the harms of privacy violations, and that A.I. technologies introduce new opportunities for such violations. Even if the harm can be quantified, given that A.I. is being developed by multinational corporations with deep benches of lawyers and even deeper pockets, is the financial asymmetry too great for any consumer of A.I. to be protected by products liability? My deep worry is that although Sharkey has presented a noble effort to begin to corral the dangers of A.I. innovation, A.I. developers may seize on her framework as carte blanche to push what they consider A.I. innovation at high cost to human life, an approach I would term “break things and pay damages later.”
But what is the alternative? I underscore here that Sharkey has positioned her proposed legal framework as a stopgap rather than the end goal of A.I. regulation. I therefore find her proposal to be a highly creative and ultimately useful temporary solution. Turning to the question of what the ultimate objective of regulation should be, I would argue for a reimagining of our legal principles vis-à-vis the responsibility of corporations. To be more precise, effective regulation of A.I. technologies will hinge on finding a definitive answer to a longstanding jurisprudential question: How can we expect corporations to evince true corporate responsibility towards society at large while holding on to the shareholder primacy principle?