The Journal of Things We Like (Lots)
Jennifer Cobbe, Michael Veale & Jatinder Singh, Understanding Accountability in Algorithmic Supply Chains (May 22, 2023), available at arXiv.

Most proposed algorithmic accountability regulations have a common feature: they assume that there is a regulatory target with the power to control the system’s inputs, structure, or outputs. Maybe it’s the algorithm’s creator, or the vendor, or the deployer—but surely there’s an entity that can be held to account!

In Understanding Accountability in Algorithmic Supply Chains, Jennifer Cobbe, Michael Veale, and Jatinder Singh upend that assumption. In ten tightly but accessibly written pages, they detail how there is often no single entity that may legitimately be held accountable for an algorithmic conclusion. This is partially due to the “many hands” problem that has already spurred arguments for strict liability or enterprise liability for algorithmic systems. But designing a governance regime is also difficult, the authors argue, because of how algorithmic systems are structured. The authors use the “supply chain” metaphor to capture the fact that these systems are composed of multiple actors with shifting interdependencies and shared control, contributing varied data and changing elements of the infrastructure, all while data flows in multiple directions simultaneously. The difficulty in regulating algorithmic systems is not just that it is hard to identify which of many entities is the cheapest cost avoider or the one that can be fairly held accountable; instead, it may be impossible to identify which entity or even which combination of entities is causally responsible for any given output.

The authors identify four distinct characteristics of algorithmic supply chains, all of which muddle traditional accountability analyses: (1) “production, deployment, and use are split between several interdependent actors”; (2) “supply chain actors and data flows perpetually change”; (3) “major providers’ operations are increasingly integrated across markets and between production and distribution”; and (4) “supply chains are increasingly consolidating around systemically important providers.” The first three elements make it challenging to identify which actor caused a given result; the fourth creates a practical and political impediment to accountability, as certain entities may become “too big to fail.” Refreshingly, the authors intersperse their precise descriptions of these complex systems with ruminations on how technological affordances, law, and political economy realities foster elements of the supply chain, all while being careful not to slip into technological determinism.

The authors’ first observation is one of those concepts that I had never considered, but which seemed obvious after reading this piece: algorithms often “involve a group of organizations arranged together in a data-driven supply chain, each retaining control over component systems they provide as services to others” (emphasis in original; my only critique of this paper is that the authors are extremely fond of italics). It is “no longer the case that software is generally developed by particular teams or organizations.” Rather, “functionality results from the working together of multiple actors across various stages of production, deployment and use of AI technologies.” These various actors are (sometimes unknowingly) interdependent. Each one “may not be aware of the others, nor have consciously decided to work together towards [an] outcome . . . . However, each depends on something done by others.”

This interdependent dynamic is somewhat abstract, so the authors helpfully provide diagrams and concrete examples. Consider their Figure 2 (below), which showcases how one AI service provider (the red dot) might play three different roles in the provision of an algorithmic result, including providing AI-as-a-service infrastructure to one entity, providing AI as a service to a second, and providing technical infrastructure for an app to a third:

Figure 2: A representative AI supply chain. The application developer (blue) initiates a series of data flows by sending input data to an AI service provider (grey). One AI service provider (red) appears at multiple key points in the supply chain – providing infrastructure (A) for an AI service offered by (grey); providing an AI service (B) to another cloud service provider (orange); and providing technical infrastructure (C) for application deployment.
© 2023 Jennifer Cobbe, Michael Veale & Jatinder Singh. Reproduced by permission subject to a CC BY-NC license.
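For readers who think in code, the figure’s topology can be rendered as a call graph. What follows is a minimal, hypothetical Python sketch, not drawn from the paper itself; the Actor class and all actor names are illustrative assumptions. It shows how a single request can place one provider on the critical path three times:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """One participant in an algorithmic supply chain (illustrative only)."""
    name: str
    depends_on: list["Actor"] = field(default_factory=list)

    def handle(self, data: str, trace: list[str]) -> str:
        # Record that this actor touched the request, then defer to the
        # upstream services it depends on before returning a result.
        trace.append(self.name)
        for upstream in self.depends_on:
            data = upstream.handle(data, trace)
        return data

# Hypothetical actors mirroring Figure 2; "red" plays three roles (A, B, C).
red_infra  = Actor("red: infrastructure for grey's AI service (A)")
grey_ai    = Actor("grey: AI service", depends_on=[red_infra])
red_ai     = Actor("red: AI service (B)")
orange     = Actor("orange: cloud service", depends_on=[red_ai])
red_deploy = Actor("red: app deployment infrastructure (C)")
app        = Actor("blue: application developer",
                   depends_on=[grey_ai, orange, red_deploy])

trace: list[str] = []
app.handle("input data", trace)
print(" -> ".join(trace))
# blue -> grey -> red (A) -> orange -> red (B) -> red (C)
# One provider sits on the critical path three times, yet no single
# actor observes or controls the chain end to end.
```

Even this toy version makes the accountability problem visible: the trace exists only because the whole chain was built in one process. In a real supply chain, no actor holds the equivalent of that trace.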

The authors’ second observation is that the interdependencies among the various actors are dynamic and unstable: a supply chain “may differ each time it is instantiated,” as it may be composed of different data, different actors, and different data flows. And the outputs change as actors introduce new features or retire older ones, employ additional support services, or otherwise tinker with the system.

These dynamic and unstable interdependencies of the algorithmic supply chain raise accountability issues. One is a variant on Charles Perrow’s Normal Accident theory, writ large: “Interdependence helps problems propagate.” If accidents are inevitable in complex systems, they are certainly inevitable in algorithmic supply chains! The other accountability challenge is that, even when a problem is identified, it may be impossible to determine how it arose or what might be done to correct or mitigate it.

That being said, the authors’ third and fourth observations suggest that some actors—namely, those that have been able to consolidate and entrench power within an algorithmic supply chain—play more stable and predictable roles than others. Some actors are horizontally integrated and operate across markets and sectors, repurposing infrastructural or user-facing technology for a range of services. Amazon Web Services, for example, is a cloud computing service used by newspapers, food companies, and retailers. Others are vertically integrated, controlling multiple stages of production and distribution of a particular algorithmic supply chain. And a few are both horizontally and vertically integrated, rendering them practically inescapable. (For a visceral description of the inescapability of Amazon, Facebook, Google, Microsoft, and Apple, I strongly recommend Kashmir Hill’s 2019 Goodbye Big Five project.) Their centralization renders these entities tempting regulatory targets—but it also means they have the power and resources to affect how regulations take shape.

This is hardly the first time legal actors have had to confront the question of how to create the right incentives for complex systems or hold multiple entities liable. The varied forms of the administrative state, along with joint-and-several liability, products liability, market share liability, and enterprise liability, all remain useful models for constructing governance mechanisms.

But algorithmic accountability proposals that focus on a discrete actor will likely be insufficient and unfair, unless they account for the complicated interrelations of different entities within the supply chain. Meanwhile, proposals that target centralized actors will need to attend to the risks of assisting incumbents in building regulatory moats and otherwise creating barriers to entry. As Cobbe, Veale, and Singh’s excellent article details, both policymakers and scholars will need to wrestle with the complicated reality of how algorithmic supply chains actually operate.

Cite as: Rebecca Crootof, Algorithmic Accountability is Even Harder Than You Thought, JOTWELL (September 28, 2023) (reviewing Jennifer Cobbe, Michael Veale & Jatinder Singh, Understanding Accountability in Algorithmic Supply Chains (May 22, 2023), available at arXiv), https://cyber.jotwell.com/algorithmic-accountability-is-even-harder-than-you-thought/.