The Journal of Things We Like (Lots)
John Danaher, Tragic Choices and the Virtue of Techno-Responsibility Gaps, 35 Phil & Tech 26 (2022).

I always love scholarship that forces me to pause and question my baseline assumptions. And so—as someone who has written of the need to close accountability gaps associated with malicious cyberoperations, IoT devices, and autonomous weapon systems—I was delighted to read John Danaher’s Tragic Choices and the Virtue of Techno-Responsibility Gaps. In this work, Danaher challenges everyone who has ever argued that new technologies problematically undermine traditional accountability structures by quietly observing that these new gaps are…maybe sometimes a good thing?

While Danaher tends to focus more on moral responsibility than legal liability, if you are a techlaw scholar thinking about accountability gaps in any context, add this to your reading list. Danaher writes in a relaxed and engaging style, includes a fantastic literature review of non-legal texts on accountability gaps, and explores a counterintuitive argument—all in a piece that clocks in at a svelte 22 pages of text. (Would that I could accomplish so much, so smoothly, in so few words!)

Danaher defines a “Techno-Responsibility Gap” as follows: “As machines grow in their autonomous power (i.e. their ability to do things independently of human control or direction), they are likely to be causally responsible for positive and negative outcomes in the world. However, due to their properties, these machines cannot, or will not, be morally or legally responsible for these outcomes. This gives rise to a potential responsibility gap: where once it may have been possible to attribute these outcomes to a responsible agent, it no longer will be.” Danaher then distinguishes the various forward- and backward-looking forms techno-responsibility gaps might take. There are (1) accountability gaps, which exist when there’s no one to provide a public account for the harm; (2) culpability gaps, which exist when there’s no one to take the blame; (3) compensation gaps, which exist when there’s no one to pay for the harm; (4) obligation gaps, which exist when there’s no one who ensures the harm is avoided; and (5) virtue gaps, which exist when no one takes responsibility for the harmful acts. Danaher also notes the distinction between positive responsibility (“Great job there!”) and negative responsibility (“Why didn’t you . . .?!”).

Danaher then summarizes familiar proposed means of eliminating these gaps, most of which boil down to justifications for ascribing accountability to a prescribed human or non-human entity. He concludes that, for all of the disagreement around how best to address them, “most contributors to the techno-responsibility gap debate tend to agree on one thing: the creation of techno-responsibility gaps is a problem.” Why? Because responsibility is always a good thing. Right? Right?

Maybe not! To set up his argument for why we might sometimes want to prioritize other goals over ensuring accountability, Danaher starts with the problem of tragic choices. Human decision-makers often confront questions where moral considerations simultaneously weigh in favor of different answers and it is difficult or even impossible to reach a morally comfortable conclusion. We all face these choices in our daily lives. (Do I give my discretionary funds to this or that charity?) But they become policy questions when we need to determine how best to allocate scarce resources (Should hospitals privilege this or that type of patient when deciding who receives a needed ventilator?) or weigh costs to X against costs to Y (How to balance a right to speech against a right not to be threatened?).

When confronted with these tragic choices, we—as individuals, as institutions, or as societies—may handle the moral difficulty of reaching a conclusion in various ways. First, we might delude ourselves into believing it’s actually an easy question (“illusionism”). This can manifest in ignoring costs, compartmentalizing them, or rationalizing them away. Second, we might delegate the choice to another (“delegation”). We do this when we ask waitstaff what we should order, look to a panel of judges to decide the scope of a law, or flip a coin to determine our next course of action. Third, we might make a decision and bear the psychological costs ourselves (“responsibilization”).

One of Danaher’s main points is that none of these responses will always be better or worse than the others. Rather, in a classically lawyerly move, Danaher maintains that the preferable response will depend on the situation and context. Despite our collective bias towards responsibilization, each of these responses has distinct benefits and drawbacks.

Illusionism permits mental comfort at the expense of honesty. Delegation allows for shifting the psychological and moral costs to a (possibly more informed, capable, or impartial) substitute actor. But it also risks a concentrated group or institution bearing these costs, transference to an inept decision-maker and consequently poor outcomes, and the failure of the original actor to develop or maintain decision-making skills. Finally, responsibilization enables moral agency and all sorts of accountability—but does so at the possible cost of unjustly transforming decision-makers into scapegoats. (This point reminded me of an argument against including steering wheels in fully autonomous vehicles: the idea was that, in the event of a deadly crash, the human operator would unfairly blame themselves for not intervening despite not being able to act with the reflexes necessary to prevent the accident.)

If each response to a tech-fostered accountability gap has distinct pros and cons, there will necessarily be situations where delegation is preferable to responsibilization. Further, Danaher argues, the possibility of delegating to an algorithm, rather than to another human, may change the balance of benefits and harms associated with these different responses, insofar as it eliminates the delegation drawback of concentrating the psychological and moral costs of tragic choices in a few individuals. To take advantage of this reduced cost on human decision-makers, Danaher concludes, we must be willing to live with some techno-responsibility gaps.

Danaher suggests that human online content moderators provide an example of when this tradeoff might be worthwhile. These decision-makers have a stressful, difficult job; they save untold numbers of platform users from having to view offensive and traumatizing content, but they do so at great psychological expense. Assuming both humans and algorithms perform moderation tasks equally well, transferring content moderation decision-making power to an algorithm would minimize harm to humans. Similar arguments could be made for drone operators, police body-cam reviewers, and any other human charged with sifting through painful content to determine what can be cleared for public release.

Danaher is quick to qualify his argument. To the extent they are made, delegations should be made carefully; his analysis does not suggest that we should always delegate decisions to machine systems. And the fact that algorithmic decision-makers reduce some of the costs of delegation does not mean they eliminate other costs; there are still plenty of reasons to be wary of accountability gaps. Danaher also engages, in a wonderfully non-defensive manner, with various alternative versions and critiques of his argument. He explores a proposal to employ randomization as a low-cost form of algorithmic delegation, the concern that delegation fosters agency-laundering and liability evasion, and a query as to when we might (and might not) want to make the costs and tradeoffs inherent in tragic choices more explicit.

This thoughtful, dense, yet accessible piece invites readers to question why we assume accountability—and, specifically, responsibilization—is always preferable to the alternatives. I will likely continue to argue for closing tech-fostered accountability gaps, but thanks to this piece, my arguments will now be far more nuanced.

Cite as: Rebecca Crootof, The Argument for Not Closing Accountability Gaps, JOTWELL (October 26, 2022) (reviewing John Danaher, Tragic Choices and the Virtue of Techno-Responsibility Gaps, 35 Phil & Tech 26 (2022)), https://cyber.jotwell.com/the-argument-for-not-closing-accountability-gaps/.