Hideyuki Matsumi & Daniel J. Solove, The Prediction Society: Algorithms and the Problems of Forecasting the Future, GWU Legal Studies Rsch. Paper (forthcoming), available at SSRN (June 5, 2023).

In their draft paper, The Prediction Society: Algorithms and the Problems of Forecasting the Future, Matsumi and Solove distinguish two ways of making predictions: the first, “prophecy,” is based on superstition; the second, “forecasting,” is based on calculation. Initially, they seem convinced that the latter, calculative type of prediction is more accurate and thus capable of transforming society, as it shifts control over people’s future to those who develop or deploy such systems. Over the course of the paper, however, that distinction between deceptive prophecy and accurate prediction blurs. The authors argue that the pervasive and surreptitious use of predictive algorithms that target human behaviour affects a whole range of human rights beyond privacy; they highlight the societal impact these systems generate and the new ways of regulating the design and deployment of predictive systems that this impact requires. The authors foreground the constitutive impact of predictive inferences on society and human agency, moving beyond utilitarian approaches that require the identification of individual harm and arguing instead that these inferences often create the future they predict.

Most of the points they make have been made before (e.g. here), but the lucid narrative argumentation of Matsumi and Solove’s paper could open a new conversation in the US as to how legislatures and courts should approach the issue of pre-emptive predictions with regard to constitutional rights beyond privacy. The paper also expands that same discourse beyond individual rights, highlighting the pernicious character of the manipulative choice architectures that build on machine learning and showing how the use of ‘dark patterns’ is more than merely the malicious deployment of an otherwise beneficial technology.

To make their argument, the authors tease out a set of salient “issues” that merit a brief discussion here, as they are key to the constitutive societal impact of pre-emptive predictions. The first is the “fossilisation problem”: algorithmic predictions are necessarily based on past data and thus on past behavioural patterns, thereby risking what I have called (in this book) “scaling the past while freezing the future.” The second is the “unfalsifiability problem”: data-driven predictions are probabilistic, making it difficult to contest their accuracy, which, according to the authors, sits in a grey zone between true and false data (I should note that under the GDPR personal data need not be true to qualify as such). The third is the “pre-emptive intervention problem”: measures taken on the basis of these predictions make testing their accuracy even more illusory, as we cannot know how people would have acted without those measures. This relates to the so-called Goodhart effect, which holds that when a measure becomes a target, it ceases to be a good measure. The fourth is the “self-fulfilling prophecy problem,” which recalls the seminal Thomas theorem that “if men define situations as real, they are real in their consequences,” and which can be translated to our current environment as “if machines define situations as real, they are real in their consequences.”

The paper is all the more interesting because it refrains from framing everything and anything in terms of harm or risk of harm. Though the utilitarian framework of harm is part of their argument, the authors manage to dig deeper, developing insights outside the scope of cost-benefit analyses. Utilitarianism may in fact be part of the problem rather than offering solutions, because the utilitarian calculus cannot deal with a risk to rights unless it can be reduced to a risk of harm. By asserting the specific temporal nature of predictive inferences when they are used to pre-empt human behaviour, the authors make clear the constitutive impact of these inferences on individual agency and societal dynamics. It is this temporal issue that, according to the authors, distinguishes these technologies from many others, requiring new regulatory ways of addressing their impact.

To further validate their argument, the authors address a set of use cases where the nefarious consequences of algorithmic targeting stand out, notably also because of their dubious reliability: credit scoring (now widely used in finance, housing, insurance, and education), criminal justice (with a long history of actuarial justice, now routinely used in decisions on bail, probation, and sentencing, but also deployed to automate suspicion), employment (continuing surveillance-Taylorism while also targeting recruitment in ways that may exclude people from a job on the basis of algorithmic scoring), education (where a focus on standardised testing and ‘early warning systems’ based on the quantification of quality criteria may have perverse effects for those already disadvantaged), and insurance (where actuarial methods originated and where the chimera of the quantified efficiency of data-driven predictions could result in quasi-personalised premiums that charge people based on the statistical group they are deemed to fit). In all these contexts, the use of predictive and pre-emptive targeting restricts or enables future action, thus redefining the space for human agency. The design and deployment of predictive inferences enable corporations and public administrations to create the future they predict, due to the performative effects they generate. Even where such creation is imperfect or unintended, the authors highlight how it changes the dynamics of human society and disempowers those whose lives are being predicted.

Matsumi and Solove end with a set of recommendations for legislatures, calling for legal norms that specifically target the use of predictive inferences and that require scientific testability combined with evaluative approaches grounded in the humanities. They ask that legislatures develop a proper focus, avoiding over- and under-inclusiveness, highlighting the relevance of context, and stipulating specific requirements for training data in the case of data-driven systems. They call for the possibility of “escaping” the consequences of unverifiable predictions and suggest an expiry date for predictive inferences, while emphasizing that individual redress cannot resolve issues that play out at the societal level. As they note, the EU AI Act addresses many of the problems they detect and provides many of the recommended “solutions,” though their current analysis of the Act remains cursory. (This is understandable, as the final text was not yet available at the time this draft was released.)

Whereas the authors start their paper with a distinction between shamanic prophecies and calculated predictions, that distinction crumbles in the course of the paper, and rightly so. The initial assumption of objective and reliable predictive algorithms turns out to be a rhetorical move to call out the shamans of allegedly scientific predictions, which may be refuted on the basis of mathematical and empirical testing. It is key for lawyers to come to terms with the claimed functionalities of predictive tools, which hold a potentially illusory promise of reliable, objective truth. We need to follow the strategy of Odysseus, who bound himself to the mast after plugging his sailors’ ears with wax, to avoid giving in to the Sirens of algorithmic temptation. To do so we cannot merely depend on self-binding (as the authors seem to suggest towards the end of their paper) but, as they in fact convincingly advocate, we need to institute countervailing powers. That will necessitate legislative interventions beyond privacy and data protection, directly targeting, for instance, digital services and ‘AI’ in the broad sense of that term. Matsumi and Solove’s paper holds great promise as an in-depth analysis of the key problem here, and it should inform the development of well-argued and well-articulated legal frameworks.

Cite as: Mireille Hildebrandt, Addressing the Modern Shamanism of Predictive Inferences, JOTWELL (November 27, 2023) (reviewing Hideyuki Matsumi & Daniel J. Solove, The Prediction Society: Algorithms and the Problems of Forecasting the Future, GWU Legal Studies Rsch. Paper (forthcoming), available at SSRN (June 5, 2023)), https://cyber.jotwell.com/addressing-the-modern-shamanism-of-predictive-inferences/.