There is a remarkable body of work on the US government’s burgeoning array of high-tech surveillance programs. As Dana Priest and Bill Arkin revealed in their Top Secret America series, hundreds of entities enjoy access to troves of data on US citizens. Ever since the Snowden revelations, this extraordinary power to collate data points about individuals has caused unease among scholars, civil libertarians, and virtually any citizen with a sense of how badly wrong supposedly data-driven decision-making can go.
In Big Data Blacklisting, Margaret Hu comprehensively demonstrates just how well-founded that suspicion is. She shows the high stakes of governmental classifications: No Work, No Vote, No Fly, and No Citizenship lists are among her examples. Persons blackballed by such lists often have no real recourse—they end up trapped in useless intra-agency appeals under the exhaustion doctrine, or stonewalled from discovering the true foundations of the classification by state secrecy and trade secrecy laws. The result is a Kafkaesque affront to basic principles of transparency and due process.
I teach administrative law, and I plan to bring excerpts of Hu’s article into our due process classes on stigmatic harm (to update lessons from cases like Wisconsin v. Constantineau and Paul v. Davis). What is so evident from Hu’s painstaking work (including her diligent excavation of the origins, methods, and purposes of a mind-boggling alphabet soup of classification programs) is the quaint, even antique, nature of the Supreme Court’s decisionmaking on stigmatic harm. A durable majority on the Court has held that erroneous, government-generated stigma, by itself, is not the type of injury that violates the Fifth or Fourteenth Amendment. Only a concrete harm immediately tied to a reputational injury (stigma-plus) raises due process concerns. As Eric Mitnick has observed, “under the stigma-plus standard, the state is free to stigmatize its citizens as potential terrorists, gang members, sex offenders, child abusers, and prostitution patrons, to list just a few, all without triggering due process analysis.” Mitnick catalogs a litany of commentators who characterize this standard as “astonishing,” “puzzling,” “perplexing,” “cavalier,” “wholly startling,” “disturbing,” “odious,” “distressingly fast and loose,” “disingenuous,” “ill-conceived,” an “affront [to] common sense,” “muddled and misleading,” “peculiar,” “baroque,” “incoherent,” and my personal favorite, “Iago-like.” Hu shows how high the stakes have become thanks to the Court’s blockage of sensible reform of our procedural due process jurisprudence.
Though presented with numerous opportunities to do so, the Court simply refuses to consider deeply the cumulative impact of a labyrinth of government classifications. We need legal change here, Hu persuasively argues, because there are so many problems with the analytical capacities of government agencies (and their contractors), as well as the underlying data on which they rely. Cascading, knock-on effects of mistaken classification can be enormous. In area after area, from domestic law enforcement to anti-terrorism to voting roll review, Hu collects studies from experts that indicate not merely one-off misclassifications, but a deeper problem of recurrent error and bias. The database bureaucracy she critiques could become an unchallengeable monolith of corporate and government power arbitrarily arrayed against innocents, preventing them from challenging their stigmatization both judicially and politically. When the state can simply use software and half-baked algorithms to knock legitimate voters off the rolls, without notice or due process, the very foundations of its legitimacy are shaken. Similarly, a lack of programmatic transparency and evaluative protocols in many settings makes it difficult to see how the traditional touchstones of the legitimacy of the administrative state could possibly be operative in some of the databases Hu describes.
Many scholars in the field of algorithmic accountability have focused on procedural due process, aimed at giving classified citizens an opportunity to monitor and correct the data stored about them, and the processes used to analyze that data. Hu is generous in her recognition of the scope and detail of that past work. But with the benefit of her comprehensive, trans-substantive critique of big data blacklisting programs, she comes to the conclusion that extant proposals for reform of such programs may not do nearly enough to restore citizens’ footing, vis-à-vis government, to the level of equality and dignity that ought to prevail in our democracy. Rather, Hu argues that, taken as a whole, the current panoply of big data blacklisting programs offends substantive due process: basic principles that impose duties on government not to treat persons like things.
This is a bold intellectual move that reframes the debate over the surveillance state in an unexpected and clarifying way. Isn’t there something deeply objectionable about the gradual abdication of so many governmental, humanly-judged functions to private sector, algorithmically-processed databases and software—especially when technical complexity is all too often a cloak for careless or reckless action? For someone unfamiliar with the reach, fallibility, and stakes of big data blacklisting, it might seem jarring to contemplate that a pervasive, largely computerized method of classifying citizens might be as objectionable as, say, a law forbidding the teaching of foreign languages, or one denying prisoners the right to marry (other laws found to violate substantive due process). However, Hu has done vital work to develop a comprehensive case against big data blacklisting that makes several of its instantiations seem at least as offensive to constitutional values as those restrictions.
Moreover, when blacklisting itself is so resistant to traditional procedural due process protections (for example, in cases of black box processing), substantive due process claims may be the only way to relieve citizens of the burdens it imposes. Democratic processes cannot be expected to protect the discrete, insular minorities targeted unfairly by big data blacklisting. Even worse, these “invisible minorities” may never be able to figure out exactly what troubling classifications they have been tarred with, impairing their ability even to make a political case for themselves.
Visionary when it was written, Big Data Blacklisting becomes more relevant with each data breach and government overreach in the news. It is agenda-setting work that articulates the problem of government data processing in a new and compelling way. I have rarely read work that so meticulously credits pathbreaking work in the field, while still developing a unique perspective on a cutting edge legal issue. I hope that legal advocacy groups will apply Hu’s ideas in lawsuits against arbitrary government action cloaked in the deceptive raiments of algorithmic precision and data-driven empiricism.