The Journal of Things We Like (Lots)

What STS Can (and Can’t) Do for Law and Technology

Ryan Calo, The Scale and the Reactor (2022), available at SSRN.

The field of law and technology has come a long way since we last heard the unmistakable squeal of a modem connecting to cyberspace. Most of us who remember that sound probably have more grey hair than we used to. We’ve covered a lot of ground since “Lex Informatica” and “Code is Law,” so you’d think our field would have a deeply sophisticated method for understanding the relationship between law, society, and technology, right?

Professor Ryan Calo thinks the field can do better. In this concise and accessible unpublished article that is part of a new book project, Calo highlights how Science and Technology Studies, or STS, has been overlooked and could contribute to the field of law and technology. To Calo, law and tech took decades to wind up where STS would have started. It’s not that law and tech is redundant of STS; rather, the problem is that “law and technology has been sounding similar notes to STS for years without listening to its music.” As a result, our field “does not benefit from the wisdom of scholars who have covered roughly the same ground.” Calo looks to showcase critical STS ideas and debates “for the unfamiliar law and technology reader,” so that we no longer have an excuse to claim ignorance of the field. He accomplishes this in spades with a clear and deeply informed article that is a must-read for anyone writing in the field of law and technology.

Calo wrote this article because he believes that “a working knowledge of STS is critical to law and technology scholarship.” He argues that the core insights of STS will help scholars avoid “the pitfalls and errors that attend technology as social fact.” Calo’s contribution has three parts. The first is a brief STS crash course for the uninitiated. If you are unfamiliar with STS and regularly read this journal, stop reading this and check out Calo’s highly efficient summary of STS in Part One (it’s only seven pages!). I imagine the work of Langdon Winner, Bruno Latour, Sheila Jasanoff, and many other STS scholars will resonate with you as it did for me when I first encountered them. This introduction to the field is both informative and enjoyable because of Calo’s palpable enthusiasm for STS. (As I wrote this, I laughed at how I’m writing a review about how much I like Calo’s article, which is about how much he likes STS. It’s like I’m writing a Jot about a Jot. A meta-Jot.)

The second part of this article is an exploration of STS insights that make up the “road not taken by law and technology.” Calo highlights what could have been gained if legal scholars had more explicitly embraced STS earlier, including more nuanced metaphors, more case studies, and fewer redundancies. Calo cites two downsides that arise from law and technology overlooking STS. First, failing to engage deeply with STS deprives law and tech of STS’s wisdom and nuance. Second, law and tech scholarship often falls into some of the very traps STS grew up to avoid, such as a strong technological determinism: the idea that technology will shape behavior in one single way and no other.

In the article’s third part, Calo highlights the limitations of STS scholarship for law and technology scholars. First, STS is relatively uncomfortable with normativity, compared with the law’s embrace of it. Additionally, STS sometimes struggles to translate concepts and observations in ways that can influence levers of power. Calo notes that STS scholarship sometimes gets lost in its own complexity, a critique leveled by some STS scholars themselves. But as Julie Cohen has noted, law is relentlessly pragmatic in its identification of and attempts to solve real-world problems. While other disciplines might hesitate to offer up messy and even internally conflicting prescriptions, legal scholars do it for a living when inaction means injustice. Calo highlights the dangers of law and tech avoiding normativity and pragmatism, including getting stuck in a “constant state of watchful paralysis.” This happens when legal actors wait so long to fully understand the social impacts of technologies that, by the time clarity finally arrives, these tools and systems are too entrenched to resist. In STS scholarship this is referred to as the “Collingridge dilemma,” and it gives more nuance to what I’ve heard some law and tech scholars describe as the “avocado ripeness” problem. (Not yet…not yet…not yet…too late.)

Thus, Calo’s article ends up being part STS primer and part STS implementation guide for law and technology scholars. According to Calo, you shouldn’t simply chuck a bunch of STS into every corner of cyberlaw, because “importing STS wholesale…has the potential to undermine what is unique about the [law and technology] field.” In the final part, Calo recommends that legal scholars be mindful of how technologies have value-laden affordances and social forces behind them while holding firm to legal scholarship’s normativity and pragmatism. I appreciated Calo’s suggestion that one major strength of law and technology scholarship is making ideas and concepts concrete enough for people to act on.

I like this article because it is clear, concise, and even witty. (It wouldn’t be a Calo article without puns, and he even managed to work one into the title.) And I like this article lots because of its meditation on the virtues, vices, and proper role of “law and technology” as a field of scholarship. This is one of the main aspects of Calo’s forthcoming book. My only complaint is that I wish the article had done more to preview his larger project on law and technology.

If we are going to take a serious look at the relationship between rules and artifacts, we must have a good sense of both. This article uses STS to show where the field of law and technology can improve, and what it does best.

Cite as: Woodrow Hartzog, What STS Can (and Can’t) Do for Law and Technology, JOTWELL (May 19, 2023) (reviewing Ryan Calo, The Scale and the Reactor (2022), available at SSRN), https://cyber.jotwell.com/what-sts-can-and-cant-do-for-law-and-technology/.

Trust, Trustworthiness, and Misinformation Shared by the Government

Janet Freilich, Government Misinformation Platforms, __ U. Pa. L. Rev. __ (forthcoming 2023), draft available at SSRN (Feb. 27, 2023).

Where does trusted information come from? In a world of misinformation, where everyone is skeptical of everything, at least we can rely on expert, authoritative government agencies like the Environmental Protection Agency, the Centers for Disease Control, the Patent Office, and the Food and Drug Administration, right? Right!?

Not so fast, Professor Janet Freilich persuasively but depressingly argues in the smart, eye-opening, “why didn’t I think of that” Government Misinformation Platforms. Freilich’s central point is fairly straightforward (although the article is rich with nuance and detail): We usually laud the government’s sharing of information because government-provided information is usually pretty trustworthy and useful for all kinds of things, and because transparency is usually a good goal. There’s a whole law (the Freedom of Information Act) about getting government to share information on request, supplemented by various transparency efforts. But there are also many government-run platforms that share information that the government itself didn’t produce—and in fact, that share unvetted, frequently incorrect, sometimes deliberately misleading information. When people see information on these platforms and think “government information = trustworthy,” then the problems start.

But is this really a big problem? Isn’t it just a couple of examples? In Part II, Freilich convincingly dismantles that resistant questioning. She recounts a disheartening parade. Let’s say people want to know who’s releasing toxins into the environment. The first obvious step would be to visit an EPA site to find out…but the data are compiled by companies and unvetted. What about a government-run list of ongoing clinical trials? Not vetted by NIH or FDA! Patents are examined, so they must surely be correct, at least. Nope, pronounces Freilich, relying on some of her terrific earlier work showing that patents are full of fictional experiments (with results laid out!) that the patentee never actually conducted. Maybe the most prominent example is the Vaccine Adverse Event Reporting System (VAERS), run by CDC. It lists thousands of people who died after getting the COVID-19 vaccine. You guessed it—those reports are self-uploaded, unvetted, and have absolutely nothing even pretending to demonstrate causation. But there the data are, on a CDC website.

This information both matters and misleads. Freilich persuasively shows that people do rely on information on these government-run platforms, and at least some treat it as authoritative. Scientists read patents, even when the contents aren’t accurate or are based on totally fictitious experiments (Did you know that the difference between an experiment that happened in a patent versus one that didn’t is whether it’s described in the past or present tense? Lots of patent-reading scientists don’t!). People rely on clinical trial listings as some sort of imprimatur. And VAERS data are trumpeted on news sites, despite big disclaimers on the website about the unreliability of the information.

So that’s one big problematic consequence: People believe things that are wrong because they see them on government websites and mistakenly think they’re government vetted.

The opposite problem also occurs: People start to mistrust the government because it’s sharing bad information. If there’s garbage on CDC and FDA and EPA and PTO and NIH websites, how can people be sure that those agencies are worthy of trust—or at least that the things on their websites are worth trusting? That decrease in trust is awfully problematic for those agencies, especially at a time when trust in government is already declining.

On a broader level, Freilich exposes the fascinating, troublesome, and unstable gap between “trusted” and “trustworthy.” It’s a space where con artists live, one that research hospitals have struggled with in the bioethics space, one the government seems to have wandered unwittingly into—and one the government needs to exit expeditiously.

The problem of government misinformation, alas, is easier limned than solved. Freilich presents a menu of options—including increased disclaimers, hurdles to posting information, correcting incorrect information, and more—but they’re all partial palliatives limited by capacity, will, or law. There’s no silver bullet here.

In a sense, the complex tangle of partial potential solutions is unsurprising. This paper exemplifies a really fun genre of legal scholarship, what we might call the “Hey, this problem is actually widespread” paper. Freilich has deep expertise in the foibles of the patent system, and some of that work has focused on how patents aren’t so reliable, even though one might reasonably think they are a high-quality source of technical information (that’s part of the point of the patent system, after all). There’s the aforementioned issue of fictitious experiments. Even worse, when patents are based on experiments that are so wrong the associated scientific papers are actually retracted, the patent system seems…unconcerned. (Not great!) Government Misinformation Platforms steps back to show that this information quality problem is disturbingly widespread across many contexts. But while there might be at least quasi-straightforward solutions in the limited context of patent examination and publication, the nuances of how those solutions work, or don’t, change quite a bit from context to context. Freilich deftly and clearly recognizes this complexity, but it’s an ongoing challenge.

A particularly fun thing about the paper is that it lends itself to exploration and further work both conceptual and applied. On a broad theoretical level, how should the government perform its weirdly mixed role in developing, promulgating, aggregating, and sharing information going forward? Where’s the right balance between easy, quick access and maintaining trust and accuracy? Is information-provision trustworthiness distinct from other-stuff trustworthiness, or are they inextricably intertwined? And on the nitty-gritty practical level, after Freilich has unearthed so many spheres of government-enabled misinformation, what’s the right solution for each? Should EPA treat misinformation differently than FDA? CDC versus NIH? How might one practically taxonomize them and link effective interventions to contextual cues? There’s so much to be done! Freilich has opened a new and tremendously interesting door in how we think about information and the government; I look forward to seeing what grows on the other side.

Cite as: Nicholson Price, Trust, Trustworthiness, and Misinformation Shared by the Government, JOTWELL (April 19, 2023) (reviewing Janet Freilich, Government Misinformation Platforms, __ U. Pa. L. Rev. __ (forthcoming 2023), draft available at SSRN (Feb. 27, 2023)), https://cyber.jotwell.com/trust-trustworthiness-and-misinformation-shared-by-the-government/.

The Dawn of Influencer Law

Catalina Goanta & Sofia Ranchordás, The Regulation of Social Media Influencers (2020).

Ever since Judge Easterbrook famously declared Cyberlaw to be “The Law of the Horse”, and despite Professor Lessig’s excellent rebuttal, there has been a reluctance to declare new areas of legal study spurred by new technologies. Easterbrook claimed that we are in danger of descending into narrower legal sub-categories when most behaviour in what was known then as cyberspace was “easy to classify under current property principles”. At times this message has resonated with legal audiences, and we have largely not seen a push towards the creation of new legal categories. It would be difficult to say that there is such a thing as blockchain law, or artificial intelligence law, to name just two subjects close to this reviewer’s heart.

Nevertheless, after reading the excellent collection The Regulation of Social Media Influencers, edited by Catalina Goanta and Sofia Ranchordás, it is possible to envision a world in which we may have a new legal sub-category: Influencer Law. Importantly, the editors never claim the existence of a new branch of legal study, but the richness of the subject on display leads me to think of this relatively new area of research as its own thing. This is a rich subject that covers free speech, labor, consumer protection, advertising, intellectual property, and contract law, just to name a few. While these separate subjects could be analysed in their own separate niches, there is an argument to be made for bringing them all together as a separate area of study, as they often interact with one another in ways that encourage a single thematic analysis. In general, edited books can be the poor relation of scholarly publications; in European academia, for example, these books are the academic outputs that are valued the least. In this case, however, there is not a weak chapter in this collection, and there is a very clear structure running throughout the book, with each section clearly delineated and well-executed.

The showpiece of the book is undoubtedly the introduction to the subject by Catalina Goanta and Sofia Ranchordás, who set out to define what an influencer is and to describe the legal status and regulatory pitfalls that influencers face. It is, of course, difficult to delineate the subject and define the concept of “influencer” in a way that can be used for legal analysis. The word influencer itself comes from an era of celebrities, and it is intrinsically linked with advertising. However, recently there has been a rise in social media influencers, responding to the shift in audiences from traditional media to social networks. Social media in this context is understood as a platform that allows users to upload and share content to an audience. The social media influencer is not a celebrity in the traditional sense, but they may carry an incredible amount of clout in their niche area of interest. The challenge from a regulatory perspective is that these influencers often operate in non-professional settings, rely on monetisation schemes that are controlled entirely by tech platforms, and do not always make their commercial endorsements transparent. This is a problem because it makes deception easier, but it also gives the impression that an influencer is endorsing a product because they like it, rather than because showcasing it is part of a commercial deal. Influencers have considerable power to shape trends with their audiences, and as they say, with great power comes great responsibility.

So, what is an influencer? There are a few common elements. Social media influencers operate in “word-of-mouth” advertising environments where audience trust is paramount. Influencers exert their, well, influence in online communities through the constant production of content and through engagement with their peers. The authors identify four defining elements of an influencer: 1) the industry in which an influencer operates (e.g. beauty, tech, gaming, pets, kids, etc.); 2) the source of influence (some are already famous as actors or sports personalities, while others are considered influencers by virtue of their number of followers on social media); 3) the reach of influence, in the shape of detailed audience analytics; and 4) the legal status, namely whether an influencer operates in a corporate environment, freelances, or even acts as a consumer in more informal settings.

After providing a very thorough definition of an influencer, the authors engage in a discussion of the legal issues that surround the new influencer industry. Their central concern is with advertising. Advertising is a regulated activity in many countries, whether through self-regulation or, as is the case in most of Europe, through advertising standards agencies, so the role of the influencer in shaping opinions, particularly those of young audiences, has received the greatest scrutiny. The authors go through several examples of legal concerns that arise from the blurring of consumer/professional boundaries, particularly when it comes to endorsements and the disclosure of whether an influencer may be praising a product in exchange for payment, goods, or services.

The authors end the chapter by analysing several other legal areas of concern. With the increased role of influencers as figures worthy of trust in their communities, the main question is one of the possible liabilities they may incur when promoting dubious products and events, such as the ill-fated Fyre Festival, defective products, or even disreputable schemes such as multi-level marketing. The book was written in 2019 and published in 2020, so the authors do not cover the recent wave of influencers promoting failed cryptocurrency schemes. Many of these influencers often fall outside of existing regulation, so it will be useful to have an area of the law dedicated solely to analysing the reach and effect of these actors and, most importantly, prepared to understand the environment in which they move. This could well be a future line of research for the nascent field of Influencer Law.

The edited volume as a whole is filled with other excellent chapters. These include a notable discussion of child labor in the influencer world, by Valerie Verdoodt, Mark Leiser, and Simone van der Hof. I also enjoyed and learned a lot from the chapter on the potential regulation of the influencer market using mandated disclosures, by Rossana Ducato.

I highly recommend this edited book. I enjoyed reading it cover to cover, which is rare in works of this nature. While Influencer Law may not yet be its own course or field, this book at least makes a compelling argument for the existence of a vibrant area of research.

Cite as: Andres Guadamuz, The Dawn of Influencer Law, JOTWELL (March 9, 2023) (reviewing Catalina Goanta & Sofia Ranchordás, The Regulation of Social Media Influencers (2020)), https://cyber.jotwell.com/the-dawn-of-influencer-law/.

Surveilling Truckers and the Future of the Workplace

Karen Levy’s book Data Driven, an incisive and accessible sociolegal study of workplace surveillance in the trucking industry, begins with a tale of superheroes. These superheroes are machines from a far-off world dedicated to saving humanity from other machines bent on our destruction. (Think “The Transformers.”) The problem is: Our would-be saviors can’t move. They’ve worked too hard for too long, saving humanity from all sorts of harm, and now, by law and by design, they must rest.

Levy, a professor in Cornell University’s Department of Information Science, tells this story, drawn directly from the pages of a trucking industry periodical, to introduce us to the electronic logging device, or ELD. ELDs are now integrated by law into every commercial truck driving across state lines. They are designed to force compliance with federal “hours-of-service” regulations, which limit the number of hours truckers can drive before taking rest breaks. Like our would-be robot saviors, trucks constrained by ELDs cannot move when their drivers have reached their hours limits. That isn’t necessarily so bad; trucker fatigue is dangerous to truckers and everyone else on the road. But, as Levy explains, ELDs are a lot more insidious.

ELDs treat the symptom, not the disease. If the symptom is fatigue, the disease is the trucking industry’s perverse economic incentives. Truckers are paid by the miles they drive, and they are paid nothing during the many hours of fueling, loading, unloading, and bathroom and meal breaks necessary to doing their jobs. (Pp. 36-48.) Plus, ELDs do more than trigger federal rest mandates. They reveal to employers when, where, and how fast a trucker is driving, when and for how long they have been resting, and when and where truckers are doing something they shouldn’t. And that is Data Driven’s central story. ELDs were sold as a new technology that would make trucking (and driving) safer. But they also enable extensive workplace surveillance by trucking companies and by the state, give management weapons to manipulate their employees, and perpetuate extractive capitalism.

Data Driven, like much of my own work, explores the gap between the law on the books—ELDs in this case—and the law on the ground, including the way the law is practiced, understood, experienced, and resisted in the real world. The book is based on years of extensive field research. Levy interviewed truckers at truck stops, read their literature, met with regulators and management, sat in on meetings, and engaged with labor organizers, all while upholding ethical standards as a sociolegal researcher. Based on this work, Levy finds that ELDs are legal creations, economic tools, and cultural objects all at the same time. They help regulators enforce the law to the letter. They disrupt long-standing norms among truckers, particularly about their knowledge of the road and their independence. And they are part of a larger system of surveillance that helps firms force employee alignment with corporate goals. (P. 55.)

A particularly vivid exchange explores the last point. (P. 61.) One afternoon, starting at 12:57 PM and continuing for the next 90 minutes (sometimes at one-minute intervals!), a trucker received several messages from management: “Are you headed to delivery?” “Please call.” “What is your ETA to delivery?” “Need to start rolling.” “Why have you not called me back?” The trucker was sleeping; management didn’t care. “Why aren’t you rolling? You have hours ….” Having hours refers to additional time before legally mandated rest. Seven minutes later came the next message insisting the trucker get back on the road. Seven minutes after that came another one. The trucker responded: “Bad storm. Can’t roll now.” Three minutes later, management chimed in: “Weather Channel is showing small rain shower in your area, 1-2 inches of rain and 10 mph winds ???”

Workers, many of whom chose this grueling job specifically for its independence, now have employers looking over their shoulders. But disrupting trucking’s cultural norms is the tip of this iceberg. Having hours is a quantified metric, a decontextualized number that presumes that the only barrier to driving is the federal rest mandate. Truckers also get tired, have to use the restroom, and need to eat. “Having hours” elides all of that. Then comes the Weather Channel. Rain “in your area” says little about rain “where I am right now.” Nor does it speak to driving conditions; even light rain “in your area” can mean slippery conditions, landslide risks, and other dangers from hours of heavy rain.

ELDs, just like algorithms that process large data sets to predict people’s behavior, privilege strict and inflexible data analysis over holistic assessment and discretion. They presume that numbers tell the whole story, or at least enough of a story to make policy. As Levy shows, numbers miss the realities of trucking. (Pp. 50-51.) If you were told you had around 10 hours to complete a journey, you’d drive much more safely than if you were told you must arrive in exactly 10 hours or else. Why? The former gives you flexibility; you can drive faster when it’s safe and slower when you need to; you can factor in bathroom, meal, and rest breaks without stressing that you’re going to be a few minutes late. The latter incentivizes recklessness. The ELD turns trucking into a constant barrage of threats.

Data Driven concludes by speaking to the larger message of the ELD mandate. Faced with an epidemic of dangerous trucker fatigue, policymakers turned to surveillance and technology design rather than to addressing the underlying economic incentives that push truckers to drive while tired in the first place. (P. 153.) A richer, worker-protective solution would have been to pay truckers for their work, not for their miles driven. This would include what truckers call “detention time,” or the hours spent loading and unloading, during which they are at the mercy of dock workers and other laborers who operate on their own schedules and have radically different payment structures and incentives. But no, the neoliberal policymakers of late capitalism chose surveillance. They chose a weapon of managerial control rather than a structural change. In so doing, the trucking industry colluded with policymakers to socially construct the ELD as a tool of social control, and they made the roads more dangerous as a result.

In the end, Levy is right to warn us that truckers are the canaries in the coal mine of workplace surveillance. (P. 9.) Employers may have always kept watchful eyes on their employees, but things are worse now. Remote workplaces like trucks or home offices are no longer immune from tracking. The data—collected from inputs like wearables and social media—are more diverse. The analytics, now driven by complex algorithms, are more invasive. Modern workplace surveillance extends even beyond systems administrators knowing when you’re checking Instagram. Our bosses are watching everything; Levy’s outstanding Data Driven opens our eyes before it’s too late.


Cite as: Ari Waldman, Surveilling Truckers and the Future of the Workplace, JOTWELL (February 7, 2023) (reviewing Karen Levy, Data Driven: Truckers, Technology, and the New Workplace Surveillance (2022)), https://cyber.jotwell.com/surveilling-truckers-and-the-future-of-the-workplace/.

There’s A Great Big Beautiful Tomorrow (For Pittsburgh)

Michael J. Madison, The Kind of Solution a Smart City Is: Knowledge Commons and Postindustrial Pittsburgh, in Governing Smart Cities as Knowledge Commons (forthcoming 2023).

“Retrofuturism” in art and literature is a look back at the (sometimes recent) past and how the stories of the future were told. The retrofuturist aesthetic can be found in present-day theme parks like Walt Disney World’s Tomorrowland and EPCOT and in the concept of steampunk. Through retrofuturism, we try to understand what was once hoped for, often as a way of understanding success or failure and of critiquing present-day efforts and priorities.

Retrofuturist impulses are particularly important in technology law scholarship. Critical appraisals of ‘smart city’ and urban innovation projects and initiatives examine how people joined the digital with the material to imagine a better world. You can’t tell the story of the smart city without at least engaging with the tales of the city. And so, in a very real and immediate way, the literature of geography, planning, and, yes, physical architecture is a key resource for the legal scholar. In The Kind of Solution a Smart City Is: Knowledge Commons and Postindustrial Pittsburgh, Michael Madison gives us a compelling retrofuturist account of Pittsburgh, the smart city. Madison’s account of a range of projects in Pittsburgh (including those of the 21st century) tells a story that is both universal and particular, tapping into the need to understand the roads taken and not taken, and what was imagined or foreseen in the recent and not so recent past.

The Governing Knowledge Commons framework, an approach to which Madison himself has made founding and abiding contributions, allows for the study of how intellectual and cultural resources (e.g. information, science, and software), as distinct from natural resources, are created and shared, and in turn governed through, by, and with communities. Applying this framework, Madison digs into past ideas and initiatives meant to improve (or “fix”) this mid-sized Pennsylvanian city. Imagined overlapping and data-driven futures, like the pioneering Pittsburgh Survey of a century ago, the 3RC (Three Rivers Connect) civic computing initiative of 1999, and the city’s 2016 run to the final of the USDOT Smart City Challenge (which it did not win), offer rich resources for anyone seeking to understand how cities attempt to anticipate and evolve in the face of disparate and dynamic challenges.

Madison also tells the stories of the conditions for urban reform and renewal in Pittsburgh, contributing to the overall argument that context, geography and history all matter. Physical infrastructure is old. Social and political infrastructures are tied up in long-standing institutions and networks. Pittsburgh’s population has been declining. The geography of the city, in its ups, downs, rivers, and bridges, is irregular. The city’s many neighbourhoods are disconnected from political power. The City Council is not the only game in town, due to the presence of regional and other government structures. Where once there was steel, now there are universities and hospitals (“eds and meds”) and, increasingly, an innovation economy (“tech-centred development”). Yet until recently, data systems weren’t often used in municipal government.

Readers will recognize many of these facets in “post-industrial” cities around the world. Pittsburgh is one of a number of cities where ‘economic renewal efforts’ dominate cultural and political discourse, decades after the decline of a major production or extractive industry. But, as Madison makes clear, Pittsburgh is unique in how its infrastructure, population decline, geography, and changing industry are interconnected with money, power, and people, meaning that the factors and actors affecting economic renewal and the digital transition require particularly close attention.

Some political institutions and funders nowadays emphasise the “knowledge square” (e.g., as the European Commission now puts it, the interconnection between education, research, innovation, and service to society). Though Madison does not put it quite this way, his careful attention to the roles of philanthropic organisations (a distinctive part of the Pittsburgh civic story) and universities illuminates another dimension of Pittsburgh’s reform and development trajectory. He highlights the different roles played by the University of Pittsburgh and Carnegie Mellon University and their projects, including Carnegie Mellon’s smart cities institute Metro21. Madison also traces the distinctive character of a number of major interventions, such as the Western Pennsylvania Regional Data Center, and the individuals who have led and championed them.

This article is not (only) a celebration of a great city, though. Madison highlights the difference between the problems the city tries to solve and the biggest problems that need to be solved. He retains an appropriate scepticism about extreme smart city boosterism, calling for greater attention to evolution over creation and to the enduring role of physical infrastructure and its limits.

The dream of a better tomorrow is at the core of urban (re)imagination. As Pittsburgh moves towards being a smart city, Madison draws a contrast between the city’s “older smoky self” and its aspirations towards becoming an “equitable and forward-looking ‘green’ community”. Yet Madison has also shown us that the smoke never fully clears. Even the ‘recent’ history of what was tried and why it did or didn’t work in the late 1990s is, in his account, an essential part of a proper understanding of the choices now available to this particular city. Other cities will face different physical and political factors, but Madison is rightly calling on us to map and understand those local conditions; common questions, but different answers, are what we can hope to find.

Cite as: Daithí Mac Síthigh, There’s A Great Big Beautiful Tomorrow (For Pittsburgh), JOTWELL (January 4, 2023) (reviewing Michael J. Madison, The Kind of Solution a Smart City Is: Knowledge Commons and Postindustrial Pittsburgh, in Governing Smart Cities as Knowledge Commons (forthcoming 2023)), https://cyber.jotwell.com/theres-a-great-big-beautiful-tomorrow-for-pittsburgh/.

Novel Language Models as a Technological Solution to the No-Reading Problem

Yonathan A. Arbel & Samuel Becher, Contracts in the Age of Smart Readers, 90 Geo. Wash. L. Rev. 83 (2022).

Consumers accessing goods and services online are inundated with numerous disclosures, privacy policies, end user license agreements and terms and conditions. In connection with the so-called “duty to read,” consumers have historically been presumed and expected to fully review contract terms as part of the contract-making process. Yet, as several scholars have observed, consumers do not appear to consistently review contract terms: what some have called the “no-reading problem.” The failure of consumers to review and understand contract provisions before manifesting assent may incentivize companies to offer one-sided contracts with terms that are primarily beneficial to businesses.

In their new article, Contracts in the Age of Smart Readers, Professors Yonathan A. Arbel and Samuel Becher make a noteworthy contribution to scholarship in the technology and contract law fields by highlighting how nascent technological advancements in language models associated with artificial intelligence can disrupt the status quo. Their powerful article adds to an existing body of scholarship exploring the important connection between technological developments and what the authors describe as one of the underlying justifications for legal intervention in consumer transactions: the “no-reading problem.”

Arbel and Becher tout various possible benefits of novel language models, which they label as “smart readers,” by offering several examples of this technology in action. They observe that, armed with a smart reader app, a consumer could in theory use their smartphone to scan and receive a plain and concise explanation of boilerplate provisions in a company’s terms and conditions. Contractual text could be personalized based on the needs of each reader by factoring in cognitive, linguistic, and cultural patterns. A consumer using a smart reader could request concrete examples describing the possible implications of boilerplate clauses.

Arbel and Becher note that smart readers have the capacity to compare the terms of a company’s privacy policy with those offered by other businesses and generate an industry score that the consumer could then use to comparison shop. The authors convincingly argue that, if widely adopted, this technology could potentially enhance consumer understanding of contract terms and privacy policies and the risks associated with the same, as well as increase consumer awareness of market alternatives. They contend that smart readers may facilitate “term competition” (P. 91) in certain markets, even if the technology is not widely adopted.

After persuasively describing the potential advantages of smart readers, Arbel and Becher highlight the technology’s possible risks. These concerns include the possibility of courts over-relying on consumer access to such apps, which may negatively impact outcomes for consumers. Adversarial attacks, which the authors describe as “a method of exploiting the statistical nature of machine learning models” (P. 121), may also make contractual explanations and industry scores less accurate and reliable. Arbel and Becher note that in some cases smart readers could oversimplify boilerplate terms, which could decrease consumer understanding. Lastly, businesses could offer better terms to those consumers who they believe will use smart readers and comparison shop, and less favorable terms to those who do not, thereby exacerbating discrimination concerns.

Arbel and Becher posit that legal interventions in favor of consumers are often “couched in the no-reading problem.” (P. 134.) Smart readers, however, offer a different way of tackling the no-reading issue. The authors suggest that the no-reading problem is perhaps a technological issue that smart readers can help to solve, rather than an ethical one deserving of legal intervention. They contend that while smart readers do not address various other justifications for pro-consumer legal intervention, such as other forms of market failure, smart readers may soon render the no-reading justification obsolete. Arbel and Becher’s notable and insightful description of smart readers’ growing potential should be of particular interest to technology law, contract law, and consumer law scholars, as well as others who are interested in learning more about the ways in which technological advancements may impact core justifications for consumer protection intervention.


Cite as: Stacy-Ann Elvy, Novel Language Models as a Technological Solution to the No-Reading Problem, JOTWELL (November 28, 2022) (reviewing Yonathan A. Arbel & Samuel Becher, Contracts in the Age of Smart Readers, 90 Geo. Wash. L. Rev. 83 (2022)), https://cyber.jotwell.com/novel-language-models-as-a-technological-solution-to-the-no-reading-problem/.

Why Bad Privacy Happens to Good People

In the aftermath of the Cambridge Analytica fiasco, Facebook was pummeled by legislators, regulators, and advocates around the globe for its poor privacy practices, stemming from the way the company seemed to prioritize growth and profit over all else. As one small part of a multipronged defense, the company hired four prominent privacy advocates, former fierce critics of the company. The early evidence suggests that these four—and other likeminded Facebook employees—haven’t had much success reorienting the company. As one data point, two years after they were hired, Frances Haugen blew the whistle on how Facebook had not done enough to weed out misinformation, combat threats to democracy, and protect vulnerable teens, again due to a relentless pursuit of growth. To be fair, the Haugen story isn’t only or primarily a privacy fiasco, but it belies the idea that good people in positions of authority have fixed the company from within.

This isn’t just a Facebook story. Every large technology company employs people who profess to be privacy advocates in positions of authority, yet their collective efforts do not seem to have done much to alter the troubling trajectory of their employers’ products and services. Ari Waldman, the deeply interdisciplinary privacy law scholar from Northeastern University, has written a vital and important book investigating why bad privacy outcomes occur at firms that employ well-meaning and well-trained privacy professionals. Drawn from dozens of interviews with software engineers and privacy professionals at many technology companies, Waldman presents a compelling and distressing picture, revealing the way companies constrain the influence of privacy-focused employees, repurpose their work toward serving data-extractive goals, and eventually redefine privacy law itself in narrow, compliance-focused terms.

A trained sociologist and legal scholar, Waldman conducted 125 interviews over four years and insinuated himself into product design meetings, industry conferences, and company breakrooms, producing a rigorous and detailed account of the way privacy is subverted and denied inside these companies. The work builds on and pays due credit to the groundbreaking qualitative work of Deirdre Mulligan and Ken Bamberger, the famous “privacy on the ground” study from a decade ago, even as Waldman offers a respectful corrective, pushing back on many of the sunnier conclusions of the earlier work.

Waldman’s conclusions are layered and sophisticated and hard to do justice to in a short review. Technology companies deploy a “coercive bureaucracy”: multiple strategies designed to limit privacy reforms and to disempower privacy professionals. One key mechanism of the coercive bureaucracy is “managerialism,” a term Waldman borrows from Julie Cohen (who in turn borrowed from Judith Resnik and others), meaning the cynical transmutation of laws like the GDPR and CCPA from obligations designed to protect consumers into narrow compliance measures focused on limiting liability and deflecting regulator attention, in some cases essentially inverting these laws to require nothing that might impede the company’s growth and revenue goals.

Managerialism is but one tool of the coercive bureaucracy, and Waldman identifies too many others to list comprehensively, but to highlight a few: privacy gets redefined as being about giving users control over their personal information. (Chapter 2 is an amazing primer on the vast literature making this argument.) Privacy gets translated into narrow, codeable targets, such as finding new places to apply encryption. Privacy is what you outsource to growing armies of GDPR and CCPA consultants.

Although Waldman has written a book for scholars, it will also prove useful to privacy professionals who might recognize the disconnect between the hard work they are doing and the poor privacy outcomes their companies are producing. Chapters 5 and 6 read like how-to guides for stuck privacy professionals, building from the micro to the macro. At the individual level, Waldman surveys the subtle, small “traps” that companies use to constrain the influence of their workers, such as the “expertise trap,” which siloes people into narrow lanes of expertise, or the “access trap,” the belief that advocates should choose their battles rather than complain about every privacy transgression lest they be cut out of the decisionmaking loop. Waldman’s book will help those living inside a coercive bureaucracy spot, and maybe resist, the mechanisms constraining their work.

Ultimately, Waldman does not believe that individual awareness and resistance will be enough. Chapter 6 is a broad call to action, if not revolution, to recruit privacy professionals into a new movement, one that might serve as a “counterweight to corporate power,” the chapter’s oft-repeated mantra. He outlines fixes for privacy discourse, privacy law, and privacy organizing, to help us find new ways to break coercive bureaucracies. He makes several explicit calls to the labor movement, at one point calling for the formation of a new union of privacy workers.

There is so much I like (lots!) about this book. It provides deep, rich, and rigorously gathered empirical data about the forces that keep privacy at bay inside technology companies. It synthesizes these observations into compelling explorations of the mechanisms at play. It engages deeply and efficiently with multiple vast literatures, making it a readable and concise recommendation for newcomers to the field. I have recommended Chapter 2 to anybody still in thrall to the consent-and-control model of privacy law; Chapter 3 to the staff working for state regulators drafting privacy rules; and the entire book to those trying to operationalize Julie Cohen’s theories. It offers multiple concrete prescriptions on how we might do better, ranging from the narrowly practical to the audaciously ambitious. It does all of this in crystal clear prose, studded with quotes and conversations from the empirical work, and suffused throughout with the considerable humanity of the author. It’s a welcome and rightful new inductee into the canon of privacy law, a must-read for students, scholars, policymakers, and privacy professionals.

Cite as: Paul Ohm, Why Bad Privacy Happens to Good People, JOTWELL (November 2, 2022) (reviewing Ari Ezra Waldman, Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (2021)), https://cyber.jotwell.com/why-bad-privacy-happens-to-good-people/.

The Argument for Not Closing Accountability Gaps

John Danaher, Tragic Choices and the Virtue of Techno-Responsibility Gaps, 35 Phil & Tech 26 (2022).

I always love scholarship that forces me to pause and question my baseline assumptions. And so—as someone who has written of the need to close accountability gaps associated with malicious cyberoperations, IoT devices, and autonomous weapon systems—I was delighted to read John Danaher’s Tragic Choices and the Virtue of Techno-Responsibility Gaps. In this work, Danaher challenges everyone who has ever argued that new technologies problematically undermine traditional accountability structures by quietly observing that these new gaps are…maybe sometimes a good thing?

While Danaher tends to focus more on moral responsibility than legal liability, if you are a techlaw scholar thinking about accountability gaps in any context, add this to your reading list. Danaher writes in a relaxed and engaging style, includes a fantastic literature review of non-legal texts on accountability gaps, and explores a counterintuitive argument—all in a piece that clocks in at a svelte 22 pages of text. (Would that I could accomplish so much, so smoothly, in so few words!)

Danaher defines a “Techno-Responsibility Gap” as follows: “As machines grow in their autonomous power (i.e. their ability to do things independently of human control or direction), they are likely to be causally responsible for positive and negative outcomes in the world. However, due to their properties, these machines cannot, or will not, be morally or legally responsible for these outcomes. This gives rise to a potential responsibility gap: where once it may have been possible to attribute these outcomes to a responsible agent, it no longer will be.” Danaher then distinguishes the various forward- and backward-looking forms techno-responsibility gaps might take. There are (1) accountability gaps, which exist when there’s no one to provide a public account for the harm; (2) culpability gaps, which exist when there’s no one to take the blame; (3) compensation gaps, which exist when there’s no one to pay for the harm; (4) obligation gaps, which exist when there’s no one who ensures the harm is avoided; and (5) virtue gaps, which exist when no one takes responsibility for the harmful acts. Danaher also notes the distinction between positive responsibility (“Great job there!”) and negative responsibility (“Why didn’t you . . .?!”).

Danaher then summarizes familiar proposed means of eliminating these gaps, most of which boil down to justifications for ascribing accountability to a prescribed human or non-human entity. He concludes that, for all of the disagreement around how best to address them, “most contributors to the techno-responsibility gap debate tend to agree on one thing: the creation of techno-responsibility gaps is a problem.” Why? Because responsibility is always a good thing. Right? Right?

Maybe not! To set up his argument for why we might sometimes want to prioritize other goals over ensuring accountability, Danaher starts with the problem of tragic choices. Human decision-makers often confront questions where moral considerations simultaneously weigh in favor of different answers and it is difficult or even impossible to reach a morally comfortable conclusion. We all face these choices in our daily lives. (Do I give my discretionary funds to this or that charity?) But they become policy questions when we need to determine how best to allocate scarce resources (Should hospitals privilege this or that type of patient when deciding who receives a needed ventilator?) or weigh costs to X against costs to Y (How to balance a right to speech against a right not to be threatened?).

When confronted with these tragic choices, we—as individuals, as institutions, or as societies—may handle the moral difficulty of reaching a conclusion in various ways. First, we might delude ourselves into believing it’s actually an easy question (“illusionism”). This can manifest in ignoring costs, compartmentalizing them, or rationalizing them away. Second, we might delegate the choice to another (“delegation”). We do this when we ask waitstaff what we should order, look to a panel of judges to decide the scope of a law, or flip a coin to determine our next course of action. Third, we might make a decision and bear the psychological costs ourselves (“responsibilization”).

One of Danaher’s main points is that none of these responses will always be better or worse than the others. Rather, in a classically lawyerly move, Danaher maintains that the preferable response will depend on the situation and context. Despite our collective bias towards responsibilization, each of these responses has distinct benefits and drawbacks.

Illusionism permits mental comfort at the expense of honesty. Delegation allows for shifting the psychological and moral costs to a (possibly more informed, capable, or impartial) substitute actor. But it also risks a concentrated group or institution bearing these costs, transference to an inept decision-maker and consequently poor outcomes, and the failure of the original actor to develop or maintain decision-making skills. Finally, responsibilization enables moral agency and all sorts of accountability—but does so at the possible cost of unjustly transforming decision-makers into scapegoats. (This point reminded me of an argument against including steering wheels in fully autonomous vehicles: the idea was that, in the event of a deadly crash, the human operator would unfairly blame themselves for not intervening despite not being able to act with the reflexes necessary to prevent the accident.)

If each response to a tech-fostered accountability gap has distinct pros and cons, there will necessarily be situations where delegation will be preferable to responsibilization. Further, Danaher argues, the possibility of delegating to an algorithm, rather than to another human, may change the balance of benefits and harms associated with these different responses, insofar as it eliminates the delegation drawback of concentrating the psychological and moral costs of tragic choices in a few individuals. To take advantage of this reduced cost on human decision-makers, Danaher concludes, we must be willing to live with some techno-responsibility gaps.

Danaher suggests that human online content moderators provide an example of when this tradeoff might be worthwhile. These decision-makers have a stressful, difficult job; they save untold numbers of platform users from having to view offensive and traumatizing content, but they do so at great psychological expense. Assuming humans and algorithms perform moderation tasks equally well, transferring content moderation decision-making power to an algorithm would minimize harm to humans. Similar arguments could be made for drone operators, police body-cam reviewers, and any other human charged with sifting through painful content to determine what can be cleared for public release.

Danaher is quick to qualify his argument. To the extent they are made, delegations should be made carefully; his analysis does not suggest that we should always delegate decisions to machine systems. And the fact that algorithmic decision-makers reduce some of the costs of delegation does not mean they eliminate other costs; there are still plenty of reasons to be wary of accountability gaps. Danaher also engages, in a wonderfully non-defensive manner, with various alternative versions and critiques of his argument. He explores a proposal to employ randomization as a low-cost form of algorithmic delegation, the concern that delegation fosters agency-laundering and liability evasion, and a query as to when we might (and might not) want to make the costs and tradeoffs inherent in tragic choices more explicit.

This thoughtful, dense, yet accessible piece invites readers to question the assumption that accountability—and, specifically, responsibilization—is always preferable to the alternatives. I will likely continue to argue for closing tech-fostered accountability gaps, but thanks to this piece, my arguments will now be far more nuanced.

Cite as: Rebecca Crootof, The Argument for Not Closing Accountability Gaps, JOTWELL (October 26, 2022) (reviewing John Danaher, Tragic Choices and the Virtue of Techno-Responsibility Gaps, 35 Phil & Tech 26 (2022)), https://cyber.jotwell.com/the-argument-for-not-closing-accountability-gaps/.

Advertising Fraud: Is There No Alternative?

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” This statement is often attributed to retail mogul John Wanamaker. In his provocative new book, Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet, Tim Hwang argues that online, Wanamaker’s statement is far too optimistic. This slim volume is packed with infuriating details about how the immense, opaque, and inescapable machinery of online advertising exists primarily to move massive amounts of money to intermediaries, many of them fraudulent or fraud-indifferent. The benefits to consumers and publishers have been minor and incidental; the harm to democracy has been severe.

Hwang argues that the current internet is built on a fundamentally flawed model of ad-supported content. He explains that there are no real checks on online advertising fraud; there are too many intermediaries for advertisers (or publishers) to police in any coherent way. Ad failure online is pervasive. A publisher may claim it displayed ads online, but consumers nevertheless may not have seen them due to poor placement (such as at the bottom of the page). Hwang notes, “[i]n 2014, Google released a report suggesting that 56.1 percent of all ads displayed on the internet are never seen by a human.” (P. 81.) Even when real humans are involved, they may not be the actual audience for ads; Hwang cites research that up to 50 percent of all click-throughs on mobile devices may be the result of accidental “fat finger” clicks. (P. 79.) He also notes fraud estimates of up to $1 out of every $3 spent on digital advertising, including ads served to devices that were “not real phones at all, or … were phones running automated scripts, unseen by any actual members of the public.” (P. 85.) (This isn’t just phones; it seems to be a pervasive problem with all digital advertising, including digital TV ads.) Even with exculpatory contracts, some of these problems have spilled over into litigation.1 But litigation will never catch up with today’s problems, even if (implausibly) it provided a full remedy for past harms.

The existing system involves too many intermediaries to provide any certainty that real people, much less the right people, are seeing the ads supposedly targeted to them, even before the ever-increasing presence of ad-blockers is taken into account. Rather than publishers being ad-supported, mostly it’s intermediaries sucking out as much as 70 percent of the revenues from an ad. Hwang’s critique of targeted advertising—it’s way more expensive than nontargeted ads and often quite inaccurate and permeated with fraud—is persuasive for anyone deciding how to allocate an ad budget.

But in the broader sense, I worry that Hwang is shouting into the void. Along with Wanamaker’s quote, I kept thinking of two other phrases as I read the book:

  1. “If something cannot go on forever, it will stop.”2
  2. “This time is different.”3

Hwang’s model of the “subprime attention crisis” draws a clear analogy to the subprime mortgage bubble, which collapsed and did huge amounts of harm—financial and otherwise—to millions of people (though largely not to the firms deemed too big to fail, or to their principals, who had profited from the bubble). Hwang foresees a potential collapse of the online advertising market, taking with it the people who are actually doing journalism and otherwise creating expression other people would like to see.

But a bubble can last a lot longer than it should when there don’t seem to be other, better alternatives, which may be the case here. Given the constant renaming and tweaking of digital business models, the infinite promise of revised algorithms, and the rise of new platforms—TikTok and its influencers, for example, don’t appear in the book—it is always possible for advertisers to hope that this time is different. Fundamentally, advertisers have budgets and want to use them, and while the occasional giant like Procter & Gamble can give up on big online advertising venues because of its enormous brick-and-mortar footprint (and its other advertising options), most advertisers can’t and won’t. Individual publications, from the New York Times to a Dutch public broadcaster, are experimenting with promising advertising-funded models that cut out more intermediaries. And yet cutting out intermediaries generally means not scaling up, putting inherent limits on those alternatives, especially for new market entrants. If those initiatives expand, they will themselves become intermediaries for other publications, with some of the same risks and pressures (though perhaps with a better ethical compass, or at least an incentive to divert more money into publishers’ pockets).

So should we give up on preventing ad fraud and the marketplace distortions it causes? Not at all. But we also shouldn’t expect that a market correction will take care of the problem. Online ad money moves in ways so fast and complicated that Dina Srinivasan has suggested regulating ad exchanges the way we regulate financial exchanges for transparency and fairness. But if, as Hwang persuasively argues, lots of the money involved is just stolen by people who don’t provide the promised services, we should be thinking not just about transparency and antidiscrimination rules but also about anti-money-laundering, know-your-customer, and anti-fraud regulations. Mark Lemley has pointed out that a lot of “fair access” rules are also “access for fraudsters/bad guys” rules, making platform regulation extremely hard to get right. Given that fraudsters and bad guys aren’t having much trouble accessing the current online advertising regime, however, Hwang’s book strengthens the case for looking much more aggressively under the hood of online advertising systems, using both advertisers’ purchasing power to require better-audited results and regulators’ power to set enforceable rules.

I would be remiss not to note the risks of regulatory intervention. Texas’s Attorney General is right now pursuing Twitter for, ostensibly, understating the number of “bot” accounts in order to make Twitter more commercially appealing. But it is obvious that he is doing so to punish a perceived political enemy and to reward a newly announced Republican billionaire supporter, Elon Musk. Nonetheless, fraud is fraud, and the risks of biased enforcement have to be balanced against the current freedom of private parties to steal from people who produce valuable things, including the news, with apparent impunity. We can’t start making the necessary decisions without understanding the scope of the problem. Hwang’s book helps with that.

  1. See DZ Reserve v. Meta Platforms, Inc., 2022 WL 912890 (N.D. Cal. Mar. 29, 2022) (certifying a class of U.S. ad buyers who allegedly overpaid for Facebook ads based on misleading statements of ads’ “Potential Reach”).
  2. Herbert Stein, A Symposium on the 40th Anniversary of the Joint Economic Committee, Hearings Before the Joint Economic Committee, Congress of the United States, Ninety-Ninth Congress, First Session; Panel Discussion: The Macroeconomics of Growth, Full Employment, and Price Stability, at 262 (Jan. 16, 1986).
  3. Carmen M. Reinhart & Kenneth S. Rogoff, This Time Is Different: A Panoramic View of Eight Centuries of Financial Crises, NBER Working Paper No. 13882 (Mar. 2008).

Cite as: Rebecca Tushnet, Advertising Fraud: Is There No Alternative?, JOTWELL (September 23, 2022) (reviewing Tim Hwang, Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet (2020)), https://cyber.jotwell.com/advertising-fraud-is-there-no-alternative/.

Privacy Depends

Solon Barocas & Karen Levy, Privacy Dependencies, 95 Wash. L. Rev. 555 (2020).

American law typically treats privacy and its associated rights as atomistic, individual, and personal—even though in many instances, that privacy is actually relational and interdependent in nature. In their seminal article The Right to Privacy, for instance, Samuel Warren and Louis Brandeis described privacy as a “right to be let alone.” Doctrines of informed consent are generally concerned with “respect[ing] individual autonomy,” even as the information disclosed or withheld by that consent may implicate the privacy of others. Similarly, consumer genetics platforms seek authorization from a single individual before processing or uploading a genetic profile, even though law enforcement now routinely searches those profiles to identify distant relatives who may have committed prior criminal acts.

In their article, Privacy Dependencies, Solon Barocas and Karen Levy move beyond the observation that privacy is relational to provide a typology of the “varied ways in which one person’s privacy is implicated by information others reveal.” They identify three broad types of privacy dependencies: those based on our social or other ties (tie-based dependencies), those drawn from our similarities to others (similarity-based dependencies), and those revealed by our differences from others (difference-based dependencies). While social norms or legal obligations may serve to discipline some of these privacy dependencies, they will be inapplicable or inapposite for many others. Barocas and Levy masterfully survey the wide range of normative values and diverse areas of law that may be affected by privacy dependencies. Taking genetic data as a case study, Barocas and Levy then demonstrate how each form of privacy dependency can arise in this context—and how each has been exploited in criminal investigations. They conclude that a greater attentiveness to privacy dependencies, and when and how they arise, can inform better policymaking and give us greater purchase on the values that privacy serves.

Barocas and Levy devote the bulk of their article to identifying and explaining each of the three forms of privacy dependencies that make up their typology, subdividing each into several subtypes. The first category of privacy dependencies, tie-based dependencies, exploits information gathered about one individual (Alice) to learn about another individual (Bob) by virtue of some relationship between them, whether known or unknown to Alice and Bob themselves. Barocas and Levy further subdivide this category into four types. A “passthrough” is a tie-based dependency in which Alice passes information about Bob on to some observer, or Alice and Bob share information through some third-party intermediary like Facebook or Gmail. A “bycatch” occurs where information about Bob is incidentally, but foreseeably, collected in the process of learning about Alice, as with police body-worn cameras. “Identification” can turn on a tie-based dependency, as where an unknown Bob can be identified due to his connection to a known Alice. Finally, “tie-justified dependencies” exploit social ties between Alice and Bob to justify expanding surveillance from Alice alone to also include Bob.

The government has exploited each of these forms of privacy dependency in national security and criminal investigations: consider the investigative use of consumer genetic data to target genetic relatives as suspects, or the National Security Agency (NSA) bulk telephony metadata program. So too have social media entities, as the Cambridge Analytica scandal at Facebook and Amazon Ring’s surveillance devices illustrate. Troublingly, for the most part, the law has not vested individuals whose privacy is affected by a tie-based dependency with protections against these kinds of privacy losses. Indeed, key Fourth Amendment doctrines encourage the government to exploit our interdependent data privacy. Moreover, social norms may be of limited utility in guarding against unwelcome exposure, particularly where the tie being exploited is involuntary or unknown to its subjects.

The second category of privacy dependencies that Barocas and Levy identify is based on similarity, in which information that Alice discloses about herself may be imputed to Bob insofar as Bob “is understood to be similar to Alice.” This form of dependency may turn on three ways in which individuals may be “similar” to others: based on “the company you keep”; on “socially salient characteristics that you share with others (e.g., gender, race, and age), but with whom you hold no explicit social ties”; or, more distantly, on “non-socially-salient” characteristics, as in behavioral advertising.

Insurance is a paradigm example of similarity-based inference at work, but these dependencies may also arise in criminal law (where bail, sentencing, and other decisions may turn in part on statistical risk assessment tools), credit scoring, advertising, and other domains. As Barocas and Levy observe, “[s]imilarity-based dependencies violate the moral intuition that people deserve to be treated as individuals and subject to individualized judgment.” And yet, “there is no way to avoid using generalizations or avoid being subject to them.” Moreover, similarity-based dependencies may be troubling both “when they subject people to coarse generalizations” and “when they allow for overly granular distinctions.” Particularly when they depend on non-socially-salient characteristics, similarity-based dependencies may fail to elicit the social solidarity that might restrain the excesses of this data inference mechanism.

Finally, difference-based dependencies arise when, by revealing some information about herself, Alice enables an observer to learn something about Bob by making herself distinguishable from him. Here, too, this dependency may occur in three ways: by “process of elimination,” in which Alice’s disclosure makes an unknown Bob’s ultimate identification more likely; by “anomaly detection,” in which Bob’s atypicality becomes apparent by comparing his data to that of many “normal” Alices; or by “adverse inference,” in which Bob’s refusal to disclose some information appears more suspect because most Alices disclose. Importantly, unlike tie-based and similarity-based dependencies, none of these forms of difference-based dependency requires a prior connection between Alice and Bob. Moreover, there is little Bob can do to protect his privacy in these cases. As Barocas and Levy observe, “any attempts he might make to do so may, perversely, make him stand out even more.” The difficulty of this kind of dependency is evident in the NSA’s approach to encrypted communications, which has treated the fact of encryption itself as a basis for retention and analysis.

For these difference-based dependencies, as Barocas and Levy put it, collectivity is “essential to privacy preservation here.” Yet collective action may be difficult to muster where individuals may be “unaware of the effects of their disclosures or acting out of requirement or self-interest.” Instead, difference-based dependencies, Barocas and Levy conclude, are best restrained by restricting mass data collection in the first instance, since difference becomes apparent only in comparison to many others.

The payoffs of Barocas and Levy’s detailed typology of privacy dependencies are several. For one thing, as they explain in a case study of privacy dependencies in genetic data, statutory protections may yield unexpected privacy dividends, where a protection adopted with one type of dependency in mind may come to protect against manipulation of another. Consider the Genetic Information Nondiscrimination Act (GINA), which, although enacted as an anti-discrimination statute, has demonstrated value as an employee-privacy statute as well. Barocas and Levy also describe myriad ways in which law enforcement has exploited privacy dependencies in the context of genetic data. In so doing, they observe, identifying the various privacy dependencies at work can “help us determine if and when we even recognize Bob as a party with a legitimate privacy claim,” “shed light on the varied normative goals that we expect privacy to serve,” and “suggest possible targets for intervention.”

Perhaps most forcefully, Barocas and Levy provide a further perspective on the inadequacy of notice-and-choice as a paradigm for privacy regulation. As they explain, “[i]f we are scarcely able to make decisions that attend to our own privacy interests, the goal of recognizing shared interests should not be to further burden our individual choices with an expectation that we take into account the interests of others.” And they conclude that “[r]ecognizing the mechanisms that create different forms of dependency does more than demonstrate the shortcomings of privacy individualism; it lays the groundwork for well-tailored policymaking and advocacy.” Ultimately, Barocas and Levy give an irrefutable accounting of the many ways in which individualism fails privacy, and their typology for organizing and understanding these failures makes better privacy law possible.

Cite as: Natalie Ram, Privacy Depends, JOTWELL (August 11, 2022) (reviewing Solon Barocas & Karen Levy, Privacy Dependencies, 95 Wash. L. Rev. 555 (2020)), https://cyber.jotwell.com/privacy-depends/.