American law typically treats privacy and its associated rights as atomistic, individual, and personal—even though in many instances, that privacy is actually relational and interdependent in nature. In their seminal article on The Right to Privacy, for instance, Samuel Warren and Louis Brandeis described privacy as a “right to be let alone.” Doctrines of informed consent are generally concerned with “respect[ing] individual autonomy,” even as the information disclosed or withheld by that consent may implicate the privacy of others. Similarly, consumer genetics platforms seek authorization from a single individual before processing or uploading a genetic profile, even though law enforcement now routinely searches those profiles to identify distant relatives who may have committed prior criminal acts.
In their article, Privacy Dependencies, Solon Barocas and Karen Levy move beyond the observation that privacy is relational to provide a typology of the “varied ways in which one person’s privacy is implicated by information others reveal.” They identify three broad types of privacy dependencies: those based on our social or other ties (tie-based dependencies), those drawn from our similarities to others (similarity-based dependencies), and those revealed by our differences from others (difference-based dependencies). While social norms or legal obligations may serve to discipline some of these privacy dependencies, they will be inapplicable or inapposite for many others. Barocas and Levy masterfully survey the wide range of normative values and diverse areas of law that may be affected by privacy dependencies. Taking genetic data as a case study, Barocas and Levy then demonstrate how each form of privacy dependency can arise in this context—and how each has been exploited in criminal investigations. They conclude that a greater attentiveness to privacy dependencies, and when and how they arise, can inform better policymaking and give us greater purchase on the values that privacy serves.
Barocas and Levy devote the bulk of their article to identifying and explaining each of the three forms of privacy dependencies that make up their typology, subdividing each into several subtypes. The first category, tie-based dependencies, arises when information gathered about one individual (Alice) is used to learn about another individual (Bob) by virtue of some relationship between them, whether known or unknown to Alice and Bob themselves. Barocas and Levy further subdivide this category into four types. A “passthrough” is a tie-based dependency in which Alice passes information about Bob on to some observer, or Alice and Bob share information through some third-party intermediary like Facebook or Gmail. A “bycatch” occurs where information about Bob is incidentally, but foreseeably, collected in the process of learning about Alice, as with police body-worn cameras. “Identification” can turn on a tie-based dependency, as where an unknown Bob can be identified due to his connection to a known Alice. Finally, “tie-justified dependencies” exploit social ties between Alice and Bob to justify expanding surveillance from Alice alone to also include Bob.
The government has exploited each of these forms of privacy dependency in national security and criminal investigations, as in the investigative use of consumer genetic data to target genetic relatives as suspects or in the National Security Agency’s (NSA) bulk telephony metadata program. So too have private technology companies, as in the Cambridge Analytica scandal at Facebook or the surveillance network built on Amazon’s Ring devices. Troublingly, the law has for the most part not vested individuals whose privacy is affected by a tie-based dependency with protections against these kinds of privacy losses. Indeed, key Fourth Amendment doctrines encourage the government to exploit our interdependent data privacy. Moreover, social norms may be of limited utility in guarding against unwelcome exposure, particularly where the tie being exploited is involuntary or unknown to its subjects.
The second category of privacy dependencies that Barocas and Levy identify is based on similarity, in which information that Alice discloses about herself may be imputed to Bob insofar as Bob “is understood to be similar to Alice.” This form of dependency may turn on three ways in which individuals can be “similar” to others: based on “the company you keep”; on some “socially salient characteristics that you share with others (e.g., gender, race, and age), but with whom you hold no explicit social ties”; or more distantly, on “non-socially-salient” characteristics, as in behavioral advertising.
Insurance is a paradigm example of similarity-based inference at work, but these dependencies may also arise in the context of criminal law (where bail, sentencing, and other decisions may turn in part on statistical risk assessment tools), credit scoring, advertising, and other domains. As Barocas and Levy observe, “[s]imilarity-based dependencies violate the moral intuition that people deserve to be treated as individuals and subject to individualized judgment.” And yet, “there is no way to avoid using generalizations or avoid being subject to them.” Moreover, similarity-based dependencies may be troubling both “when they subject people to coarse generalizations” and “when they allow for overly granular distinctions.” Particularly when they depend on non-socially-salient characteristics, similarity-based dependencies may fail to elicit the social solidarity that might restrain the excesses of this data inference mechanism.
Finally, difference-based dependencies arise when, by revealing some information about herself, Alice enables an observer to learn something about Bob by making herself distinguishable from him. Here, too, this dependency may occur in three ways: by “process of elimination,” in which Alice’s disclosure makes an unknown Bob’s ultimate identification more likely; by “anomaly detection,” in which Bob’s atypicality becomes apparent by comparing his data to that of many “normal” Alices; or by “adverse inference,” in which Bob’s refusal to disclose some information appears more suspect because most Alices disclose. Importantly, unlike tie-based and similarity-based dependencies, none of these forms of difference-based dependency requires a prior connection between Alice and Bob. Moreover, there is little Bob can do to protect his privacy in these cases. As Barocas and Levy observe, “any attempts he might make to do so may, perversely, make him stand out even more.” The difficulty of this kind of dependency is evident in the NSA’s approach to encrypted communications, which has treated the fact of encryption itself as a basis for retention and analysis.
For difference-based dependencies, collectivity, as Barocas and Levy put it, is “essential to privacy preservation here.” Yet collective action may be difficult to muster where individuals may be “unaware of the effects of their disclosures or acting out of requirement or self-interest.” Difference-based dependencies, Barocas and Levy conclude, are therefore best restrained by restricting mass data collection in the first instance, since difference becomes apparent only in comparison to many others.
The payoffs of Barocas and Levy’s detailed typology of privacy dependencies are several. For one thing, as Barocas and Levy explain in a case study of privacy dependencies in genetic data, statutory protections may yield unexpected privacy dividends, where a protection adopted with one type of dependency in mind may come to protect against manipulation of another. Consider the Genetic Information Nondiscrimination Act (GINA), which, although enacted as an anti-discrimination statute, has demonstrated value as an employee-privacy statute as well. Barocas and Levy also describe myriad ways in which law enforcement has exploited privacy dependencies in the context of genetic data. As Barocas and Levy observe, identifying the various privacy dependencies at work can “help us determine if and when we even recognize Bob as a party with a legitimate privacy claim,” “shed light on the varied normative goals that we expect privacy to serve,” and “suggest possible targets for intervention.”
Perhaps most forcefully, Barocas and Levy provide a further perspective on the inadequacy of notice-and-choice as a paradigm for privacy regulation. As they explain, “[i]f we are scarcely able to make decisions that attend to our own privacy interests, the goal of recognizing shared interests should not be to further burden our individual choices with an expectation that we take into account the interests of others.” And they conclude that “[r]ecognizing the mechanisms that create different forms of dependency does more than demonstrate the shortcomings of privacy individualism; it lays the groundwork for well-tailored policymaking and advocacy.” Ultimately, Barocas and Levy give an irrefutable accounting of the many ways in which individualism fails privacy, and their typology for organizing and understanding these failures makes better privacy law possible.