It may seem odd to put this article in the category of “Cyberlaw,” since it is so thoroughly about the embodied nature of new business models usually attributed to the distributed, placeless internet. But that’s precisely the point: the internet has a materiality that is vital to its functioning, and so do specific parts of it. Regulation, too, must contend with the physical basis of online activities. Julie Cohen has often written about the situatedness of the digital self and its construction within a field of other people, institutions, and activities; Davidson and Infranca explore that situatedness by explaining why local government law is an important matter for internet theorists.
Davidson and Infranca’s article thus puts an important emphasis on the materiality of internet-coordinated activities, even if my take is ultimately more pessimistic than that of the authors. They begin by noting that
[u]nlike for earlier generations of disruptive technology, the regulatory response to these new entrants has primarily been at the municipal level. Where AT&T, Microsoft, Google, Amazon and other earlier waves of technological innovation primarily faced federal (and international) regulatory scrutiny, sharing enterprises are being shaped by zoning codes, hotel licensing regimes, taxi medallion requirements, insurance mandates, and similar distinctly local legal issues.
Why? The authors argue that these new services “fundamentally rely for their value proposition on distinctly urban conditions. … [I]t is the very scale, proximity, amenities, and specialization that mark city life that enable sharing economy firms to flourish.” An Uber driver in a rural area doesn’t have the same customer base that could easily take advantage of the extra space in her car, or her house; someone like me who wants to find a Latin tutor for one hour per week is going to have much more luck in the Metro Washington area than in a rural area. Indeed,
the sharing economy is actually thriving … because it recombines assets and people in a decidedly grounded, place-based way. Sharing economy firms have found success by providing innovative solutions to the challenges of life in crowded urban areas. Even the reputation scoring and other trust mechanisms that are so central to sharing economy platforms create value by responding to particular urban conditions of dense, mass anonymity.
Moreover, urban regulations can limit the supply of urban amenities, like taxis and cheap spaces to sleep in during visits, making the need for relief greater than in rural areas. And the new economic entities can improve matching between people who would want to transact if only they knew about each other, a process that improves with scale and thus works best in larger groups of people. The authors’ account of these benefits and their relationship to the affordances of the city is persuasive and readable. Their point about using platform-based reputation to mitigate some of the risks of anonymity while preserving most of its benefits is especially insightful.
There are also, of course, risks associated with these new entities. Davidson and Infranca primarily identify congestion (such as housing shortages allegedly exacerbated by investors’ use of properties for Airbnb guests rather than long-term residents) and “bad” regulatory arbitrage as the risks to which municipal regulation can be an appropriate response. The authors are largely positive about the potential these changes offer for local governments, arguing that “the political economy of the sharing economy is nudging local governments to be more transparent about the goals of public intervention and to justify empirically the link between local regulatory regimes and their intended outcomes.” Thus, Uber, Airbnb, and the like will create not only a new economy, but also “a new urban geography,” and a new regulatory landscape.
It’s a really nice story, in which everyone can win. For example, big data can improve regulatory outcomes: “Given the intersection between the data generated by the sharing economy and the local spaces through which goods and services move, local governments are well situated to tailor regulation in a holistic but still fine-grained manner.” But can local governments actually take advantage of this data? When we look at Uber’s market capitalization and ability to hire national political figures as lobbyists, versus the resources of a city struggling to make regulatory distinctions, can we be sure that Uber will share the data that a city needs? So far, Uber’s release of information has been extremely controlled, except when dissemination is in its own interests, including its interest in deterring criticism. Davidson and Infranca do note Uber’s pushback on local regulations as well as its successful battle with New York City’s mayor. (Pushback might be the nicest term. Intentional lawbreaking might also fit.) The authors also rightly highlight Airbnb’s apparent manipulation of data it released to lawmakers in order to support its claims that there weren’t a lot of investor-owned units in New York. I’m all for regulatory transparency, but it has to be matched with transparency and truth from the regulated.
Consider, in relation to these regulatory struggles, Anil Dash’s point on Twitter that Alton Sterling and Eric Garner, two African-American men who were killed by the police in the course of their on-street sales of consumer goods, were “bending the law to [a] far lesser degree than execs at AirBNB & Uber.” In the same thread, he continued, “The ‘gig economy’ that’s being advocated — who can participate without being endangered?” Whose “regulatory arbitrage” is met with discussion over whether it’s wrongful or brilliant, and whose with bullets? This is a topic also explored in Kate Losse’s The Unbearable Whiteness of Breaking Things. If only certain entrepreneurs can stress and strain local regulation without being met with physical force, then the distributional effects of the sharing economy will be even more tilted in favor of those who already have access to cultural and market capital.
And then there’s the separately harmful but related problem of participating in sharing economy institutions while black. The authors are hopeful that even if the “sharing economy” companies weaken some social ties by encouraging the monetization of ordinary neighborliness, “[t]he platforms that facilitate the pairing of providers and users of sharing-economy services and goods might enable interactions across heterogeneous groups that would not occur in the absence of the platform.” But they don’t explicitly discuss racial discrimination, either structural or individual. They offer one example of a “sharing economy” institution targeting members of the African diaspora for co-working space, but in a world where Trump supporters have their own dating app, it seems to me that the risks of discrimination deserve more attention. Davidson and Infranca briefly note the problem of ADA compliance, but it merits even more attention, especially since avoiding the cost of accessibility is one of the things that enables new sharing economy entrants to avoid cost-spreading and underprice existing services.
Consider these ads for TaskRabbit, which I saw on the DC Metro a few weeks ago, as statements about economic and social class: A white woman in a yoga pose, captioned “Mopping the Floors,” and a white man on a climbing wall, captioned “Hanging Shelves,” with the TaskRabbit slogan “We do chores. You live life,” beneath both. But then when do “we” live our lives? Or are “our” lives appropriately lived doing chores, while “yours” are not? (In reality, I am among the “you” hailed here, even though I don’t do yoga.) And, invisibly, there are the owners of TaskRabbit, who actually don’t do the chores, though they take their cut of the payments. What do you mean, “we”?
To deal with distributional problems, Davidson and Infranca suggest encouraging local co-ops and government provision of sharing services—which might actually justify the name “sharing.” Those suggestions are promising, but not very much like most existing models, except for that venerable institution so rarely invoked in discussions of the “sharing economy,” the public library. Indeed, the authors’ analysis might have been strengthened by further reference to the coordinating and capacity-enhancing roles played by public libraries.
Reluctant or unable to tax in order to fund libraries and other public services, though, many municipalities have turned to raising much of their revenue by ticketing the poor. Meanwhile, much of the regulatory arbitrage of the sharing economy means that the “platforms” aren’t bearing the inspection costs, taxes, and similar burdens imposed on local operators who aren’t backed by Silicon Valley. One could argue that these phenomena are not merely a contrast but are linked and mutually reinforcing. Either way, this is not the kind of separating equilibrium that we should be aiming for.
And this leads me to another point: Davidson and Infranca convincingly explain why municipalities would want to, and should, regulate the “sharing economy,” given its likely profound impact on them. But why does that mean that states and the national government wouldn’t want to regulate sharing economy actors, given that cities are pretty substantial parts of most states and of the nation as a whole? Many phenomena that were and are characteristic of urban life prompted federal or state intervention in previous decades, including the Clean Air Act; multiple rounds of federal housing legislation; and the Federal-Aid Highway Act of 1973, which provided funding for public transit. Municipalities are currently being left on their own to regulate because many state governments, and an urban-investment-hostile Congress, are repeating President Ford’s famous advice to cities: drop dead! (I’m a fan of Section 230, but I can see where it fits into a narrative in which cities are not left to themselves, but actively precluded from regulating in the interest of their own citizens.)
From all this, one might conclude that the online “sharing economy” is a variant on Eddie Murphy’s classic skit: it’s a way for mainly non-African-Americans to get the benefits of urban living without having to deal with a feared urban underclass. Just as white suburbs, historically, often benefited from the amenities of the city without having to pay for them or for the city’s schools, reintermediation using new online entities allows that ability to pick and choose urban interactions, so “our” connections become ever more granular. Davidson and Infranca reference Jane Jacobs’ classic account of the benefits of the city, but, as they note, many of those benefits came from positive externalities conferred on other people. Many “sharing economy” entrepreneurs are struggling mightily to internalize those benefits for themselves.
Ultimately, the authors provide an important descriptive account that makes the physicality of new online businesses more salient in ways that will assist in any discussion of the appropriate regulatory responses to them. And they offer an optimistic view of the future of municipal governance—one I am more than happy to hope materializes.
(Title courtesy of James Grimmelmann.)
Cite as: Rebecca Tushnet, New App City (September 13, 2016) (reviewing Nestor M. Davidson & John J. Infranca, The Sharing Economy as an Urban Phenomenon, 34 Yale L. & Pol’y Rev. 215 (2016)), https://cyber.jotwell.com/new-app-city/
Lachlan Urquhart & Tom Rodden, A Legal Turn in Human Computer Interaction? Towards ‘Regulation by Design’ for the Internet of Things (2016), available at SSRN
Ten years have passed since the second edition of Lawrence Lessig’s Code; John Perry Barlow’s A Declaration of the Independence of Cyberspace, in turn, came ten years before that. In their working paper A Legal Turn in Human Computer Interaction?, doctoral researcher Lachlan Urquhart (with a background in law) and computing professor Tom Rodden, both based at the University of Nottingham in England, make an avowedly post-Lessig case for greater engagement between the cyberlaw concept of regulation and the field of human-computer interaction (HCI).
Their work is prompted by the growing interest in “privacy by design” (PbD). First the subject of discussion and recommendation, it has taken on a more solid form in recent years, through legislative changes such as the EU’s new General Data Protection Regulation. An area where PbD seems particularly promising is the second prompt for this working paper, namely the so-called “Internet of Things” and the emergence of various technologies, often for use in a domestic setting, which prompt a reconsideration of the relationship between privacy and technological developments.
Although the authors demonstrate a keen understanding of the “post-regulatory state,” of Zittrain’s approach to generativity, and of Andrew Murray’s important and powerful response to Lessig (that Lessig understates the agency and dynamism of the target of regulation), they clearly wish to push things a little further. This comes in part through an application of Susanne Bødker’s argument (also of a decade ago!), within HCI, that the incorporation of technologies into everyday, domestic life raises particular challenges – a “third wave” as she put it. For this study, this means that, as the authors contend, the systems-theory-led scholarship in cyberlaw may have its limitations. Emergent approaches in HCI, including Bødker’s third wave, may address these barriers to understanding.
In particular, Urquhart and Rodden contend that two intellectual traditions within HCI are important to turning PbD away from the fate of being limited to law as a resource, and into something that might actually make an appreciable difference to the realisation of privacy rights. These approaches are participatory design and value-led or value-sensitive design. The latter encompasses an interesting argument that specifically legal values could be the subject of greater attention. The former approach is provocative, as the authors draw on the history of participatory design in Scandinavian labour contexts; with the industrial and economic models of first Web 2.0 and now the sharing economy continuing to provoke debate, this might prove a turn too far for some. However, the fact that they situate their argument, and a case study of each HCI approach, within the context of the stronger legal mandates for PbD makes their contentions relevant and capable of application even in the short term.
This is a working paper, and some of the ideas are clearly still being developed. The authors draw upon a wide range of literature about both regulation and HCI, and some of the key contributions come from juxtaposition (e.g. Hildebrandt’s ambient law work set alongside separate and perhaps longer-established scholarship in HCI, which is not particularly well-known even in cyberlaw circles). This may indeed be another and quite different take on Murray’s important question of 2013, on where cyberlaw goes from here. One thing is certain: “code is law” still shapes much of how we write and teach, but the most interesting work seems to go deeper into the code box and the law box – with, as in the case of this fascinating study, surprising and stimulating results.
Works mentioned in this review:
- Steven M. Bellovin, Matt Blaze, Sandy Clark & Susan Landau, Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet, 12 Nw. J. Tech. & Intell. Prop. 1 (2014)
- Ahmed Ghappour, Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web, Stan. L. Rev. (forthcoming 2016), available at SSRN
- Elizabeth E. Joh & Thomas W. Joo, Sting Victims: Third-Party Harms in Undercover Police Operations, 88 S. Cal. L. Rev. 1309 (2015)
- Elizabeth E. Joh, Bait, Mask, and Ruse: Technology and Police Deception, 128 Harv. L. Rev. F. 246 (2015)
- Jonathan Mayer, Constitutional Malware (2015), available at SSRN
- Brian L. Owsley, Beware of Government Agents Bearing Trojan Horses, 48 Akron L. Rev. 315 (2015)
- Stephanie K. Pell & Christopher Soghoian, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities, 16 Yale J.L. & Tech. 134 (2013)
Police carry weapons, and sometimes they use them. When they do, people can die: the unarmed like Walter Scott and Tamir Rice, and bystanders like Akai Gurley and Bettie Jones. Since disarming police is a non-starter in our gun-saturated society, the next-best option is oversight. Laws and departmental policies tell officers when they can and can’t shoot; use-of-force review boards and juries hold officers accountable (or are supposed to) if they shoot without good reason. There are even some weapons police shouldn’t have at all.
Online police carry weapons, too, because preventing and prosecuting new twists on old crimes often requires new investigative tools. The San Bernardino shooters left behind a locked iPhone. Child pornographers gather on hidden websites. Drug deals are done in Bitcoins. Hacker gangs hold hospitals’ computer systems for ransom. Modern law enforcement doesn’t just passively listen in: it breaks security, exploits software vulnerabilities, installs malware, sets up fake cell phone towers, and hacks its way onto all manner of devices and services. These new weapons are dangerous; they need new rules of engagement, oversight, and accountability. The articles discussed in this review help start the conversation about how to guard against police abuse of these new tools.
In one recent case, the FBI seized control of a child pornography website. For two weeks, the FBI operated the website itself, sending a “Network Investigative Technique” — or, to call things by their proper names, a piece of spyware — to the computers of people who visited the website. The spyware then phoned home, giving the FBI the information it needed (IP addresses) to start identifying the users so they could be investigated and prosecuted on child pornography charges.
There’s something troubling about police operation of a spyware-spewing website; that’s something we normally expect from shady grey-market advertisers, not sworn officers of the law. For one thing, it involves pervasive deception. As Elizabeth E. Joh and Thomas W. Joo explain in Sting Victims: Third-Party Harms in Undercover Police Operations, this is hardly a new problem. Police have been using fake names and fake businesses for a long time. Joh and Joo’s article singles out the underappreciated way in which these ruses can harm third parties other than the targets of the investigation. In child abuse cases, for example, the further distribution of images of children being sexually abused “cause[s] new injury to the child’s reputation and emotional well-being.”
Often, the biggest victims of police impersonation are the specific people or entities being impersonated. Joh and Joo give a particularly cogent critique of this law enforcement “identity theft.” The resulting harm to trust is especially serious online, where other indicia of identity are weak to begin with. The Justice Department paid $143,000 to settle a civil case brought by a woman whose name and intimate photographs the DEA had used to set up a fake Facebook account and send a friend request to a fugitive.
Again, deception by police is not new. But in a related essay, Bait, Mask, and Ruse: Technology and Police Deception, Joh nicely explains how “technology has made deceptive policing easier and more pervasive.” A good example, discussed in detail by Stephanie K. Pell and Christopher Soghoian in their article, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities, is IMSI catchers, or StingRays. These portable electronic devices pretend to be cell phone towers, forcing nearby cellular devices to communicate with them, exposing some metadata in the process. This is a kind of lie, and not necessarily a harmless one. Tricking phones into talking to fake cell towers hinders their communications with real ones, which can raise power consumption and hurt connectivity.
In an investigative context, StingRays are commonly used to locate specific cell phones without the assistance of the phone company, or to obtain a list of all cell phones near the StingRay. Pell and Soghoian convincingly argue that StingRays successfully slipped through holes in the institutional oversight of surveillance technology. On the one hand, law enforcement has at times argued that the differences between StingRays and traditional pen registers meant that they were subject to no statutory restrictions at all; on the other, it has argued that they are sufficiently similar to pen registers that no special disclosure of the fact that a StingRay is to be used is necessary when a boilerplate pen register order is presented to a magistrate. Pell and Soghoian’s argument is not that StingRays are good or bad, but rather that an oversight regime regulating and legitimizing police use of dangerous technologies breaks down if the judges who oversee it cannot count on police candor.
In a broader sense, Joh and Joo and Pell and Soghoian are all concerned about police abuse of trust. Trust is tricky to establish online, but it is also essential to many technologies. This is one reason why so many security experts objected to the FBI’s now-withdrawn request for Apple to use its code signing keys to vouch for a modified and security-weakened custom version of iOS. Compelling the use of private keys in this way makes it harder to rely on digital signatures as a security measure.
The FBI’s drive-by spyware downloads are troubling in yet another way. A coding mistake can easily destroy data rather than merely observing it, and installing one piece of unauthorized software on a computer makes it easier for others to install more. Lawful Hacking, by Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau, thinks through some of these risks, along with more systemic ones. In order to get spyware on a computer, law enforcement frequently needs to take advantage of an existing unpatched vulnerability in the software on that computer. But when law enforcement pays third parties for information about those vulnerabilities, it helps incentivize the creation of more such information, and the next sale might not be to the FBI. Even if the government finds a vulnerability itself, keeping that vulnerability secret undercuts security for Internet users, because someone else might find and exploit that same vulnerability independently. The estimated $1.3 million that the FBI paid for the exploit it employed in the San Bernardino case — along with the FBI’s insistence on keeping the details secret — sends a powerful signal that the FBI is more interested in breaking into computers than in securing them, and that that is where the money is.
The authors of Lawful Hacking are technologists, and their article is a good illustration of why lawyers need to listen to technologists more. The technical issues — including not just how software works but how the security ecosystem works — are the foundation for the legal and policy issues. Legislating security without understanding the technology is like building a castle on a swamp.
Fortunately, legal scholars who do understand the technical issues — because they are techies themselves or know how to listen to them — are also starting to think through the policy issues. Jonathan Mayer’s Constitutional Malware is a cogent analysis of the Fourth Amendment implications of putting software on people’s computers without their knowledge, let alone their consent. Mayer’s first goal is to refute what he calls the “data-centric” theory of Fourth Amendment searches, that so long as the government spyware is configured such that it discloses only unprotected information, it is irrelevant how the software was installed or used. The article then thinks through many of the practicalities involved with using search warrants to regulate spyware, such as anticipatory warrants, particularity, and notice. It concludes with an argument that spyware is sufficiently dangerous that it should be subject to the same kind of “super-warrant” procedural protections as wiretaps. Given that spyware can easily extract the contents of a person’s communications from their devices at any time, the parallel with wiretaps is nearly perfect. Indeed, on any reasonable measure, spyware is worse, and police and courts ought to give it closer oversight. To similar effect is former federal magistrate judge Brian Owsley’s Beware of Government Agents Bearing Trojan Horses, which includes a useful discursive survey of cases in which law enforcement has sought judicial approval of spyware.
Unfortunately, oversight by and over online law enforcement is complicated by the fact that a suspect’s device could often be anywhere in the world. This reality of life online raises problems of jurisdiction: jurisdiction for police to act and jurisdiction for courts to hold them accountable. Ahmed Ghappour’s Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web points out that when a suspect connects through a proxy-based routing service such as Tor, mapping a device’s location may be nearly impossible. Observing foreigners abroad is one thing; hacking their computers is quite another. Other countries can and do regard such investigations as violations of their sovereignty. Searching Places Unknown offers a best-practices guide for avoiding diplomatic blowback and the risk that police will open themselves up to foreign prosecution. One of the most important suggestions is minimization: Ghappour recommends that investigators proceed in two stages. First, they should attempt to determine the device’s actual IP address and no more; with that information, they can make a better guess at where the device is and a better-informed decision about whether and how to proceed.
This, in the end, is what tainted the evidence in the Tor child pornography investigation. Federal Rule of Criminal Procedure 41 does not give a magistrate judge in Alexandria, Virginia the authority to authorize the search of a computer in Norwood, Massachusetts. This NIT-picky detail in the Federal Rules may not be an issue much longer. The Supreme Court has voted — in the face of substantial objection from tech companies and privacy activists — to approve a revision to Rule 41 giving greater authority to magistrates to issue warrants for “remote access” searches. But since many of these unknown computers will be not just in another district but abroad, the diplomatic issues Ghappour flags would remain relevant even under a revised Rule 41. So would Owsley’s and Mayer’s recommendations for careful oversight.
Reading these articles together highlights the ways in which the problems of online investigations are both very new and very old. The technologies at issue — spyware, cryptographic authentication, onion routing, cellular networks, and encryption — were not designed with much concern for the Federal Rules or law enforcement budgeting processes. Sometimes they bedevil police; sometimes they hand over private data on a silver platter. But the themes are familiar: abuse of trust and positions of authority, the exploitation of existing vulnerabilities and the creation of new ones. Oversight is a crucial part of the solution, but at the moment it is piecemeal and inconsistently applied. The future of policing has already happened. It’s just not evenly distributed.
This book is about using data noise to make your personal information less easily digestible by privacy-consuming systems.
This book is necessary because it presents hopeful tactics and strategies for privacy defense at a time when—in spite of half a century of debates about (electronic) privacy laws, regulations and court decisions, best practices, and privacy-enhancing technologies—we seem to be living in a state of privacy resignation.
This book is concise, rich with examples, and written in clear language; it does not shy away from the moral hazards and practical limitations of creating data noise, and it clarifies again and again that privacy is about informational power relationships in which the powerless have to enlarge their options.
In the authors’ words, “[o]bfuscation is the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection” (P. 1.), a definition clarifying that obfuscation is not about total disappearance or erasure. It is what they call a “relative utility” (P. 58.), but it is useful nevertheless. It helps buy time for privacy against the informationally powerful’s rush to complete personal profiles. At a minimum, it raises the costs of gaining meaningful information, and may do so significantly. The authors provide historical examples, like chaff confusing anti-aircraft measures, to bring across the concept, and contemporary ones from networked life, like Twitter bots, CacheCloak, and TrackMeNot, to encourage use and further design.
Obfuscation is a tool to be used when and where opting out is not an option; where one faces an asymmetrical informational power relationship; when it is unclear what is being done with the information, and with what consequences; and when there is neither trust nor adequate safeguards. “We aim,” the authors say at P. 44, “to persuade readers that for some privacy problems obfuscation is a plausible solution, and that for some it is the best solution.”
Yet, obfuscation poses its own moral challenges: What about dishonesty, what about wasting bandwidth, polluting or even damaging systems, what about free riding? Brunton and Nissenbaum lead us through exemplary uses of obfuscation, explaining where they see sufficient proportionality in the balance between ends and means to justify obfuscation, emphasizing that the values we attach to means and ends are ultimately social ones and as such need to be negotiated politically. For those reflecting on the use of obfuscation, the authors provide a checklist with a taxonomy of goals, threats, and benefits to allow for a realistic assessment of obfuscation’s ramifications and likely success. Success, the authors hope, would not only encompass a specific outcome of a specific use of obfuscation, but widespread use that eventually leads to progress in research, regulations and policies, and to changing social practices.
Ultimately, as you put down the book, you become aware that obfuscation alone cannot tilt any power balance significantly. You may also wonder whether these complex means of obfuscation will not accentuate the very imbalance between the less and more powerful that the book’s authors seek to address. But obfuscation practices may indeed catch the imagination of more system designers, programmers, and even politicians, spurring them to develop structural mechanisms to counterbalance organizations’ current fantasies of omnipotent foreseeability.
In the meantime, it may at least help users gain and maintain what has been emphasized in another recently published how-to guide—Jason Hanson’s Spy Secrets That Can Save Your Life: A Former CIA Officer Reveals Safety and Survival Techniques to Keep You and Your Family Protected—namely, “Situational Awareness.” But that was a different context, and besides, that would be a different jot …
RonNell Andersen Jones & Lyrissa Barnett Lidsky, Of Reasonable Readers and Unreasonable Speakers: Libel Law in a Networked World, Va. J. Soc. Pol'y & L. (forthcoming 2016), available at SSRN
Though it can be uplifting and life affirming to read law review articles written by people you almost always agree with, better cerebral benefits are usually obtained from reading the writings of people who challenge your ideas and force you to reconsider your views a bit. Of Reasonable Readers and Unreasonable Speakers: Libel Law in a Networked World by Lyrissa Barnett Lidsky and RonNell Andersen Jones, forthcoming in the Virginia Journal of Social Policy and the Law, is an engaging article that taught me a lot about the state of online defamation litigation.
Both co-authors tend to be more libertarian about the First Amendment than I am, so I always learn a lot from reading their scholarship. I also appreciate their clear and accessible writing. The older I become, the less patience I have for tangled prose, poor organization and conclusions so thick with ambiguity you have to eat them with a fork. Though the previous sentence reflects my exercise of the opinion privilege, the bad writers responsible will remain unnamed, due to the actual malice that infuses those words. (A good companion piece to this excellent article is The Death of Slander by Leslie Yalof Garfield.)
Lidsky and Jones explicitly state that the goal furthered by their article is to assist future courts by providing specific guidance about adapting the opinion privilege and the actual malice rule to social media. The authors suggest applying the opinion privilege (the constitutional doctrine protecting statements that are unverifiable or cannot be interpreted as stating actual facts) broadly to social media expression, with a detailed awareness of the internal and external contexts in which the speech occurred, to allow unfettered expression to flourish.
The actual malice rule, however, needs to be read narrowly by courts, according to the authors, to prevent vengeful or delusional speakers from escaping liability when they engage in character assassination against public figures or public officials.
Lidsky and Jones spend the bulk of the article explaining and illustrating the importance of context for evaluating defamation claims based on speech that was uttered via social media. The article theorizes that courts that have addressed online defamation claims have stretched the opinion privilege a bit wider than it is typically deployed in traditional print media. The evidence is offered via summaries of cases that have been decided and reported. The judges rendering these opinions typically list various aspects of the contexts of the challenged speech as justifying a broad latitude for allowable opinion. Important contextual factors have included Twitter conversations taken in their totality; use of informal language; use of social media venues that are “understood” to traffic in un-intermediated opinion and to prize speed of information delivery over accuracy; use of supporting links; the signaling function of hashtags; and the goddamn frequent use of fucking expletives. Based on the cases reported by Lidsky and Jones, judges seem eager to avoid finding actionable defamation. The authors push back a little, reminding readers that “Defamation law should continue to play a role in preventing character assassination and guaranteeing that public discourse has at least some anchor in truth, even in the social-media age.” (P. 21.)
Lidsky and Jones spend somewhat less time discussing actual malice, the standard derived from New York Times Co. v. Sullivan, which requires libel plaintiffs who are public officials to prove that a defendant published a defamatory statement with knowledge or reckless disregard of its falsity. As with the opinion privilege, actual malice is a subjective determination that so far at least seems to be very context driven when the speech at issue is delivered over social media. What little case law so far exists suggests the possibility that actual malice may become even harder to prove in online venues. The authors caution readers here too, reminding us that libel that reaches large numbers of readers can have an enormous impact that may not be adequately addressed by judges who write angry and false allegations off as inevitable and unavoidable parts of the normative culture of social media platforms.
It would be reassuring to think that Internet users are so used to reading hyperbolic insults and allegations online that they do not take them seriously, as many judges seem to believe. But the well documented destructive impact that social media driven excoriation has had on individuals and businesses (see e.g. these books) suggests that the speech torts are legal tools that are more necessary than ever to regulate (or at least temper) some kinds of online speech. The authors were wise to remind judges of this fact, and I fervently hope their message is heard. This is a topic of terrific importance now and looking forward.
Olivier Sylvain, Network Equality
, 67 Hastings L.J.
443 (2016), available at SSRN
From the halls of Congress to the cocktail parties of Davos, “innovation” is celebrated as the central rationale for Internet policy. Whatever its utility decades ago, the term is now overused, a conceptual melange that tries to make up in capaciousness what it lacks in rigor. Fortunately, legal scholars are developing more granular accounts of the positive effects of sociotechnical developments. Olivier Sylvain’s Network Equality is a refreshing reminder that Internet policy is more complex than innovation maximization. Sylvain carefully documents how access disparities interfere with the internet’s potential to provide equal opportunity.
Network Equality makes a critical contribution to communications law scholarship because it questions the fundamental terms of the last twenty years of debates in the area. For at least that long, key internet policymakers have assumed what Sylvain calls the “trickle-down theory of Internet innovation”—that if policymakers incentivized more innovation at the edge of the network, that would in the end redound to the benefit of all, since increased economic activity online would lead to better and cheaper infrastructure. Now that once-“edge” firms like Facebook are rich enough to propose to dictate the terms of access themselves, this old frame for “net neutrality” appears creaky, outdated, even obsolete. Sylvain proposes a nuanced set of policy aims to replace it.
As Susan Crawford’s Captive Audience shows, the mainstream of internet policymaking has not inspired confidence from American citizens. Large internet service providers are among the least popular companies, even for those with access. They also tend to provide slower service, at higher prices, than ISPs in the rest of the developed world. But the deepest shame of the US internet market, as Sylvain shows, is the troubling exclusion of numerous low-income populations, disproportionately affecting racial minorities.
Sylvain is exactly right to point out that these disparities will not right themselves automatically: policy is needed. Nor should we embrace “poor internet for poor people,” à la the “poor programs for poor people” so common in U.S. history. The situation in Flint shows what happens when the state simply permits some of its poorest citizens to access lower-quality infrastructure. It is not hard to imagine similar results when catch-as-catch-can internet access is proposed as a “solution” to extant infrastructure’s shortcomings.
Sylvain shows that enabling statutes require better access to telecommunications technologies, even as the policymakers charged with implementing them repeatedly demonstrate more interest in innovation than access. Their “trickle down” ideal is for innovation to draw user interest which, in turn, is supposed to attract further investment in infrastructure. But in a world of vast inequalities, that private investment is often skewed, reinforcing structural inequalities between the “information haves and have nots” regarding access to and use of the internet.
Treating the internet more like a public resource would open the door to substantive distributional equality. We generally do not permit utilities to market cheaper-but-more-dangerous, or even intermittent, electricity to disadvantaged communities, however “efficient” such second-rate services may be. Nor should we permit wide disparities in quality-of-service to become entrenched in our communicative infrastructure. Sylvain’s Network Equality may spur state-level officials to assure a “social minimum” of internet access available to all.
Sylvain’s work is an exceptionally important contribution to scholarship on access to the internet, not just in the US, but globally. Indian regulators recently stunned Facebook by refusing to permit its “Free Basics” plan. When activists pointed out that the project smacked of colonialism, celebrity venture capitalist Marc Andreessen fumed, “Anti-colonialism has been economically catastrophic for the Indian people for decades.” For him and many other techno-libertarians, the innovation promised by Facebook was worth whatever power asymmetries may have emerged once so much control was exercised by a largely foreign company. If the price of innovation was colonialism—so be it.
Andreessen’s comment was dismissed as a gaffe. But it reveals a great deal about the mindset of tech elites. “Innovation” has become a god term, an unquestionable summum bonum. Few pause to consider that new goods and services can be worse than the old, or merely spark zero-sum competitions. (Certainly the example of high frequency trading in Sylvain’s article suggests that access speed and quality could be decisive in some markets, without adding much, if anything, to the economy’s productive capacity.) Nor is the unequal spread of innovation critically interrogated enough. Finally, the terms of access to innovation may be dictated by “philanthrocapitalists” more devoted to their own profits and political power than to eleemosynary aims.
According to Sylvain, the FCC has been wrong to treat distributive equality as a second-order effect of innovation, rather than directly pursuing it as a substantive goal. Since inequalities in internet access track demographic differences in race, class, and ethnicity, it is clear that the innovation-first strategy is not working. Sylvain’s perspective should embolden future FCC commissioners to re-examine the agency’s approach to inclusion and equal opportunity, going beyond innovation and competition as ideals. Among academics, it should spur communications law experts to consider whether the goal of greater equality per se (rather than simply striving to assure everyone some minimum amount of speed) is important to the economy. Sylvain’s oeuvre makes the case for internet governance institutions that can better deliberate on these issues. His incisive, insightful work is a must-read for the communications and internet policy community.
Have you ever thought of who will have access to your email when you die? If you have social media, have you prepared a digital will that will allow your loved ones to dispose of your online presence? Have you ever wondered what happens to people’s digital accounts when they pass away? These and many other questions are part of a growing number of legal issues arising from our increasingly networked lives, and they are the main subject of Virtual Worlds – a Legal Post-Mortem Account, which looks at the issue of post-mortem digital arrangements for virtual world accounts, and in which the author discusses several possible ways of looking at virtual goods to allow them to be transferred when the owner of the account dies. The article is a great addition to the growing scholarship in the area, but it is also an invaluable shot in the arm for the subject of virtual worlds.
The legal discussion of virtual worlds has gone through a rollercoaster ride, if you pardon the use of the tired cliché. In 1993 author Julian Dibbell published a remarkable article entitled A Rape in Cyberspace. In it he recounts the happenings of a virtual world called LambdaMOO, a text-based environment with roughly one hundred subscribers where the users adopted assumed personalities (or avatars) and engaged in various role-playing scenarios. Dibbell describes how the community dealt with perceived sexual offences committed by a member upon other avatars. The story of LambdaMOO has become a classic in Internet regulation literature, and has been pondered and retold in seminal works such as Lessig’s Code and Goldsmith and Wu’s Who Controls the Internet. Dibbell’s powerful story of the virtual misconduct of an avatar during the early days of Cyberspace still resonates with legal audiences because it brings us back to crucial questions that have been the subject of literature, philosophy and jurisprudence for centuries. How does a community organise itself? Is external action needed, or does self-regulation work? What constitutes regulatory dialogue? How does regulatory consensus arise? And most importantly, who enforces norms?
There was a period of maturity in the literature as other interesting legal questions began to arise, such as ownership of virtual goods, consumer protection, and the contractual validity of end-user licence agreements (EULAs), to name a few. The growing legal interest arose from the evident value of the virtual economy. A report on the virtual economy for the World Bank calculated that the global market for online games was $12.6 billion USD in 2009, and that the size of the secondary market in virtual goods (the monetary value of real money transactions in virtual goods) reached an astounding $3 billion USD. The culmination of this more mature era of research consists of two excellent books, Virtual Justice by Greg Lastowka and Virtual Economies: Design and Analysis by Vili Lehdonvirta and Edward Castronova.
However, after that golden period we have seen a marked decline in the number of papers discussing legal issues, with the exception of the continuing existence of the Journal of Virtual Worlds Research. The apparent drop in published research could be caused by the fact that virtual worlds themselves have been losing subscribers. The once-mighty Second Life is now mostly mentioned in phrases that begin with “Whatever happened to Second Life?” Even popular massively multiplayer online games (MMOGs) such as World of Warcraft have been losing subscribers. But most importantly, many legal issues that seemed exciting some time ago, such as virtual property or the legal status of the virtual economy, did not produce the level of litigation expected. Most legal issues have been solved through a combination of consumer and contract law.
Edina Harbinja’s article resurrects interest in virtual worlds by studying an area of research that has often been neglected: the status of virtual world accounts after the death of the user. While subscription figures have been on the wane, the value of the virtual economy has remained the same. Blizzard recently made it easy for subscribers of World of Warcraft to transfer funds from the real world into the virtual economy, and vice versa, with the introduction of in-game token systems. This has meant an injection of real money into virtual economies, potentially resulting in increased legal interest in virtual goods as assets.
Harbinja describes the various types of virtual assets and virtual property, using a range of theories of property to justify treating virtual goods as viable and valuable assets subject to the same rights as ‘real’ property. These theories point to rivalrousness, permanence, and interconnectedness as elements that are present in virtual goods, making them worthy of legal protection as property. For example, in order to apply tangible notions of property to virtual goods, commentators remark that the possession and consumption of a virtual good must exclude “other pretenders to the same resource.” If virtual goods can have some of the same characteristics that make tangible goods valuable and worthy of protection, then they should be similarly protected.
She then explores various theories of how to deal with virtual property, including the use of contract law in the shape of end-user licensing agreements, the constitutionalization of virtual worlds, and even going as far as suggesting the creation of a virtual usufruct to describe the situation of property in virtual worlds. A usufruct is a civil law concept dating back to Roman times (as a type of personal servitude) that “entitles a person to the rights of use of and to the fruits on another person’s property.” A virtual usufruct would therefore contain limited rights for a person to use an item, to transfer an item, and even to exclude others from exercising the above. Harbinja proposes that since the usufruct would terminate on death, the personal representative of the deceased would be required to assess whether any of these rights can be monetised and the value transferred to the account-holder’s estate.
That being the case, the author explores various options for how to deal with virtual property after the death of the subscriber. This is tricky, as at the moment there is no single regime of property allocation for virtual goods, and some types of rights may hinge on the value of the virtual goods. The author seems strongly to favour legal reform to allow for some form of usufruct after death, as described above.
This is a welcome addition to the body of virtual world literature, and it may help to inject life to a declining genre, pun intended.
Kristen Eichensehr, Cyberwar & International Law Step Zero
, 50 Tex. Int'l L.J. 355 (2015), available at SSRN
Kristen Eichensehr recently published a piece entitled Cyberwar & International Law Step Zero that describes an unfolding of events that is by now familiar to international lawyers contemplating the emergence of new military technologies. First, a new military technology X (where X has been drones, cyber weapons, nuclear weapons, lethal autonomous weapons) appears. Nations then ask the “step-zero” question — “does international law apply to the use or acquisition of X”? And the answer is inevitably, “yes, but in some ways existing international law needs to be tweaked to adjust for some of the novel characteristics of X.”
Eichensehr offers a compelling explanation for both the persistence of this question and the recurrent answer. Regarding persistence, she points out that for international law, unlike domestic law, the bound parties—nations—bind themselves consensually. For example, she writes that “The tradition of requiring state consent (or at least non-objection) to international law predisposes the international legal community to approach new issues from the ground up: When a new issue arises, the question is whether international law addresses the issue, because if there is no evidence that it does, then it does not.” In other words, asking the step-zero question is the first step in proceeding down a path that may result in a state’s opting out.
Regarding the frequent recurrence of the same answer (i.e., “yes”), she points out that international law—especially International Humanitarian Law (“IHL”)—is often adaptable to new weapons technologies, in large part because the interests that IHL seeks to protect are constant. (I would prefer the term “values” rather than “interests,” but the point is the same.) For example, she writes that “[e]xisting law was designed, for example, to protect civilians from the consequences of conflict. That concern transcends the type of weapon deployed. Thus, although the nature of the weapon has changed, the underlying concern has not, which reduces one possible justification for altering existing law.” Lastly, she argues that even if existing law does not perfectly apply to new technologies, asserting the contrary raises the fearsome prospect of a world in which a new technology is not subject to any legal constraint at all. In her words, “[e]ven if existing law is an imperfect means of regulating States’ actions . . ., imperfect law is preferable to no law at all.”
The explanation seems compelling to me, though I confess from the start that my understanding of law is that of an amateur. But I’m also a long-time observer of many military technologies. I’ve thought often about how international law attends to these technologies, and I suggest that her explanation is applicable to a broader range of phenomena than she discusses.
Speaking in very broad terms, law—and especially international law—depends heavily on precedent. Precedent provides stability, which is regarded as a desirable attribute of law in large part because in the absence of legal stability, people—and nations—would have no way of knowing how law would regard their actions. But technologists have very different goals. Rather than stability, the dream of every technologist is to invent a disruptive technology, one that completely changes the way people can accomplish familiar goals. Even better is when a technologist can create not just new ways of doing old business, but can invent entirely new lines of business.
Against this backdrop, consider a broadened step-zero sequence of events. A new technology A is invented. At first, when the use of A is small and limited, the law pays little or no attention to it. But as A comes to be used by more and more people, a variety of unanticipated consequences appear, some of which are regarded as undesirable by some people. These people look to the law for remedy, and they naturally ask the question “how, if at all, does existing law apply?” Their lawyers look for precedent—similar cases handled in the past that may provide some guide for today—and there is always a previous case involving technology that bears some similarity to A today. So the answer is, “yes, existing law applies, but tweaks are necessary to apply precedent properly.”
So, I suggest, Eichensehr’s step-zero analysis of cyber weapons and international law sheds light on a very long standing tension between technological change and legal precedent. For that reason, I think anyone interested in that tension should consider her analysis.
The Atomic Age of Data: Policies for the Internet of Things
Report of the 29th Annual Aspen Institute Conference on Communications Policy, Ellen P. Goodman, Rapporteur, available at SSRN
The phrase “Internet of Things,” like its cousin “Big Data,” only partially captures the phenomenon that it is meant to describe. The Atomic Age of Data, a lengthy report prepared by Ellen Goodman (Rutgers Law) following a recent Aspen Institute conference, bridges the gap at the outset: “The new IoT [Internet of Things] – small sensors + big data + actuators – looks like it’s the real thing. … The IoT is the emergence of a network connecting things, all with unique identifiers, all generating data, with many subject to remote control. It is a network with huge ambitions, to connect all things.” (P. 2) The Atomic Age of Data is not a scholarly piece in a traditional sense, but it is the work of a scholar, corralling and shaping a critical public discussion in an exceptionally clear and thoughtful way.
The IoT is in urgent need of being corralled, at least conceptually and preliminarily, so that a proper set of relevant public policy questions may be asked. What are the relevant opportunities and hazards? What are its costs and benefits, to the extent that those can be discerned at this point, and where should we be looking in the future? That set of questions is the gift of this report, which is the documented product of many expert and thoughtful minds collaborating in a single place (face to face, rather than via electronic networks).
Simply defining the IoT is one continuing challenge. As The Atomic Age of Data affirms, the IoT isn’t the Internet, though it is enabled by the Internet and in many ways it extends the Internet. (P. 2) What it is, where it is, how it functions, what it might do in the future – or permit others to do – remains at least a little cloudy. The first contribution that The Atomic Age of Data makes is simply to map these contours, contrasting the Internet of Things with the network of networks that today we call the Internet, or the Internet of People. It identifies several distinguishing characteristics of the IoT: its sheer scale (the amount of data that can be gathered from ubiquitous sensor networks); the reduction or even elimination of user control over data collection; the widespread deployment of actuators, embedding a level of agency in the IoT; data analytics that rest atop communications and transactions; its demonstrably global character (in contrast to the initiated-in-the-US character of the Internet); and its framing of data as infrastructure, enabling the provision of a broad variety of services.
The bulk of The Atomic Age of Data consists of a comprehensive sorting of policy questions and recommendations. The foundational premise is the idea that data itself is (or are) infrastructure – “as a vital input to progress much like water and roads, and just as vulnerable to capture, malicious or discriminatory use, scarcity, monopoly and sub-optimal investment”. (P. 12) The analogy between data infrastructure and communications infrastructure is purposeful. Characterizing data as infrastructure, like characterizing communications as infrastructure, only frames policy and technical questions; it doesn’t resolve them. Data ownership and data access are related questions. They connect to questions of data formats, interoperability and interconnectivity, and common technical standards. Identifiability of data is a cross-cutting concern for privacy purposes. The respective domains of public and private investment in the IoT, and corresponding expectations of public access and use and private returns, remain open questions. The report clusters these topics together; one might label the cluster with a single theme: governance.
How, or more precisely, by whom, will all of this data be produced? The report examines the adequacy of incentives for private (commercial) provision of data and the appropriate role for government as regulator and supplier of subsidies.
This “data as infrastructure” section of The Atomic Age of Data concludes with a series of policy recommendations, focusing on two overarching principles (also reduced to several more specific recommendations): that there should be broad accessibility of data and data analytics, with open access to some (but not all); and that government should subsidize and facilitate data production, particularly in cases where data is an otherwise under-produced public good.
The Atomic Age of Data moves next to a review of privacy topics in the context of the IoT, beginning with when, whether, and how to design privacy protections into systems from the start, and the role and implementation of Fair Information Practice Principles (FIPPs). As the report notes, these are critical questions for the IoT because so much of the IoT is invisible to individuals and has no user interface to which data protection and FIPPs might be applied. To what extent should privacy protection be designed into the IoT, and to what extent should privacy protection be a matter of strategies that focus on individual choice? To what extent might choice undermine the production, collection, and processing (aggregation) of good data, or the right data? Privacy questions thus intersect with incentive questions. Cost, benefit, and values questions extend further. To what extent is choice even technologically feasible without compromising other societal values? Production, collection, identification, and processing/aggregating data lead next to related privacy questions about retention and curation of data.
This privacy section concludes with a brief set of recommendations, focusing on three overarching principles (again with several more specific points): that IoT systems should design in privacy controls to minimize the collection of personally identifiable information; that IoT systems should effectuate FIPPs to the extent possible; and that individuals should have a way to control collection and transmission of their personal data.
The balance of the report is divided among four additional topics that are treated more briefly, though in each case the topic concludes with a short set of basic recommendations. The first is “Equity, Inclusion, and Opportunity,” which collects questions about prospects of citizen empowerment and disempowerment via the IoT. Data collection in some respects signifies “who counts” in modern society – whose voice and presence “matters,” both individually and collectively, but also, in some respects, whose voice and presence is worth watching. The report points out the relevance of comparable concerns with respect to the deployment of broadband communications infrastructure and its impacts on things like access to education and health resources. The second is “Civic Engagement,” which touches on how IoT technologies might be used both by governments and by the private sector to increase democratic accountability. The third is “Telecommunications Network Architecture,” which concerns the intersection of the IoT and competition, service innovation, and interoperability among IoT systems and related communications networks. The key topic here is the heterogeneity of the data generated by IoT applications, recalling the question of whether the Internet of Things is, or should be, truly a single “Internet” at all, with interconnected networks, connections across verticals (home, health, transport, for example), and common platforms. (P. 39) The fourth is security, which raises the relatively simple question of security vulnerabilities introduced at both the level of individual devices and at systemic levels. The question may be simple but the answer assuredly is not; this section of the report is comparatively brief, perhaps because the salience of the interest is so obvious.
The Atomic Age of Data finishes with a case study, on The Smart City, which refers to the idea of networks of ubiquitous sensors deployed within urban infrastructure to generate data about usage patterns and service needs. (P. 45) The discussion of this use case is decidedly and appropriately pragmatic, putting utopian hopes for the Smart City in context and noting privacy and surveillance concerns and related but independent equity concerns.
To conclude this review:
This is an enormously clear, useful, and timely product. One cannot critique a report of a conference on the ground that it did not address a critical topic, if the conference itself did not address that topic. Yet as helpful as The Atomic Age of Data is in canvassing the policy territory of the IoT, I couldn’t help but notice how the boundaries of that territory are implicitly defined. The Atomic Age of Data contains a lot of discussion of “Internet” topics and less discussion of “things.” In this day and age, one should never take things or thing-ness for granted. What is a thing? 3D printing, the current label for additive manufacturing, promises to revolutionize the meaning of “thingness” – because objects may be dynamic and emergent, as well as static and fixed – just as the “Internet of Things” promises to revolutionize the meanings of identity and presence.
“Data for Peace,” the title of this review, builds a bit on the naïve sense of modernity and progress expressed (purposefully, no doubt) by the report’s Atomic Age title. During the 1950s and 1960s, “atomic” things were full of optimism. Later, we learned that splitting the atom changed the meanings of matter in unexpected ways. “Atomic” gave way to a variety of more complex political, cultural, and technological expressions and concerns, few of which were foreseen at the dawn of the Atomic Age. Similarly, 3D printing may turn out to change the meanings of matter in unexpected – but other – ways. As the IoT and Big Data mature — along with 3D printing – I expect that future reports on its implications will be similarly but unexpectedly complex.
Ira Rubinstein & Woodrow Hartzog, Anonymization and Risk,
91 Wash. L. Rev.
(forthcoming 2016), available on SSRN
In the current Age of Big Data, companies are constantly striving to figure out how to better use data at their disposal. And it seems that the only thing better than big data is more data. However, the data used is often personal in nature and thus linked to specific individuals and their personal details, traits, or preferences. In such cases, sharing and use of the data conflict with privacy laws and interests. A popular remedy applied to sidestep privacy-based concerns is to render the data no longer “private” by anonymizing it. Anonymization is achieved through a variety of statistical measures. Anonymized data, so it seems, can be sold, shared with researchers, or even possibly released to the general public.
Yet, the Age of Big Data has turned anonymization into a difficult task, as the risk of re-identification seems to be constantly looming. Re-identification is achieved by “attacking” the anonymous dataset, aided by the existence of vast datasets (or “auxiliary information”) from various other sources available to the potential attacker. It is, therefore, difficult to establish whether anonymization was achieved, whether privacy laws pertain to the dataset at hand, and if so, how. In a recent paper, Ira Rubinstein and Woodrow Hartzog examine this issue’s pressing policy and legal aspects. The paper does an excellent job of summarizing how the current academic debate in this field is unfolding. It describes recent failed and successful re-identification attempts and provides the reader with a crash course on the complicated statistical methods of de-identification and re-identification. Beyond that, it provides both theoretical insights and a clear roadmap for confronting the challenges of properly releasing data.
The discussion of anonymization, or de-identification (the more precise term the authors choose to apply, as it does not imply full anonymization), was once mostly of academic interest: statisticians introduced ways to anonymize data, while mathematicians and computer scientists strove to prove that re-identification “attacks” were nonetheless possible. Several successful re-identification attacks (perhaps the most famous one involved Netflix and IMDb) also led legal scholars to debate proper policy practices, as well as the broader implications of re-identification. However, this academic discussion is quickly crossing over into the world of practitioners. Recent policy papers published by regulators in the U.S., U.K., and the E.U. strive to create legal and normative guidelines for the manner in which personal information can be shared and released. In addition, corporations are turning to legal counsel for advice on using anonymization to mitigate potential liability.
In an age in which legal scholarship seems to be drifting away from legal practice, this paper demonstrates how both can be brought together. To a great extent, the knowledge conveyed in this paper is now essential for all legal practitioners advising clients with large databases. To demonstrate the relevance of this discussion, note a recent debate regarding the practices of Yodlee, an online financial tools provider, which has also emerged as a powerful financial-data aggregator. As recently reported by the Wall Street Journal, Yodlee sells information, gathered by facilitating consumer transactions, to investors and research firms. The WSJ claimed that the privacy of Yodlee’s clients is being compromised, and Yodlee responded by arguing that all personal information was properly handled and de-identified. It is safe to assume that similar stories involving other companies that collect, market, or de-identify personal data are just around the corner.
Perhaps the central point that Rubinstein and Hartzog’s paper strives to articulate is that classifying personal data as either anonymous or identifiable is both incorrect and useless. With regard to anonymization, the authors further note that “[a]lmost all uses of the term to describe the safety of data sets are misleading, and often they are deceptive. Focusing on the language of process and risk will better set expectations” (p. 4). In other words, anonymity (or rather, de-identification) is not an absolute condition but a matter of degree, one that should be measured by the effort required to reveal the personal data and the chance that this could occur. As the authors note, this latter notion was already introduced (perhaps most famously by Paul Ohm). Rubinstein and Hartzog’s important contribution is to break this notion down into practical steps, formulating a proper data release policy as well as providing a full toolbox of measures to be applied in the process.
Beyond this important observation, the paper’s most substantial analytical contribution is to link appropriate data release policies with the notion of data security. The relationship, as the authors explain, rests on both concepts’ need to meet a specific standard of care in the process, rather than being judged by the outcome. The authors also explain that context matters, and they list various parameters and attributes of the data release process that should be considered when formulating a release policy (p. 32). In addition, they demonstrate that an integral part of a release policy is the set of technical measures applied when distributing and sharing the information. In doing so, they note that the Release-and-Forget Model of data sharing (in which, for example, a de-identified database is merely made available over the internet) is most likely obsolete (p. 36); all data release schemes must include unique measures (technological, contractual, or both) that strive to limit re-identification by potential attackers.
Beyond the rich policy discussion the authors provide in comparing and equating security policy to data release policy, several additional theoretical questions (with practical implications) come to mind and are worthy of future discussion: Is a regulatory response similarly necessary in the security and data release contexts? While companies usually under-invest in security (given, among other factors, the negative externalities of security breaches), there have been instances in which corporate motivation to enhance security was close to sufficient, especially in view of market pressures and the reputational costs of breaches. In many cases, companies’ and clients’ interests in maintaining security are aligned. More often, though, corporations’ and clients’ interests regarding data releases directly conflict. Corporations are interested in capitalizing on their data, whereas consumers do not necessarily share corporate enthusiasm for sharing their de-identified personal information, as they are not likely to benefit from or be compensated for this additional revenue stream. For this and other reasons, the security-release policy comparison has its limits; data release policies might call for stricter rules and enforcement mechanisms.
In addition, it would be interesting to consider the role insurance could play in the process of data release, an issue also currently emerging in the context of data security. An active insurance market might indeed facilitate the shift from outcome- to process-based liability without the need to change the regulatory framework. Therefore, the change the authors advocate might be just around the corner. Insurers could, for instance, limit indemnification to those companies that follow acceptable data-release policies (yet nonetheless cause harm to third parties). Yet relying on insurance markets may not be a safe bet. In this specific context, insurance markets face several difficulties, which warrant further discussion. The comparison to data security can prove illuminating here as well.