Innovation & Equality: An Uneasy Relationship

Olivier Sylvain, Network Equality, 67 Hastings L.J. 443 (2016), available at SSRN.

From the halls of Congress to the cocktail parties of Davos, “innovation” is celebrated as the central rationale for Internet policy. Whatever its utility decades ago, the term is now overused, a conceptual mélange that tries to make up in capaciousness what it lacks in rigor. Fortunately, legal scholars are developing more granular accounts of the positive effects of sociotechnical developments. Olivier Sylvain’s Network Equality is a refreshing reminder that Internet policy is more complex than innovation maximization. Sylvain carefully documents how access disparities interfere with the Internet’s potential to provide equal opportunity.

Network Equality makes a critical contribution to communications law scholarship because it questions the fundamental terms of the last twenty years of debates in the area. For at least that long, key internet policymakers have assumed what Sylvain calls the “trickle-down theory of Internet innovation”—that if policymakers incentivized more innovation at the edge of the network, that would in the end redound to the benefit of all, since increased economic activity online would lead to better and cheaper infrastructure. Now that once-“edge” firms like Facebook are rich enough to propose to dictate the terms of access themselves, this old frame for “net neutrality” appears creaky, outdated, even obsolete. Sylvain proposes a nuanced set of policy aims to replace it.

As Susan Crawford’s Captive Audience shows, the mainstream of internet policymaking has not inspired confidence from American citizens. Large internet service providers are among the least popular companies, even for those with access. They also tend to provide slower service, at higher prices, than ISPs in the rest of the developed world. But the deepest shame of the US internet market, as Sylvain shows, is the troubling exclusion of numerous low-income populations, disproportionately affecting racial minorities.

Sylvain is exactly right to point out that these disparities will not right themselves automatically: policy is needed. Nor should we embrace “poor internet for poor people,” à la the “poor programs for poor people” so common in U.S. history. The situation in Flint shows what happens when the state simply permits some of its poorest citizens to access lower-quality infrastructure. It is not hard to imagine similar results when catch-as-catch-can internet access is proposed as a “solution” to extant infrastructure’s shortcomings.

Sylvain shows that enabling statutes require better access to telecommunications technologies, even as the policymakers charged with implementing them repeatedly demonstrate more interest in innovation than access. Their “trickle-down” ideal is for innovation to draw user interest which, in turn, is supposed to attract further investment in infrastructure. But in a world of vast inequalities, that private investment is often skewed, reinforcing structural inequalities between the “information haves and have-nots” regarding access to and use of the internet.

Treating the internet more like a public resource would open the door to substantive distributional equality. We generally do not permit utilities to market cheaper-but-more-dangerous, or even intermittent, electricity to disadvantaged communities, however “efficient” such second-rate services may be. Nor should we permit wide disparities in quality-of-service to become entrenched in our communicative infrastructure. Sylvain’s Network Equality may spur state-level officials to assure a “social minimum” of internet access available to all.

Sylvain’s work is an exceptionally important contribution to scholarship on access to the internet, not just in the US, but globally. Indian regulators recently stunned Facebook by refusing to permit its “Free Basics” plan. When activists pointed out that the project smacked of colonialism, celebrity venture capitalist Marc Andreessen fumed, “Anti-colonialism has been economically catastrophic for the Indian people for decades.” For him and many other techno-libertarians, the innovation promised by Facebook was worth whatever power asymmetries may have emerged once so much control was exercised by a largely foreign company. If the price of innovation was colonialism—so be it.

Andreessen’s comment was dismissed as a gaffe. But it reveals a great deal about the mindset of tech elites. “Innovation” has become a god term, an unquestionable summum bonum. Few pause to consider that new goods and services can be worse than the old, or merely spark zero-sum competitions. (Certainly the example of high-frequency trading in Sylvain’s article suggests that access speed and quality could be decisive in some markets, without adding much, if anything, to the economy’s productive capacity.) Nor is the unequal spread of innovation critically interrogated enough. Finally, the terms of access to innovation may be dictated by “philanthrocapitalists” more devoted to their own profits and political power than to eleemosynary aims.

According to Sylvain, the FCC has been wrong to treat distributive equality as a second-order effect of innovation, rather than directly pursuing it as a substantive goal. Since inequalities in internet access track demographic differences in race, class, and ethnicity, it is clear that the innovation-first strategy is not working. Sylvain’s perspective should embolden future FCC commissioners to re-examine the agency’s approach to inclusion and equal opportunity, going beyond innovation and competition as ideals. Among academics, it should spur communications law experts to consider whether the goal of greater equality per se (rather than simply striving to assure everyone some minimum amount of speed) is important to the economy. Sylvain’s oeuvre makes the case for internet governance institutions that can better deliberate on these issues. His incisive, insightful work is a must-read for the communications and internet policy community.

Cite as: Frank Pasquale, Innovation & Equality: An Uneasy Relationship, JOTWELL (April 4, 2016) (reviewing Olivier Sylvain, Network Equality, 67 Hastings L.J. 443 (2016), available at SSRN), http://cyber.jotwell.com/innovation-equality-an-uneasy-relationship/.

“Ye Shall Inherit My Magic Sword!” Post-Mortem Ownership in Virtual Worlds

Edina Harbinja, Virtual Worlds – a Legal Post-Mortem Account, 11 SCRIPTed 273 (2014).

Have you ever thought of who will have access to your email when you die? If you have social media, have you prepared a digital will that will allow your loved ones to dispose of your online presence? Have you ever wondered what happens to people’s digital accounts when they pass away? These and many other questions are part of a growing number of legal issues arising from our increasingly networked lives, and they are the main subject of Virtual Worlds – a Legal Post-Mortem Account, which examines post-mortem digital arrangements for virtual world accounts and discusses several possible ways of treating virtual goods so that they can be transferred when the account’s owner dies. The article is a great addition to the growing scholarship in the area, but it is also an invaluable shot in the arm for the subject of virtual worlds.

The legal discussion of virtual worlds has gone through a rollercoaster ride, if you’ll pardon the tired cliché. In 1993, author Julian Dibbell published a remarkable article entitled A Rape in Cyberspace. In it he recounts the happenings of a virtual world called LambdaMOO, a text-based environment with roughly one hundred subscribers where the users adopted assumed personalities (or avatars) and engaged in various role-playing scenarios. Dibbell describes how the community dealt with perceived sexual offences committed by a member upon other avatars. The story of LambdaMOO has become a classic in Internet regulation literature, and has been pondered and retold in seminal works such as Lessig’s Code and Goldsmith and Wu’s Who Controls the Internet. Dibbell’s powerful story of the virtual misconduct of an avatar during the early days of Cyberspace still resonates with legal audiences because it brings us back to crucial questions that have been the subject of literature, philosophy and jurisprudence for centuries. How does a community organise itself? Is external action needed, or does self-regulation work? What constitutes regulatory dialogue? How does regulatory consensus arise? And most importantly, who enforces norms?

There was a period of maturity in the literature as other interesting legal questions began to arise: ownership of virtual goods, consumer protection, and the contractual validity of end-user licence agreements (EULAs), to name a few. The growing legal interest arose from the evident value of the virtual economy. A report on the virtual economy for the World Bank calculated that the global market for online games was $12.6 billion USD in 2009, and that the size of the secondary market in virtual goods (the monetary value of real money transactions in virtual goods) reached an astounding $3 billion USD. The culmination of this more mature era of research consists of two excellent books, Virtual Justice by Greg Lastowka and Virtual Economies: Design and Analysis by Vili Lehdonvirta and Edward Castronova.

However, after that golden period we have seen a marked decline in the number of papers discussing legal issues, with the exception of the continuing existence of the Journal of Virtual Worlds Research. The apparent drop in published research may reflect the fact that virtual worlds themselves have been losing subscribers. The once-mighty Second Life is now mostly mentioned in phrases that begin with “Whatever happened to Second Life?” Even popular massively multiplayer online games (MMOGs) such as World of Warcraft have been losing subscribers. But most importantly, many legal issues that seemed exciting some time ago, such as virtual property, or the legal status of the virtual economy, did not produce the level of litigation expected. Most legal issues have been solved through a combination of consumer and contract law.

Edina Harbinja’s article resurrects interest in virtual worlds by studying an often-neglected area of research: the status of virtual world accounts after the death of the user. While subscription figures have been on the wane, the value of the virtual economy has remained the same. Blizzard recently made it easy for subscribers of World of Warcraft to transfer funds from the real world into the virtual economy, and vice versa, with the introduction of in-game token systems. This has meant an injection of real money into virtual economies, potentially resulting in renewed legal interest in the status of virtual goods as assets.

Harbinja describes the various types of virtual assets and virtual property, using a range of theories of property to justify the treatment of virtual goods as viable and valuable assets subject to the same rights as ‘real’ property. These include rivalrousness, permanence and interconnectedness as elements that are present in virtual goods, making them worthy of legal protection as property. For example, in order to apply tangible notions of property to virtual goods, commentators remark that the possession and consumption of a virtual good must exclude “other pretenders to the same resource.” If virtual goods can have some of the same characteristics that make tangible goods valuable and worthy of protection, then they should be similarly protected.

She then explores various theories of how to deal with virtual property, including the use of contract law in the shape of end-user licensing agreements and the constitutionalization of virtual worlds, and even goes as far as suggesting the creation of a virtual usufruct to describe the situation of property in virtual worlds. A usufruct is a civil law concept dating back to Roman times (as a type of personal servitude) that “entitles a person to the rights of use of and to the fruits on another person’s property.” A virtual usufruct would therefore give a person limited rights to use an item, to transfer it, and even to exclude others from exercising those rights. Harbinja proposes that since the usufruct would terminate on death, the personal representative of the deceased would be required to assess whether any of these rights can be monetised and the value transferred to the account-holder’s estate.

That being the case, the author explores various options for dealing with virtual property after the death of the subscriber. This is tricky, as at the moment there is no single regime of property allocation for virtual goods, and some types of rights may hinge on the value of the virtual goods. The author seems to strongly favour legal reform to allow for some form of usufruct after death, as described above.

This is a welcome addition to the body of virtual world literature, and it may help to inject life into a declining genre, pun intended.

Cite as: Andres Guadamuz, “Ye Shall Inherit My Magic Sword!” Post-Mortem Ownership in Virtual Worlds, JOTWELL (March 7, 2016) (reviewing Edina Harbinja, Virtual Worlds – a Legal Post-Mortem Account, 11 SCRIPTed 273 (2014)), http://cyber.jotwell.com/ye-shall-inherit-my-magic-sword-post-mortem-ownership-in-virtual-worlds/.

International Law and Step-Zero: Going Beyond Cyberwar

Kristen Eichensehr, Cyber War & International Law Step Zero, 50 Tex. Int'l L.J. 355 (2015), available at SSRN.

Kristen Eichensehr recently published a piece entitled Cyber War & International Law Step Zero that describes an unfolding of events that is by now familiar to international lawyers contemplating the emergence of new military technologies. First, a new military technology X (where X has been drones, cyber weapons, nuclear weapons, or lethal autonomous weapons) appears. Nations then ask the “step-zero” question — “does international law apply to the use or acquisition of X?” And the answer is inevitably, “yes, but in some ways existing international law needs to be tweaked to adjust for some of the novel characteristics of X.”

Eichensehr offers a compelling explanation for both the persistence of this question and the recurrent answer. Regarding persistence, she points out that for international law, unlike domestic law, the bound parties—nations—bind themselves consensually. For example, she writes that “The tradition of requiring state consent (or at least non-objection) to international law predisposes the international legal community to approach new issues from the ground up: When a new issue arises, the question is whether international law addresses the issue, because if there is no evidence that it does, then it does not.” In other words, asking the step-zero question is the first step in proceeding down a path that may result in a state’s opting out.

Regarding the frequent recurrence of the same answer (i.e., “yes”), she points out that international law—especially International Humanitarian Law (“IHL”)—is often adaptable to new weapons technologies, in large part because the interests that IHL seeks to protect are constant. (I would prefer the term “values” rather than “interests,” but the point is the same.) For example, she writes that “[e]xisting law was designed, for example, to protect civilians from the consequences of conflict. That concern transcends the type of weapon deployed. Thus, although the nature of the weapon has changed, the underlying concern has not, which reduces one possible justification for altering existing law.” Lastly, she argues that even if existing law does not perfectly apply to new technologies, asserting the contrary raises the fearsome prospect of a world in which a new technology is not subject to any legal constraint at all. In her words, “[e]ven if existing law is an imperfect means of regulating States’ actions . . ., imperfect law is preferable to no law at all.”

The explanation seems compelling to me, though I confess from the start that my understanding of law is that of an amateur. But I’m also a long-time observer of many military technologies. I’ve thought often about how international law attends to these technologies, and I suggest that her explanation is applicable to a broader range of phenomena than she discusses.

Speaking in very broad terms, law—and especially international law—depends heavily on precedent. Precedent provides stability, which is regarded as a desirable attribute of law in large part because in the absence of legal stability, people—and nations—would have no way of knowing how law would regard their actions. But technologists have very different goals. Rather than stability, the dream of every technologist is to invent a disruptive technology, one that completely changes the way people can accomplish familiar goals. Even better is when a technologist can create not just new ways of doing old business, but can invent entirely new lines of business.

Against this backdrop, consider a broadened step-zero sequence of events. A new technology A is invented. At first, when the use of A is small and limited, the law pays little or no attention to it. But as A comes to be used by more and more people, a variety of unanticipated consequences appear, some of which are regarded as undesirable by some people. These people look to the law for remedy, and they naturally ask the question “how, if at all, does existing law apply?” Their lawyers look for precedent—similar cases handled in the past that may provide some guide for today—and there is always a previous case involving technology that bears some similarity to A today. So the answer is, “yes, existing law applies, but tweaks are necessary to apply precedent properly.”

So, I suggest, Eichensehr’s step-zero analysis of cyber weapons and international law sheds light on a very long-standing tension between technological change and legal precedent. For that reason, I think anyone interested in that tension should consider her analysis.

Cite as: Herbert Lin, International Law and Step-Zero: Going Beyond Cyberwar, JOTWELL (February 8, 2016) (reviewing Kristen Eichensehr, Cyber War & International Law Step Zero, 50 Tex. Int'l L.J. 355 (2015), available at SSRN), http://cyber.jotwell.com/international-law-and-step-zero-going-beyond-cyberwar/.

Data for Peace: The Future of the Internet of Things

The Atomic Age of Data: Policies for the Internet of Things, Report of the 29th Annual Aspen Institute Conference on Communications Policy, Ellen P. Goodman, Rapporteur, available at SSRN.

The phrase “Internet of Things,” like its cousin “Big Data,” only partially captures the phenomenon that it is meant to describe. The Atomic Age of Data, a lengthy report prepared by Ellen Goodman (Rutgers Law) following a recent Aspen Institute conference, bridges the gap at the outset: “The new IoT [Internet of Things] – small sensors + big data + actuators – looks like it’s the real thing. … The IoT is the emergence of a network connecting things, all with unique identifiers, all generating data, with many subject to remote control. It is a network with huge ambitions, to connect all things.” (P. 2) The Atomic Age of Data is not a scholarly piece in a traditional sense, but it is the work of a scholar, corralling and shaping a critical public discussion in an exceptionally clear and thoughtful way.

The IoT is in urgent need of being corralled, at least conceptually and preliminarily, so that a proper set of public policy questions may be asked. What are the relevant opportunities and hazards? What are its costs and benefits, to the extent that those can be discerned at this point, and where should we be looking in the future? That set of questions is the gift of this report, which is the documented product of many expert and thoughtful minds collaborating in a single place (face to face, rather than via electronic networks).1

Simply defining the IoT is one continuing challenge. As The Atomic Age of Data affirms, the IoT isn’t the Internet, though it is enabled by the Internet and in many ways it extends the Internet. (P. 2) What it is, where it is, how it functions, what it might do in the future – or permit others to do – remains at least a little cloudy. The first contribution that The Atomic Age of Data makes is simply to map these contours, contrasting the Internet of Things with the network of networks that today we call the Internet, or the Internet of People. It identifies several distinguishing characteristics of the IoT: its sheer scale (the amount of data that can be gathered from ubiquitous sensor networks); the reduction or even elimination of user control over data collection; the widespread deployment of actuators, embedding a level of agency in the IoT; data analytics that rest atop communications and transactions; its demonstrably global character (in contrast to the initiated-in-the-US character of the Internet); and its framing of data as infrastructure, enabling the provision of a broad variety of services.

The bulk of The Atomic Age of Data consists of a comprehensive sorting of policy questions and recommendations. The foundational premise is the idea that data itself is (or are) infrastructure – “as a vital input to progress much like water and roads, and just as vulnerable to capture, malicious or discriminatory use, scarcity, monopoly and sub-optimal investment”. (P. 12) The analogy between data infrastructure and communications infrastructure is purposeful. Characterizing data as infrastructure, like characterizing communications as infrastructure, only frames policy and technical questions; it doesn’t resolve them. Data ownership and data access are related questions. They connect to questions of data formats, interoperability and interconnectivity, and common technical standards. Identifiability of data is a cross-cutting concern for privacy purposes. The respective domains of public and private investment in the IoT, and corresponding expectations of public access and use and private returns, remain open questions. The report clusters these topics together; one might label the cluster with a single theme: governance.2

How, or more precisely, by whom, will all of this data be produced? The report examines the adequacy of incentives for private (commercial) provision of data and the appropriate role for government as regulator and supplier of subsidies.

This “data as infrastructure” section of The Atomic Age of Data concludes with a series of policy recommendations, focusing on two overarching principles (also reduced to several more specific recommendations): that there should be broad accessibility of data and data analytics, with open access to some (but not all); and that government should subsidize and facilitate data production, particularly in cases where data is an otherwise under-produced public good.

The Atomic Age of Data moves next to a review of privacy topics in the context of the IoT, beginning with when, whether, and how to design privacy protections into systems from the start, and the role and implementation of Fair Information Practice Principles (FIPPs). As the report notes, these are critical questions for the IoT because so much of the IoT is invisible to individuals and has no user interface to which data protection and FIPPs might be applied. To what extent should privacy protection be designed into the IoT, and to what extent should privacy protection be a matter of strategies that focus on individual choice?3 To what extent might choice undermine the production, collection, and processing (aggregation) of good data, or the right data? Privacy questions thus intersect with incentive questions. Cost, benefit, and values questions extend further. To what extent is choice even technologically feasible without compromising other societal values? Production, collection, identification, and processing/aggregating data lead next to related privacy questions about retention and curation of data.

This privacy section concludes with a brief set of recommendations, focusing on three overarching principles (again with several more specific points): that IoT systems should design in privacy controls to minimize the collection of personally identifiable information; that IoT systems should effectuate FIPPs to the extent possible; and that individuals should have a way to control collection and transmission of their personal data.

The balance of the report is divided among four additional topics that are treated more briefly, though in each case the topic concludes with a short set of basic recommendations. The first is “Equity, Inclusion, and Opportunity,” which collects questions about prospects of citizen empowerment and disempowerment via the IoT. Data collection in some respects signifies “who counts” in modern society – whose voice and presence “matters,” both individually and collectively, but also, in some respects, whose voice and presence is worth watching. The report points out the relevance of comparable concerns with respect to the deployment of broadband communications infrastructure and its impacts on things like access to education and health resources. The second is “Civic Engagement,” which touches on how IoT technologies might be used both by governments and by the private sector to increase democratic accountability. The third is “Telecommunications Network Architecture,” which concerns the intersection of the IoT and competition, service innovation, and interoperability among IoT systems and related communications networks. The key topic here is the heterogeneity of the data generated by IoT applications, recalling the question of whether the Internet of Things is, or should be, truly a single “Internet” at all, with interconnected networks, connections across verticals (home, health, transport, for example), and common platforms. (P. 39) The fourth is security, which raises the relatively simple question of security vulnerabilities introduced at both the level of individual devices and at systemic levels. The question may be simple but the answer assuredly is not; this section of the report is comparatively brief, perhaps because the salience of the interest is so obvious.

The Atomic Age of Data finishes with a case study on the Smart City, which refers to the idea of networks of ubiquitous sensors deployed within urban infrastructure to generate data about usage patterns and service needs. (P. 45) The discussion of this use case is decidedly and appropriately pragmatic, putting utopian hopes for the Smart City in context and noting privacy and surveillance concerns and related but independent equity concerns.

To conclude this review:

This is an enormously clear, useful, and timely product. One cannot critique a report of a conference on the ground that it did not address a critical topic, if the conference itself did not address that topic. Yet as helpful as The Atomic Age of Data is in canvassing the policy territory of the IoT, I couldn’t help but notice how the boundaries of that territory are implicitly defined. The Atomic Age of Data contains a lot of discussion of “Internet” topics and less discussion of “things.” In this day and age, one should never take things or thing-ness for granted. What is a thing? 3D printing, the current label for additive manufacturing, promises to revolutionize the meaning of “thingness” – because objects may be dynamic and emergent, as well as static and fixed4 – just as the “Internet of Things” promises to revolutionize the meanings of identity and presence.

“Data for Peace,” the title of this review, builds a bit on the naïve sense of modernity and progress expressed (purposefully, no doubt) by the report’s Atomic Age title. During the 1950s and 1960s, “atomic” things were full of optimism. Later, we learned that splitting the atom changed the meanings of matter in unexpected ways. “Atomic” gave way to a variety of more complex political, cultural, and technological expressions and concerns, few of which were foreseen at the dawn of the Atomic Age. Similarly, 3D printing may turn out to change the meanings of matter in unexpected – but other – ways. As the IoT and Big Data mature – along with 3D printing – I expect that future reports on their implications will be similarly but unexpectedly complex.



  1. Recent and related but less comprehensive reviews of IoT policy questions include Internet of Things: Privacy & Security in a Connected World, Federal Trade Commission Staff Report (January 2015), https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf; and Rolf H. Weber, Internet of Things—Governance Quo Vadis?, 29 Computer Law & Security Review 341 (2013), doi:10.1016/j.clsr.2013.05.010.
  2. For an expansive treatment of infrastructure and governance issues, see Brett M. Frischmann, Infrastructure: The Social Value of Shared Resources (2013).
  3. The significance of these questions is highlighted in the Mauritius Declaration, cited in Ellen P. Goodman, The Atomic Age of Data: Policies for the Internet of Things, 25 (2015); see also Mauritius Declaration on Internet of Things, 36th Annual Conference of Data Protection and Privacy Commissioners (October 14, 2014), http://www.caa.go.jp/planning/kojin/pdf/1E_Mauritius_Declaration.pdf.
  4. See Deven R. Desai & Gerard N. Magliocca, Patents, Meet Napster: The Disruptive Power of 3D Printing, 102 Georgetown L.J. 1691 (2014).
Cite as: Michael Madison, Data for Peace: The Future of the Internet of Things, JOTWELL (January 7, 2016) (reviewing The Atomic Age of Data: Policies for the Internet of Things, Report of the 29th Annual Aspen Institute Conference on Communications Policy, Ellen P. Goodman, Rapporteur, available at SSRN), http://cyber.jotwell.com/data-for-peace-the-future-of-the-internet-of-things/.

The Practice and Theory of Secure Data Releases

Ira Rubinstein & Woodrow Hartzog, Anonymization and Risk, 91 Wash. L. Rev. (forthcoming 2016), available at SSRN.

In the current Age of Big Data, companies are constantly striving to figure out how to better use data at their disposal. And it seems that the only thing better than big data is more data. However, the data used is often personal in nature and thus linked to specific individuals and their personal details, traits, or preferences. In such cases, sharing and use of the data conflict with privacy laws and interests. A popular remedy applied to sidestep privacy-based concerns is to render the data no longer “private” by anonymizing it. Anonymization is achieved through a variety of statistical measures. Anonymized data, so it seems, can be sold, shared with researchers, or even possibly released to the general public.

Yet, the Age of Big Data has turned anonymization into a difficult task, as the risk of re-identification seems to be constantly looming. Re-identification is achieved by “attacking” the anonymous dataset, aided by the existence of vast datasets (or “auxiliary information”) from various other sources available to the potential attacker. It is, therefore, difficult to establish whether anonymization was achieved, whether privacy laws pertain to the dataset at hand, and if so, how. In a recent paper, Ira Rubinstein and Woodrow Hartzog examine this issue’s pressing policy and legal aspects. The paper does an excellent job of summarizing the way that the current academic debate in this field is unfolding. It describes recent failed and successful re-identification attempts and provides the reader with a crash course on the complicated statistical methods of de-identification and re-identification. Beyond that, it provides both theoretical insights and a clear roadmap for confronting challenges to properly releasing data.

The discussion on anonymization, or de-identification (the more precise term which the authors choose to apply, as it does not imply full anonymization), was once mostly of academic interest: Statisticians introduced ways to anonymize data, while mathematicians and computer scientists strove to prove re-identification “attacks” were nonetheless possible. Several successful re-identification attacks (perhaps the most famous one involved Netflix and IMDb) also led legal scholars to debate proper policy practices, as well as broader implications of re-identification. However, this academic discussion is quickly crossing over into the world of practitioners. Recent policy papers published by regulators in the U.S., U.K., and the E.U. strive to create legal and normative guidelines for the manner in which personal information can be shared and released. In addition, corporations are turning to legal counsel for advice on using anonymization to mitigate potential liability.

In an age in which legal scholarship seems to be drifting away from legal practice, this paper demonstrates how both can be brought together. To a great extent, the knowledge conveyed in this paper is now essential for all legal practitioners advising clients with large databases. To demonstrate the relevance of this discussion, note a recent debate regarding the practices of Yodlee, an online financial tools provider, which has also emerged as a powerful financial-data aggregator. As recently reported by the Wall Street Journal, Yodlee sells information, gathered by facilitating consumer transactions, to investors and research firms. The WSJ claimed that Yodlee clients’ privacy is being compromised, and Yodlee responded by arguing that all personal information was properly handled and de-identified. It is safe to assume that similar stories involving other companies’ collecting, marketing, or de-identifying personal data are just around the corner.

Perhaps the central point that Rubinstein and Hartzog’s paper strives to articulate is that classifying personal data as either anonymous or identifiable is both incorrect and useless. With regard to anonymization, the authors further note that: “[a]lmost all uses of the term to describe the safety of data sets are misleading, and often they are deceptive. Focusing on the language of process and risk will better set expectations” (P. 4). In other words, anonymity (or rather, de-identification) is not an absolute term, but one indicating degrees on a scale – one that should be measured by the effort required to reveal the personal data, and the chance it could occur. As the authors note, this latter notion was already introduced (perhaps most famously by Paul Ohm). Rubinstein and Hartzog’s important contribution is to break this notion down into practical steps – formulating a proper data release policy as well as providing a full toolbox of measures to be applied in the process.

Beyond this important observation, the paper’s most substantial analytical contribution is to link appropriate data release policies with the notion of data security. The relationship, as explained by the authors, is based on these concepts’ mutual need to meet a specific standard of care in the process, rather than necessarily being judged by the outcome. The authors also explain that context matters, and list various parameters and attributes of the data release process that should be considered when formulating a release policy. (P. 32) In addition, they demonstrate that an integral part of a release policy is the technical measures applied when distributing and sharing the information. In doing so, they note that the Release-and-Forget Model of data sharing (in which, for example, a de-identified database is merely made available over the internet) is most likely obsolete (P. 36); all data release schemes must include unique measures (technological, contractual, or both) which strive to limit re-identification by potential attackers.

Beyond the rich policy discussion the authors provide in comparing and equating security policy to data release policy, several additional theoretical questions (with practical implications) come to mind and are worthy of future discussion: Is a regulatory response similarly necessary in the security and data release contexts? While companies usually under-invest in security (given, among other factors, the negative externalities of security breaches), there have been instances in which corporate motivation to enhance security was close to sufficient, especially in view of market pressures and the reputational costs of breaches. In many cases, companies’ and clients’ interests in maintaining security are aligned. More often, though, corporations’ and clients’ interests regarding data releases directly conflict. Corporations are interested in capitalizing on their data, whereas consumers do not necessarily share corporate enthusiasm for sharing their de-identified personal information, as they are not likely to benefit from or be compensated for this additional revenue stream. For this and other reasons, the security-release policy comparison has its limits; data release policies might call for stricter rules and enforcement mechanisms.

In addition, it would be interesting to consider the role insurance could play in the process of data release—an issue also currently emerging in the context of data security. An active insurance market might indeed facilitate the shift from outcome- to process-based liability without the need to change the regulatory framework. Therefore, the change the authors here advocate for might be just around the corner. Insurers could, for instance, limit indemnification to those companies that follow acceptable data-release policies (yet nonetheless cause harms to third parties). Yet, relying on insurance markets may not be a safe bet. In this specific context, insurance markets face several difficulties, which mandate further discussion. The comparison to data security can prove illuminating here as well.

Cite as: Tal Zarsky, The Practice and Theory of Secure Data Releases, JOTWELL (November 30, 2015) (reviewing Ira Rubinstein & Woodrow Hartzog, Anonymization and Risk, 91 Wash. L. Rev. (forthcoming 2016), available at SSRN), http://cyber.jotwell.com/the-practice-and-theory-of-secure-data-releases/.

Is it Fair to Sell Your Soul?

Marco Loos & Joasia Luzak, Wanted: A Bigger Stick. On Unfair Terms in Consumer Contracts with Online Service Providers (Ctr. for the Study of European Contract Law, Working Paper No. 2015-01, 2015), available at SSRN.

The reliance of online service providers on lengthy terms of service or related documents is easily mocked. When I teach this topic, I can illustrate it with the selling of souls, in cartoon or written form, point to the absurd length of the policies of popular sites, and highlight experiments that call us out on our love of the I Accept button. But behind the mirth lie a number of serious legal issues, and the recent working paper by Marco Loos & Joasia Luzak of the University of Amsterdam tackles some of them.

Loos & Luzak work at the Centre for the Study of European Contract Law, and their particular concern is with the European Union’s 1993 Unfair Contract Terms Directive. They point out that although the gap between typical terms and policies and the requirements of the Directive is often noted, it is rarely studied in detail. In their thorough study, the authors examined the instruments used by five well-known service providers, and evaluated them against the Directive’s stipulation that mass terms (those not individually negotiated with the consumer) be ‘fair’.

The detailed paper, full of examples from the policies of the services under review (Dropbox, Facebook, Twitter and Google), covers topics including modification and termination of the agreement, as well as how liability is managed. Despite the focus of the work being the UCT Directive, the analysis is also linked with developments in related fields of law, such as the gradual expansion through Court of Justice of the EU (CJEU) decisions of the ‘consumer’ provisions of the Brussels Regulation on jurisdiction. The authors save particular criticism for the lack of clarity in how terms are drafted.

Importantly, the paper also tackles the preliminary question of whether the statements we know and love actually fall within the scope of the Directive, which is about contracts and about consumers. They challenge the assumption that ‘free’ services are excluded, but do note that in some cases more detail on the actual use of an account may be necessary in order to be certain that the Directive is applicable.

What Loos & Luzak have done here also contributes to debates on consent, rights and technology. In data protection and in consumer law, much depends on assumptions about information – what must be provided, how it informs decisions, and what legal options are available to the end user. One cannot doubt the skill that goes into drafting some of the examples that are cited in this paper, but the authors are right to call for greater study and vigilance – particularly on the part of national consumer authorities. They hope that if the CJEU is faced with appropriate questions in future years, the result might be a gradual raising of consumer protection standards. Indeed, this might well have implications across the world – as Jack Goldsmith and Tim Wu discussed regarding earlier data protection disputes in their 2006 book, Who Controls The Internet? – and of course other agencies, such as the FTC and the Australian Privacy Commissioner, are interested in these issues. So, this recent work on common clauses and legal requirements for fairness should interest European and non-European audiences alike.

Cite as: Daithí Mac Síthigh, Is it Fair to Sell Your Soul?, JOTWELL (October 29, 2015) (reviewing Marco Loos & Joasia Luzak, Wanted: A Bigger Stick. On Unfair Terms in Consumer Contracts with Online Service Providers (Ctr. for the Study of European Contract Law, Working Paper No. 2015-01, 2015), available at SSRN), http://cyber.jotwell.com/is-it-fair-to-sell-your-soul/.

Who Regulates the Robots?

Woodrow Hartzog, Unfair and Deceptive Robots, 74 Maryland L. Rev. 785 (2015).

When the law faces a new technology, a basic question is who governs it and with what rules? Technological development disrupts regulatory schemes. Take, for example, the challenges the Federal Aviation Administration (FAA) now faces with drones. The FAA usually regulates aircraft safety. Drones force the FAA to consider—and in some cases reject as outside the agency’s mandate—issues of privacy, spectrum policy, data security, autonomous decision-making, and more. The pace and complexity of recent technological change have led some to call for the creation of new agencies, including a Federal Robotics Commission. But given the significant hurdles involved in agency creation, it is valuable in the short run to assess what tools we already have.

In Unfair and Deceptive Robots, Woodrow Hartzog takes up the question of who will govern consumer robots. Hartzog proposes that the Federal Trade Commission (FTC) is best equipped to govern most issues that consumer robots will soon raise. He reasons that the FTC is well prepared both as a matter of subject-matter expertise and as a matter of institutional practice.

This article was a hit at the 2015 We Robot conference. It blends practical guidance, expert knowledge of the FTC, and a range of thoughtful and often amusing examples. It also provides a window onto a number of framing questions recurring in the field: to what extent are robots new? How does that answer vary, depending on what aspect of robots you focus on? And how do you best choose or design institutions to adapt to fast-changing technology?

Hartzog points out a number of ways in which robots, or really robotics companies, might take advantage of vulnerable consumers. A company might falsely represent a robot’s capabilities, touting effectiveness in sped-up videos that make a robot look more capable than it is. Or a company might use a “Wizard-of-Oz” setup to operate a robot from behind the scenes, causing it to appear autonomous when it is not. A company might use a robot to spy on people, or to nudge their behavior. Autonomous robots and robotic implantables raise their own classes of consumer protection concerns. If you were not already worried about robots, you will be after reading this. From the robot vacuum that ate its owner’s hair, to flirtatious Twitter bots, to a dying Roomba pleading for a software upgrade, to the “Internet of Things Inside Our Body,” Hartzog’s examples are visceral and compelling.

The FTC, Hartzog claims, is thankfully well positioned to address many of the consumer protection issues raised by this pending host of scambots, decepticons, autobots, and cyborgs. The FTC has a broad grant of authority to regulate “unfair and deceptive” trade practices. It has used that Section 5 authority in recent years to regulate online privacy and data security. While the FTC started by addressing classic truth-in-advertising problems, and enforcing company promises, it has developed more complex theories of unfairness that it now extends to data security and user interface design. His recent authoritative work with Dan Solove on the FTC’s Section 5 “jurisprudence” makes Hartzog uniquely qualified to discuss FTC coverage of robotics. There is no doubt that this paper will have practical applicability.

Hartzog also contributes to ongoing conversations about technological change and regulatory design. He touts the FTC’s institutional ability to adapt to changes through productive co-regulation, including its tendency to defer to industry standards and avoid “drastic regulatory lurches.” Hartzog thus identifies not just substantive but structural reasons why the FTC is a good fit for governing consumer robots.

But the view Hartzog presents is a little too rosy. The FTC has vocal and litigious critics whom Hartzog mainly ignores. Not everyone is happy with its settlement agreement process, which some regard as arbitrary and lacking notice. While Hartzog mentions in passing that the FTC’s authority to regulate data security has been challenged, the pending Wyndham decision in the Third Circuit could seriously rock the Section 5 boat. Moreover, the FTC’s focus on notice and design is in tension with developing First Amendment jurisprudence on commercial and compelled speech. And there are plenty of other good reasons why we might want to be careful about focusing governance on technological design as Hartzog proposes.

If I have one larger criticism, it is that the “which agency is best” framing is a little disingenuous. Hartzog frames his question in a way that drives his answer. He asks which agency is best positioned for governing consumer protection issues raised by robots; unsurprisingly, his answer is the FTC, a consumer protection agency. If he had asked which regime is best for governing robotic torts, or which is best for governing robotic IP issues, the answer would have differed. In other words, the article provides solid guidance for how the FTC might approach robots. It does not answer, or really justify asking, the question of who governs them best.

Which brings us to the larger conversation this piece briefly engages in, on just how new and disruptive robots will be. I am increasingly convinced that the answer to this question depends on the asker’s perspective. Asking how robots disrupt a particular area of law will highlight the features of the technology and its social uses that are disruptive to that particular area of law. A new technology will be disruptive to different regulatory regimes in different ways. And because Hartzog picks the FTC as his lens, he is bound to solutions the FTC provides, and somewhat blinded to the problems it cannot solve. Robots fit within the FTC’s consumer protection regime, but they also fundamentally disrupt it. As with the Internet of Things, the owner of the robot is often not the only person facing harm. The FTC protects the consumer, not the visitor to a consumer’s house. As Meg Jones has recently pointed out, the FTC is not particularly well equipped to handle problems raised by this “Internet of Other People’s Things.”

Unfair and Deceptive Robots is clever and extremely useful: it tells us what the FTC is equipped to handle, and argues for the FTC’s competence in this area. As a robot’s road map to FTC jurisprudence, the piece shines. But regulating robots will take many regulatory players. While we are trying to spot the gaps and encourage them to cooperate, it might be counterproductive to name one as the “best.”


Cite as: Margot Kaminski, Who Regulates the Robots?, JOTWELL (September 29, 2015) (reviewing Woodrow Hartzog, Unfair and Deceptive Robots, 74 Maryland L. Rev. 785 (2015)), http://cyber.jotwell.com/who-regulates-the-robots-2/.

What is a Theorist For? The Recruitment of Users into Online Governance

Kate Crawford & Tarleton Gillespie, What is a flag for? Social media reporting tools and the vocabulary of complaint, New Media & Society (2014), available at SSRN.

The problem of handling harassing and discriminatory online speech, as well as other forms of unpleasant and unlawful content—infringing, privacy-invading, or otherwise tortious—has been a matter for public discussion pretty much since people noticed that there were non-governmental intermediaries involved in the process. From revenge porn to videos of terrorist executions to men kissing each other to women’s pubic hair, controversies routinely erupt over whether intermediaries are suppressing too much speech, or not enough.

“Flagging” offensive content is now an option offered to users across many popular online platforms, from Facebook to Tumblr to Pinterest to FanFiction.net. Flagging allows sites to outsource the job of policing offensive content (however defined) to unpaid—indeed, monetized—users, as well as to offer a rhetoric to answer charges of censorship against those sites: the fact that content was reported makes the flagging user(s) responsible for a deletion, not the platform that created the flagging mechanism. But the meaning of flags, Crawford and Gillespie persuasively argue, is “anything but straightforward.” Users can use flags strategically, as can other actors in the system who claim to be following community standards.

One of the most significant, but least visible, features of a flagging system is its bluntness. A flag is binary: users can only report one level of “badness” of what they flag, even if they are allowed several different subcategories to identify their reasons for flagging. Nor are users part of the process that results, which is generally opaque. (As they note, Facebook has the most clarity on its process, likely not because of its commitment to user democracy but because it has faced such negative PR over its policies in the past.)

Another, related feature is flagging’s imperviousness to precedent—the memory-traces that let communities engage in ongoing debates about norms, boundaries, and difficult marginal judgments. Crawford and Gillespie explain:

[F]lags speak only in a narrow vocabulary of complaint. A flag, at its most basic, indicates an objection. User opinions about the content are reduced to a set of imprecise proxies: flags, likes or dislikes, and views. Regardless of the proliferating submenus of vocabulary, there remains little room for expressing the degree of concern, or situating the complaint, or taking issue with the rules. There is not, for example, a flag to indicate that something is troubling, but nonetheless worth preserving. The vocabulary of complaint does not extend to protecting forms of speech that may be threatening, but are deemed necessary from a civic perspective. Neither do complaints account for the many complex reasons why people might choose to flag content, but for reasons other than simply being offended. Flags do not allow a community to discuss that concern, nor is there any trace left for future debates. (P. 7.)

We often speak of the internet as a boon for communities, but it is so only in certain ways, and website owners can structure their sites so that certain kinds of communities have a harder time forming or discussing particular issues. Relatedly, YouTube’s Content ID, now a major source of licensing revenue for music companies, allows those companies to take down videos to which they object regardless of the user’s counternotifications and fair use claims, because Google’s agreements with the music companies go beyond the requirements of the DMCA. No reasoned argument need be made, as it would be in a court of law, and so neither the decisionmakers nor the users subject to YouTube’s regime get to think through the limiting principles—if any—applied by the algorithms and/or their human overlords. I have similar concerns with Amazon’s Kindle Worlds (and the Kindle’s ability to erase or alter works that Amazon deems erroneously distributed, leaving no further trace) compared to the organic, messy world of noncommercial fan fiction.

This is a rich paper with much to say about the ways that, for example, Flickr’s default reporting of images as “inappropriately classified” rather than completely unacceptable structures users’ relation to the site and to each other. “Whether a user shoehorns their complex feelings into the provided categories in a pull-down menu in order to be heard, or a group decides to coordinate their ‘complaints’ to game the system for political ends, users are learning to render themselves and their values legible within the vocabulary of flags.” Crawford and Gillespie’s useful discussion also offers insights into other forms of online governance, such as the debates over Twitter’s reporting system and the merits of “blocking” users. A “blocking” feature, available for example on Tumblr and Twitter, enables a logged-in user to avoid seeing posts from any blocked user; the offensive user disappears from the site, but only from the blocker’s perspective. Like denizens of China Miéville’s Besźel and Ul Qoma, they occupy the same “space” but do not see each other. This literalization of “just ignore the trolls” has its merits, but it also allows the sites to disclaim responsibility for removing content that remains visible to, and findable by, third parties. We may be able to remake our view of the world to screen out unpleasantness, but the unpleasantness persists—and replace “unpleasantness” with “slander and threats” and this solution seems more like offering victims blinders than protecting them.

What about total openness instead? As Crawford and Gillespie point out, Wikipedia generally retains a full history of edits and removals, but that process can also become exclusionary and opaque in other ways. Nonetheless, they suggest that an “open backstage” might offer a good way forward, in that it could “legitimize and strengthen a site’s decision to remove content. Significantly, it would offer a space for people to articulate their concerns, which works against both algorithmic and human gaming of the system to have content removed.” Moreover, an “open backstage” would emphasize the ways in which platforms are social systems where users can and should play a role in shaping norms.

I’m not as sanguine about this prospect. As Erving Goffman explained so well, even “backstage” is in fact a performance space when other people are watching, so I would expect new and different forms of manipulation (as has happened on Wikipedia) rather than a solution to opacity. Proceduralization and the ability to keep arguing endlessly can be a recipe for creating indifference by all but a tiny, unrepresentative fraction of users, which arguably is what happened with Wikipedia. It’s a new version of the old dilemma: If people were angels, no flags would be necessary. If angels were to govern people, neither external nor internal controls on flags would be necessary.

As someone who’s been deeply involved in writing and subsequently revising and enforcing the terms of service of a website used by hundreds of thousands of people, I know all too well the impossibility of writing out in advance every way in which a system might be abused by people acting in good faith, or even just (mis)used by people who simply don’t share its creators’ assumptions. Open discussion of core discursive principles can be valuable for communities; but freewheeling discussion, especially of individual cases, can also be destructive. And, as Dan Kahan has so well explained, our different worldviews often mean that a retreat from one field (from ideology to facts, or from substance to procedure, or vice versa) brings all the old battles to the new ground.

Still, there’s much to like about the authors’ call for a system that leaves some traces of debates over content and the associated worldviews, instead of a flagging and deletion system that “obscures or eradicates any evidence that the conflict ever existed.” Battles may leave scars, but that doesn’t mean that the better solution is a memory hole.

Cite as: Rebecca Tushnet, What is a Theorist For? The Recruitment of Users into Online Governance, JOTWELL (August 14, 2015) (reviewing Kate Crawford & Tarleton Gillespie, What is a flag for? Social media reporting tools and the vocabulary of complaint, New Media & Society (2014), available at SSRN), http://cyber.jotwell.com/what-is-a-theorist-for-the-recruitment-of-users-into-online-governance/.

Internet Privacy: A Chinese View

The overall issue addressed in this book has received renewed attention recently. On April 1, 2015, President Obama issued the Executive Order “Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities,” which allows the Treasury Department to freeze the assets of individuals and entities that are directly or indirectly involved in such activities. Furthermore, in early April, US Homeland Security officials met with their Chinese counterparts in a series of meetings in China to discuss cybersecurity issues. And in late April the US Department of Defense issued its latest document on cyber strategy, which names China, among other countries, as one of the “key cyber threats.”

However, the chosen article focuses on an issue that is easily forgotten in these grand debates: citizens’ privacy, since threats to privacy come from the inside as well as from the outside. The author is a Professor of Communication at the School of Digital Media and Design Arts at Beijing’s renowned University of Posts and Telecommunications (BUPT). He starts with an overview of the present legal framework for protecting the right to Internet privacy in China. (P. 247) I still vividly remember a presentation I gave in October 1996 at the China Youth College for Political Science (now the China Youth University for Political Sciences) in Beijing on “The Function of Law in an Information Society,” addressing privacy issues. At the end of my talk one of the Chinese students stood up and boldly asked me what my talk had to do with the current situation in China.

But I digress. The situation has changed profoundly: Professor Xu’s overview is condensed, yet sufficiently detailed to give an insight into the development of concepts of privacy in China, from an understanding of privacy as “shameful or embarrassing private [family] affairs,” to privacy as a more comprehensive, though still defensive, notion, and from there toward a broader understanding of affected “personal information.”

The current “Deepening Reform Campaign” in China has been emphasizing the Rule of Law. The Chinese concept of law is primarily an instrumental one. Rule of Law in this context means ensuring that the judiciary subsystem works efficiently, free from cross-interference—for example with regard to corruption cases—with optimal resources as regards the educational standard of its personnel, and that it meets its aim of ensuring fairness across local and provincial levels. All these principles were reconfirmed just last month by a set of specific regulations from the General Office of the Communist Party’s Central Committee and the General Office of the State Council. At the same time, the judiciary is to be seen as embedded in the guiding authority of the two law-making powers: the government as the administrative body and the Chinese Communist Party as the checking political power.

In Xu’s view the current system of legal privacy protection still needs to be fundamentally improved. There is no stringent overall legal concept of privacy. “Hundreds of laws and regulations have been enacted to protect the right to online privacy, but they are quite unsystematic and hard to put into practice.” (P. 252) (Sounds familiar.) Responsibilities and liabilities in civil law should be clearly established, and criminal law violations need to be defined more precisely. He points to Hong Kong’s experience as a learning resource for the further development of Chinese privacy protection, just as this note seeks to point to the necessity of enlarging our view on privacy beyond our European and American concerns.

Xu thus provides a useful insight into the ongoing development of the concept of privacy in the Chinese environment. As with such developments in the US and Europe, they need to be put into the context of the respective legal system.

Cite as: Herbert Burkert, Internet Privacy: A Chinese View, JOTWELL (July 14, 2015) (reviewing Jinghong Xu, Evolving Legal Frameworks for Protecting the Right to Internet Privacy in China, in China and Cybersecurity: Espionage, Strategy, and Politics in the Digital Domain 242 (Jon R. Lindsay, Tai Ming Cheung & Derek S. Reveron eds., 2015)), http://cyber.jotwell.com/internet-privacy-a-chinese-view/.

An Internet X-Ray Machine for the Masses

Aldo Cortesi, et al., mitmproxy.

Thank you to the Jotwell editors for indulging me as I stretch their mission statement (and quite possibly their patience) by highlighting not an article, nor even a conventional work of scholarship, but rather a piece of software as the “thing I like (lots)”: mitmproxy, a tool created by Aldo Cortesi, who shares authorship credit with Maximilian Hils and a larger “mitmproxy community.”

mitmproxy does just what it says on the tin (assuming you know how to read this particular kind of tin). It’s a Man-In-The-Middle Proxy server for the web. In English, this means that the tool allows you to reveal, with finely wrought control, exactly what your browser is saying and to whom. It is an X-ray machine for the web, one which lays many of the Internet’s secrets bare. Let me extol the many virtues of this well-designed piece of software, and after I do that, let me explain why it strikes me as an important contribution to legal scholarship.

There are many other tools that do what mitmproxy does. Where mitmproxy shines relative to everything I have tried is the way it embraces both usability and power without compromising either.

Take usability first. Especially for Mac OS X users, mitmproxy is the single easiest tool of its kind I have encountered. Here is what you need to do to begin wiretapping yourself as you browse the web:

  • Step 1: Install the OSX binary available at https://mitmproxy.org/
  • Step 2: Open a terminal window and extract, find, and start the binary.1
  • Step 3: Open a browser and configure it to use the IP address and port of the computer running mitmproxy (probably 127.0.0.1 and 8080) as its web proxy.
  • Step 4: Surf the web.

At this point, the mitmproxy display will fill with HTTP requests streaming down the screen. The controls for navigating these requests are so intuitive they require little documentation: arrows scroll up and down, enter reveals more detail about the current request, escape returns to the previous screen, etc.
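
If you would rather verify the proxy from a script than by clicking around a browser, a few lines of Python will do it. What follows is a minimal sketch of my own, not part of mitmproxy; it assumes the third-party requests library is installed and that mitmproxy is listening on its default port, 8080:

    # sanity_check.py -- route one request through the local proxy; the
    # flow should then appear in the mitmproxy display. Assumes the
    # third-party "requests" library (pip install requests).
    import requests

    proxies = {
        "http": "http://127.0.0.1:8080",   # mitmproxy's default listen port
        "https": "http://127.0.0.1:8080",
    }

    # Plain HTTP avoids certificate questions for now; encrypted traffic
    # is covered by the steps below.
    r = requests.get("http://example.com/", proxies=proxies)
    print(r.status_code)  # expect 200, plus a new flow in the mitmproxy window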

By performing the steps above, the student or scholar of technology law or policy who has never operated a packet sniffer can more deeply understand some of the secrets of web surveillance. mitmproxy is, most importantly, packet sniffing for the masses. For the first time, we are given a tool which is simple to understand, relatively easy to operate, free to download, and available to people lacking root access to their computers. These qualities make this a powerfully democratizing tool.

All of this makes mitmproxy also a wonderful tool for teaching. For three years, I have taught a course on “The Technology of Privacy,” in which the students have spent an hour or two sniffing packets. Until this year, my students toiled with Wireshark–an old tool, but still the industry standard for packet sniffing. To say that Wireshark confused my students is an understatement. The semester-end reviews were replete with comments like, “Great class, but I have no idea what was going on with Wireshark.”

This year, I taught the same unit using mitmproxy. The experience could not have been more different. After walking through the steps above and watching a demo for two minutes, my students started monitoring their own web traffic, needing no further guidance. My only instruction was “find something interesting,” and within five minutes, that’s exactly what they did.

Perhaps the most astonishing thing the tool makes easy is the sniffing of encrypted web traffic. Techies might scoff at my being impressed by this, because it’s almost tautological; that’s what a MITM proxy permits. But look again at how simply this has been implemented. Here are the steps required to permit the monitoring of encrypted traffic:

  • Step 5: From your browser, visit mitm.it. (This won’t send you to an Italian webpage; mitmproxy intercepts the request and sends you its own content instead.)
  • Step 6: Follow the simple instructions at that page.
  • Step 7: Surf the encrypted web.
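
Once the certificate from mitm.it is installed, the earlier script-based sanity check works for encrypted traffic as well. Again a sketch, with one labeled assumption: recent versions of mitmproxy keep their generated certificate authority under ~/.mitmproxy/, but the exact path may vary with your version:

    # https_check.py -- a sketch of checking HTTPS interception from a script.
    # Assumption: mitmproxy's generated CA certificate lives at the default
    # location below; check your installation if this fails.
    import os
    import requests

    proxies = {"https": "http://127.0.0.1:8080"}
    ca_cert = os.path.expanduser("~/.mitmproxy/mitmproxy-ca-cert.pem")

    # Verifying against mitmproxy's own CA, rather than the system trust
    # store, is what lets the proxy decrypt and re-encrypt the connection
    # without triggering a certificate warning.
    r = requests.get("https://example.com/", proxies=proxies, verify=ca_cert)
    print(r.status_code)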

If all mitmproxy did was bring packet sniffing to the masses, it would still do plenty. But mitmproxy is not only easy-to-use, it is also so powerful and robust that it has become a serious tool of web-based forensics.

Take the work of Ashkan Soltani, who introduced me to mitmproxy. Ashkan is well known in the privacy law community as the current Chief Technologist of the FTC. He made his first big splash as the technological brains behind many of the groundbreaking studies conducted by Julia Angwin and her fellow journalists at the Wall Street Journal in the “What They Know” series. The great impact of those studies–and what qualifies them in my mind as scholarly research as much as investigative journalism–stems from the rigorously obtained and compellingly presented data revealing third-party tracking on the web and invasive tracking by mobile apps. It is my understanding that at least some of these important results were obtained using mitmproxy.

Others have used mitmproxy to “slay dragons.” It is credited with revealing privacy violations in mobile apps. It has allowed researchers to peer into opaque private APIs to learn how companies are protecting their users’ secrets (spoiler: not always well).

There is too much more to praise in full about mitmproxy, so let me summarize the rest. It is released under a GPL open source license and distributed via GitHub, so anybody can tinker under the hood. It is written in Python, so you’re likelier to understand what you’re looking at under that hood. It allows you to “replay” web requests and responses from the past, giving you fine-tuned controls for testing. It lets you monitor the activity of mobile apps as seamlessly as web browsing. And you can easily automate it, as the sketch below suggests.
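
To give a flavor of that automation: mitmproxy can load a Python script whose hook functions run on every request or response that passes through the proxy. The hook API has changed across versions, so treat the following as a sketch in the style of the current addon interface rather than something guaranteed to run on every release:

    # log_hosts.py -- print every host and path the browser contacts.
    # Run with: mitmproxy -s log_hosts.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Called once for each outgoing request that passes through the proxy.
        print(flow.request.host, flow.request.path)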

All of this power can be used for evil as well as good, of course. If I trick your browser into using my mitmproxy, then with a few lines of code I can flip all of the images sent to your browser upside down, replace them with photos of kittens, or do something even more evil.
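
The image-flipping prank really is only a few lines. Here is roughly what it might look like as a script, under the same version caveats as above, plus one more assumption: the third-party Pillow imaging library is installed:

    # upside_down.py -- flip every image that passes through the proxy.
    # Run with: mitmdump -s upside_down.py
    from io import BytesIO

    from PIL import Image  # third-party Pillow library
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Only touch responses that declare an image content type.
        if flow.response.headers.get("content-type", "").startswith("image/"):
            try:
                img = Image.open(BytesIO(flow.response.content))
                fmt = img.format or "PNG"  # remember the format before rotating
                buf = BytesIO()
                img.rotate(180).save(buf, format=fmt)
                # Assigning .content also updates the Content-Length header.
                flow.response.content = buf.getvalue()
            except Exception:
                pass  # leave anything we can't parse untouched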

Finally, back to the question I started with: why does mitmproxy belong on a website dedicated to celebrating scholarship? mitmproxy is a scholarly tool or methodology, akin to R or logistic regression, something that too few legal scholars use and many more should embrace. That alone is probably enough to justify this review.

But in some sense, a packet sniffer is the key to my personal origin story as a scholar of Internet privacy. In my first job after college–helping develop and defend the networks of the RAND Corporation–in what I think was my first week on the job, I ran a packet sniffer–one much clunkier to use than mitmproxy–on our local network segment. Entirely by happenstance, the first screenful of packets I intercepted contained a packet revealing the RAND vice president’s username and password in plaintext, right on my screen. I don’t think I ever closed an application as quickly as I did at that moment, and my manager (who was standing behind me) said, with a smile on his face, “we shall never speak of this again.” I can draw a direct line from that moment to many thoughts I have had and things I have written about Internet privacy.

We scholars of internet policy spend most of our time focused on the abstract and intangible. The things we investigate flit through the aether (or ethernet) near the speed of light. There is value in finding ways to reify these abstractions into something closer to the tangible and concrete, the way sniffing tools like mitmproxy do. It is one thing to write about, say, privacy as an abstraction; it is another altogether to capture a password or set up a proxy server. Doing little things like this will remind us that what we are investigating is real and within our reach.



  1. For the truly uninitiated, this step might require a bit more elaboration.
    1. On the mitmproxy website, click the link next to the big apple logo, which reads, “OSX (Mountain Lion and later)”.
    2. It should drop a file into your “Downloads” directory, which is probably the icon on your desktop dock next to your trash can. Click that icon.
    3. Click the file you just downloaded. It’ll be called something like “osx-mitmproxy-0.11.3.tar.gz”, although the version numbers may vary.
    4. Now, open the terminal (see link in step three above).
    5. Type “cd Downloads/osx; ./mitmproxy” (without the quotes) and then press enter.
    6. You should be running mitmproxy. (It should look like the screen depicted at https://mitmproxy.org/.)
    7. Macs running older versions of OS X might encounter errors at this point.

    If you run Linux, you should be able to figure out the install for yourself. If you run Windows, you’re probably out of luck, although I hear good things about (but cannot vouch personally for) Fiddler.

Cite as: Paul Ohm, An Internet X-Ray Machine for the Masses, JOTWELL (June 12, 2015) (reviewing Aldo Cortesi, et al., mitmproxy), http://cyber.jotwell.com/an-internet-x-ray-machine-for-the-masses/.