Feb 3, 2012 Frank Pasquale
Scott Peppet’s article Unraveling Privacy: The Personal Prospectus & the Threat of a Full Disclosure Future has offered a fundamental challenge to reigning privacy paradigms in cyberlaw. The old privacy law assumed that the right set of laws could help individuals hide embarrassing facts or disable invasive tracking. The encroaching “full disclosure future” ensures that those who try to maintain secrets look like they have “something to hide.” We used to be afraid of shadowy watchers collecting incriminating “digital dossiers;” now we worry over not measuring up when rivals reveal better “personal prospectuses” than our own. Peppet’s elegant interweaving of social science and law renders us unable to rely on old privacy paradigms like “notice and consent” online.
Something to Hide
Traditionally, privacy law experts have assumed that a combination of markets and law can preserve privacy. Firms will compete to offer more or less privacy. Data collectors will provide customers with various “privacy settings” that tailor online services to optimize self-disclosure. Some have proposed “personal data vaults” to manage the emanations of sensor networks that track movements and actions in real space. Jonathan Zittrain’s classic article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.
These technological “self-help” measures reflect privacy law’s consent paradigm. Generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide whether or not, at any given time, they wish to protect their privacy. The consent paradigm makes sense if one assumes that we live in a society where individuals have a relatively free choice whether or not to disclose critical data. But it becomes less realistic as individuals are under more pressure to compete by revealing important aspects of their past.
Peppet observes that individuals are increasingly volunteering information about themselves in order to stand out from the crowd. When such self-disclosure reaches a critical mass, a tipping point is reached, and everyone essentially must disclose in order to avoid being stigmatized as someone with something to hide. Economists of information label this process “unraveling.” As “rapidly changing information technologies are making possible the low-cost sharing of verified personal information for economic reward,” the ultimate effect will be little different than if snooping employers, government officials, and other decisionmakers could directly demand damaging information.
Reorienting Cyberspace’s Privacy Law
Mainstream privacy scholarship has for too long attempted to adapt old tort law and ossified, sectoral statutes to rapidly changing technologies. The scholarship has paid too little attention to economic changes that have made cutthroat competition in the workplace and pervasive surveillance not only de rigueur, but intimately connected. While the data can be used in many cases for good, it would be naïve to ignore the extent to which it will be repurposed to classify and stigmatize individuals. In an age of diminishing expectations, intensive data gathering is a critical tool for deciding which human resources should be invested in, and which should be treated like flotsam.
If individuals had enough time to manage their personal data the way they manage their checkbooks and gardens, perhaps the consent paradigm that Peppet challenges would be a good foundation for addressing concerns about privacy. If applicants could easily bargain with would-be employers over privacy, or patients with hospitals, perhaps we could rely on them to protect their interests. But such acts of self-assertion and self-protection are in fact rare. Given the frequently abstract benefits that privacy and reputational integrity afford, they are almost always traded away for competitive economic advantage. This process further erodes societal expectations of privacy.
It is to Peppet’s great credit that he squarely wrestles with this phenomenon before engaging in the legal interpretation (or drafting of proposed statutes and regulations) that is the more common end of privacy scholarship. By bringing a lucid account of the “economics of signaling” to the field, Peppet may help it leapfrog its current infatuation with “notice and consent” models and move on in three directions.
First, we may simply seek to ensure that informational harms cannot bring any individual below a social minimum. In that case, a good bit of privacy regulation and cyberlaw is effectively absorbed into broader campaigns of social justice. For example, if we eventually enter into an equilibrium where employers are demanding very positive “personal prospectuses,” and a large and growing class of individuals cannot provide such profiles, the answer may not be to regulate information flow so much as it is to take on the larger social task of reducing the stigma and material wants arising out of unemployment. Similarly, health privacy becomes less of a concern when insurers can’t deny coverage to anyone, including those with pre-existing conditions.
The second response, which might be addressed in some of Peppet’s future work, is to turn a Panoptic eye onto those who demand personal prospectuses, subjecting them to the same level of competition as they subject individuals to. As we become “transparent citizens” (as Joel Reidenberg puts it), we should demand that the corporate and governmental authors of that trend reciprocate, and become more open about the data they gather.
Finally, as full disclosure dynamics render the average citizen’s life an open book, one-time privacy advocates might seek a different end: equalizing the surveillance that is now being aimed disproportionately at the vulnerable. Large corporations have used both privacy and trade secrecy laws to deflect scrutiny. As David Brin suggested in his book The Transparent Society, the “full disclosure future” might be a little less scary for ordinary citizens and consumers if government and business powers had to live up to the same standards of openness that they impose on others.
Jan 4, 2012 Michael Madison
Marketa Trimble, The Future of Cybertravel: Legal Implications of the Evasion of Geolocation, 22 Fordham Intell. Prop. Media & Ent. L.J. (forthcoming 2012), available at SSRN.
Fifteen years ago, David Post and David Johnson published what some still regard as the seminal paper of cyberlaw scholarship: Law and Borders: The Rise of Law in Cyberspace. Post and Johnson argued that because cyberspace was defined, in a way, by the very absence of territoriality, cyberspace should be governed by laws and lawmakers not tied in traditional ways to territorial states. That paper provoked a reply, Against Cyberanarchy, by Jack Goldsmith, and those two positions – “cyberspace is different”; “no, it isn’t” — have pretty much defined the landscape of cyberlaw ever since. Later scholars have had little choice but to explore the implications and details of staking out intermediate positions. When and how does cyberspace differ, and what do we do about it?
Marketa Trimble’s article approaches this topic by revisiting a species of the territorial question that prompted Law and Borders. How can and should the law address behavior online by people who are physically located in one place but who wish to create or manage online identities in other places? Trimble calls this the challenge of “cybertravel,” a phenomenon that is hardly new but that has taken on renewed significance as Internet technologies (and governments) have caught up to the many ways in which cybertravelers can be in more than one place at a time.
The article describes the problem to be addressed in blunt terms. Governments have ongoing interests in effective taxation and in regulating at least some online behavior (gambling, for example), and commercial interests (often backed by governments) have strong interests in policing geographically-dependent use of intellectual property rights. Individual interests in online freedom and privacy, particularly in anonymous and pseudonymous behavior – in “cybertravel” – have long been threatened by both law and technology used to back interests in regulation. What has changed is the development and use of geolocation tools that have made it easier than ever for both governments and firms to determine where a particular online actor is located in physical space. That technological shift is compounded by the growing acknowledgement of the inadequacy of “soft law” approaches to balancing government, commercial, and individual interests (that is, approaches grounded in application of jurisdictional rules in cyberspace-related litigation), and of the undesirability, on both technical and policy grounds, of accommodating those interests via compulsory or voluntary activity (such as the use of geographically-oriented filtering technology) at the service provider level. The question is, as it was 15 years ago, how to construct a manageable and sensible regime at the user level.
Approaching this question, Trimble adopts a premise that may put off cyberlaw idealists: Borders should be viewed positively from a normative perspective. Borders are enabling (they help governments keep the bad guys in – such as gambling enterprises that might like to “offshore” their activities – so that they can be regulated productively) as well as disabling (they help governments keep the good guys out). That pragmatic perspective informs the whole article. Cyberlaw no longer deals in a purely borderless world, online or off. I have a lot of sympathy for that point of view, and I confess that my appreciation of Trimble’s article is grounded in the first place in the fact that she does not try to dodge the point. Some appeal to borders in a geographic sense – physical, virtual, or simply psychic – may be hard-wired into our appreciation of what cyberspace “is.” But the point is hardly uncontested, or uncontestable. Trimble does not cite to the post-Law and Borders literature of a decade ago, which included a number of articles addressing the law and policy implications of the arguable “place-ness” or “placeless-ness” of cyberspace, but she comes down clearly on the side of the scholars who argued that cyberspace is a place, after all.
The article conducts a thorough review of geolocation technologies, with appropriate nods to history and “lower” or less sophisticated or complex technological versions of contemporary tools. It reviews evasion approaches, some that permit access to illegal or regulated content or services, some that enable online participation by individuals or groups with legitimate concerns for their own safety or the safety of others if their location and/or identity were disclosed. The article acknowledges, in other words, that the cybertravel problem is related to jurisdictional issues and to anonymity and identity questions, but also distinct from both.
A long section addresses the liability risks faced by those who engage in evasion tactics and by those who supply evasion tools. This section describes relevant contract law, copyright law, anti-circumvention law, and tort and fraud questions, with significant and appropriate attention to international and non-US legal regimes. The article does not make its case only from the US perspective.
The most interesting part of the article consists of its review of the normative and prescriptive future. Under what circumstances should individual evasion of geolocation technologies be lawful – that is, what is the proper scope of legitimate regulation of cybertravel? Trimble begins by accepting an analogy between virtual travel and physical travel, such that the interests of citizens (and governments) in each are approximately identical. The argument here is rooted partially in US law and partially in international legal and human rights norms. Given the physical travel analogy, a partial remedy is proposed in the form of a “digital passport” for “netizens” (she does not use that term, but it seems appropriate here, given the modestly anachronistic flavor of the underlying problem), with the rights and duties of the passport holder coded into the architecture of the Internet. Acknowledgement of the “passport” would be more or less a matter of technology rather than politics; the rights underlying the passport would be grounded in the holder’s residence or nationality (or perhaps both). Basing digital rights on terrestrial rights offers a way of balancing the virtues of permitting and enabling evasion of geolocation with legitimate commercial and state interests in supporting a robust geolocation infrastructure.
It is almost a tautology to note that, in light of the article’s Law and Borders ancestry, the solution is unsatisfactory. To her great credit, Trimble acknowledges as much, describing the serious risks to individual privacy that the proposal entails, and the concerns regarding data integrity that it engenders. She also takes care to observe that the proposal could not be implemented without a business, technical, and regulatory infrastructure to support it, and that such an infrastructure would create second-order risks of privatization of the entire enterprise and a corresponding lack of meaningful transparency. Both the proposal and the critique borrow heavily from themes developed originally by Lawrence Lessig.
Trimble closes not with despair that the idealism of cyberlaw pioneers has not been sustained, but with a pragmatic acknowledgement that we live in a second-best world. She argues that the tradeoffs embedded in her proposal are worth accepting, at least conceptually, in order to enable cybertravel that is to some degree freer than it might be in an age of unrestricted use of geolocation tools. Implicit in that conclusion is a response to the debate between Post and Johnson, on the one hand, and Goldsmith, on the other, the conflict between the idea of reinventing social and political life online and the idea of continuing online our lives as we have always lived them. Trimble’s careful, pragmatic article shows that this split is and has always been irremediable, and that the Internet and our experiences within, by, on, and through it, are simultaneously and entirely new and the same.
Cite as: Michael Madison, Law and Borders, Revisited, JOTWELL (January 4, 2012) (reviewing Marketa Trimble, The Future of Cybertravel: Legal Implications of the Evasion of Geolocation, 22 Fordham Intell. Prop. Media & Ent. L.J. (forthcoming 2012), available at SSRN), https://cyber.jotwell.com/law-and-borders-revisited/.
Nov 16, 2011 James Grimmelmann
Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 101 (2011), available at SSRN.
“Section 230” contains the single most important provision in all of Internet law:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Felix Wu’s Collateral Censorship and the Limits of Intermediary Immunity — his first article as a law professor — offers a perceptive new interpretation of this enigmatic sentence. It has always been clear that Section 230 protects intermediaries — the Googles, Facebooks, Comcasts, and bloggers of the world — from being held liable for user-generated content. But consensus in the core gives way to controversy in the penumbra: just how far does or should the immunity reach?
In Zeran v. America Online, the Fourth Circuit gave an answer stunning in its absoluteness and its simplicity: the immunity applies no matter whether the intermediary is on notice of the content and of its wrongfulness. The courts have uniformly agreed: even if an offending post is transparently false and hurtful, and even if this fact is pointed out to the intermediary, and even if the intermediary cackles with glee, still it will be immune. This protection is the legal bedrock on which Internet empires have been built; it has also left countless victims of online thuggery without any effective legal recourse. The limited academic debates on Section 230 have largely focused on whether the Zeran rule is fine as it is, whether it should be rolled back to some form of notice-based liability, or whether it needs other exceptions for particularly egregious situations. There are good articles here, but one does not need a very long string cite to run through them.
Wu’s move, so elegant that it is obvious in hindsight, is to recognize that there are really two questions about Section 230, not one. The first is how strong its protection should be: this was the issue in Zeran and it is the one on which scholars have mostly divided. The second is when that protection should apply at all: this part has received less attention. If we have two sliders to play with, perhaps we should set them differently. Section 230 could be broad and shallow: shielding intermediaries in a variety of factual settings but offering only a thin immunity that can be overcome with a sufficient showing of malice or unconcern on the intermediary’s part. Or it could be narrow and deep: protecting intermediaries only from defamation and closely related torts, but offering absolute protection when it does.
Having distinguished these sliders, Wu offers guidance on how to set them. He does so by reconstructing a theory of what Section 230 is supposed to do: prevent “collateral censorship.” It’s a commonplace that an online intermediary can’t be counted on to stick up for its users when its own ass is on the line. (Exhibit A: PayPal and Amazon dropped WikiLeaks based on little more than Joe Lieberman’s disgraceful jawboning.) Faced with even the vague and distant threat of liability for user speech, the rational intermediary will yank the challenged content. It has nothing to gain and everything to lose by doing anything else. This gives opponents of speech an easy-to-use heckler’s veto: just threaten the intermediary. A robust, deep immunity recognizes that the intermediary has much weaker incentives than the original poster does.
As Wu demonstrates, however, this rationale only works some of the time. It fails when the intermediary has a speaker’s own incentives because it has adopted the user’s speech as its own. Wu’s example is Barrett v. Rosenthal, where the defendant reposted an email message to two USENET newsgroups: she was not just acting as a gateway, she was “speaking in her own right.” This distinction helps explain why some of the other cases in the Section 230 canon, such as Blumenthal v. Drudge and Doe v. Ciolli, are so problematic. These are cases in which the intermediary at least plausibly “obtains the social benefits of speech” and so may not deserve Section 230’s full protection.
Wu also argues that the collateral censorship rationale fails when the law in question is actually designed to target intermediaries rather than users. He gives an illuminating exegesis of the statutory phrase “be treated as the publisher or speaker,” which he claims should not apply when the intermediary is the recipient of the information in question. His example here is Doe v. MySpace, in which the fourteen-year-old plaintiff had been sexually assaulted by a man she met on MySpace. Among her causes of action against MySpace was a claim for negligence based on its failure to implement effective age verification that would have kept her from meeting her attacker. The Fifth Circuit held this theory of liability preempted by Section 230, but Wu disagrees. Doe’s negligence claim wasn’t really about treating MySpace as the “speaker” of her assertion that she was older than she was, and liability here would not raise the incentive-mismatch problem Section 230 was designed to address.
These are just a few of the analytical gems in this treasure chest of an article. His explanation of when intermediaries are and are not really acting as intermediaries is alone worth the price of admission, and will be of use to scholars working on a range of Internet problems. Even where its arguments are less persuasive — I’m not convinced that it really engages with the best arguments for immunity in the Roommates.com housing discrimination case — it has fresh and important insights. Wu’s recommendations don’t fit cleanly into a “pro-” or “anti-” Section 230 camp; anyone who teaches, writes, or cares about Internet law will be challenged and energized by his reinterpretation of the caselaw.
This is what good doctrinal scholarship looks like. Wu starts with a real problem, one that is frequently before the courts. He brings to bear the scholar’s comparative advantages: abstraction, time, and theoretical rigor. Having achieved a synoptic view, he returns to the specific, making well-argued recommendations that courts can put to work in actual cases. “Collateral Censorship and the Limits of Intermediary Immunity” is an outstanding debut.
Cite as: James Grimmelmann, Undiplomatic Immunity, JOTWELL (November 16, 2011) (reviewing Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 101 (2011), available at SSRN), https://cyber.jotwell.com/undiplomatic-immunity/.
Aug 11, 2011 Lilian Edwards
Works of pure theory in Anglophone European internet law scholarship are fairly rare, and those that exist often come from scholars whose background is in a field other than traditional law, e.g. sociology, politics or criminology. While some of this work is excellent, it may lack a full understanding both of the nuances of legal analysis and the realities of commercial legal culture. For all these reasons, it is to be warmly welcomed that in what one might call the second stage of his distinguished career, Chris Reed, one of Europe’s leading researchers into the more commercial and practical aspects of internet law, has decided to turn his years of experience in helping both draft and critique European internet and e-commerce laws towards theorising how to regulate for the on-line world, in the form of a series of pieces which so far include Taking Sides on Net Neutrality, The Law of Unintended Consequences–embedded models in IT regulation and, more recently, How to Make Bad Law: Lessons from Cyberspace. The latest of these pieces (which are destined eventually to form a book on regulation, I believe; usefully, Reed has also posted on his blog his use of these pieces in teaching a coherent course on Internet law and regulation, along with extra conclusions and slides) appeared in late 2010 and takes on the near cliché of internet law that “what is legal offline should also be legal online,” or more formally, the principle of equivalence. While it is something of a kneejerk assumption in many domains, notably freedom of speech, that this approach is axiomatically mandatory, Reed dissects the desirability, applicability and, perhaps most interestingly, the failures of the principle in the context of the history of (mainly European) internet regulation.
Reed defines equivalence, as a starting point, as “an approach in which all laws and regulations should, so far as possible, be equivalent online and offline. In other words, the same legal principles should regulate an online technology activity as those which applied to the equivalent offline technology activity.” Reed’s first point is that this should not be confused with the similarly popular notion of technology neutrality. “Technology neutrality addresses the choice between the available substantive rules which could be used to implement … legal principles,” while equivalence, in his view, is about choosing those legal principles for regulating the online world in the first place. Equivalence therefore takes precedence in the regulatory toolkit and is arguably the more important issue to get right. Reed also muses as to whether a distinction is needed between “technology indifference”–which is an “attempt … to define a rule in such a way that it applies equally well to the activity whatever technology is used to undertake it”–and a concept he does not name but which I will call technology non-discrimination, which is “a legislative aim that the rules should not discriminate between technologies and should continue to apply effectively even if new technologies are developed.” A good example of problematic regulation which might have been elucidated by applying these concepts lies in the recent controversial redrafting of the part of the EU Privacy and Electronic Communications Directive dealing with cookies (art 5(3)), where, despite frequent claims to technology neutrality, the results have been nothing of the kind, either initially or after reform.
Returning to equivalence, though, Reed makes a cogent distinction between “pure” equivalence and “result” equivalence (a concept which, one might hazard, is drawn partly from feminist legal theory and partly from the comparative law doctrines of, e.g., Zweigert and Kötz). Applying the exact same rules on and offline will often simply produce a mess, given the huge differences in the environment–one of the best examples being the attempt in early jurisprudence to map ISPs and hosts of unlawful defamatory material to newspaper or TV publishers, with consequent full liability. Instead, Reed points us towards “functional equivalence,” where the idea is to get the new online rule right by making sure that, even if formally or even substantively quite different, it achieves the same result online as offline. This raises the further problem that, in Reed’s view, “equivalence” is often most neatly met by having one rule for both online and offline activities, with the practical result of a need to revise (and generalise?) the offline rule to cover both domains. If the rule brings in entirely new regulation, this may be politically plausible–Reed’s example is the UK Terrorism Act 2006, which introduced the new offence of disseminating terrorist publications, and applied it to both hard copy and electronic versions simultaneously–but in other cases it may require political will or judicial happenstance and may never or only very slowly happen. One success story Reed cites is the adaptation of the English common law of fraud by the UK Fraud Act 2006 to deal with the problem that in online fraud the fraudster rarely knows the mental state of his victim (the offence was redrafted to pivot solely on the intention of the fraudster). This, however, depended on funded law reform by the English Law Commission–whose time and resources are finite (as are, one imagines, those of similar national bodies). Such holistic reform may often simply not be possible.
But the biggest problem with “functional equivalence” is how to define what is functionally the same scenario to be regulated. This is hard enough offline: online, it is a corker. Reed highlights one of the most obvious problems, that of categorisation. Is a search engine, for example, a piece of essential infrastructure, like water or gas supplies; a distributor of electronic content like an ISP or host; an intentional recopier of copyright material, possibly without permission of the rightsholder; a publisher like a newspaper, with the freedom of speech privileges that implies; or the virtual equivalent of a physical trespasser? Get this wrong (as the Belgian courts notoriously did in Copiepresse) and you have a scenario where the internet disappears in a deluge of unparseable material and the digital society vanishes. One sleight of hand Reed doesn’t mention is to avoid the categorisation problem by explicitly regulating only functions, not who undertakes them. This is largely what the DMCA and the EU E-Commerce Directive do to deal with the problems of online intermediary liability: a strategy that has led to much testing of limits, yes, and of course essentially passes the buck back to the courts (cf. Napster, Grokster, Google Adwords, L’Oreal v eBay at the ECJ, et al) but at least has had some durability about it.
But sometimes there simply is no functional equivalent between the online and offline worlds. What then? How can we tell when equivalence simply won’t work? Here Reed’s analysis does falter. In his view, for example, there are “no major theoretical obstacles” to regulating copyright online and offline by “equivalent” rules (P. 269): the problem is a procedural, not a substantive, one, namely the restraining influence of international treaties preventing states from going it alone with their own most appropriate solutions (a bit like Greece being stuck in the Eurozone). This writer would beg to differ: one of Reed’s own criteria for applying equivalence is that there is a balance of interests among stakeholders which can be identified offline and mirrored online–it is hard to see how this is possible in the current online content wars, where balances are entirely skewed from the offline by easy copying, easy distribution, anonymity and encryption (to name but a few factors). But these cavils aside, this is a rare and enormously useful primer on how to regulate for the internet–one wishes some elected representatives could be forced to read a copy.
May 19, 2011 Paul Ohm
Derek E. Bambauer, Conundrum, 96 Minn. L. Rev. ____ (forthcoming 2012), available at SSRN.
It is rare to find satisfying cybersecurity scholarship. This is not the fault of the talented scholars who have written in this field. I am a fan of the work of many who have tried to lead us to legal and geopolitical solutions to the problems of viruses, worms, botnets, cyberwar, and cyberterrorism. But these individuals have had their considerable talents stymied by cybersecurity’s fundamental knowledge problems. To make a useful contribution, an author must understand technical concepts famous for their complexity, from TCP/IP to BGP, and be able to untangle complex relationships like the ones between the FBI and NSA and the United States and China. Even worse, cybersecurity scholars can never know whether they have the details right, because these topics are shrouded in layers of official and de facto secrecy.
For these reasons, I have never felt entirely satisfied by a single work about cybersecurity, at least not until now. Derek Bambauer has written a fine article about this topic entitled Conundrum, available on SSRN and forthcoming in the Minnesota Law Review. This article points the way to a more interesting and more productive direction for cybersecurity scholarship and discourse.
Reviewing the state of cybersecurity scholarship, Bambauer helpfully diagnoses an underappreciated narrowness to past approaches: scholars and policymakers have too often treated cybersecurity as a problem of infrastructure alone. They focus only on macro-level, technological concerns, asking questions like: How can we detect the source of a cyberattack? How easily can we quarantine a troublesome part of the network? How well do domestic and international law provide tools to bring a cyberterrorist or cyberwarrior to justice?
As Bambauer demonstrates, this narrow focus pushes scholars to adopt correspondingly narrow legal and political frames, meaning cybersecurity is seen primarily as best addressed with “well-established, comfortable, yet poorly-fitting models from criminal law, national security law, and military law.” Id. at 10. Solutions built upon these frames focus, to the exclusion of almost anything else, on preventing, detecting, and stopping cyberthreats, which means they lead almost inexorably to calls to “fix the attribution problem” online, calls to bolt on some new protocol to the Internet to destroy the network’s inherent untraceability. Not only are these solutions never likely to come to pass, but also if they ever did, they would strike a blow to things we value, like generativity, privacy, and the ability to resist dictators and tyrants.
Bambauer breaks free of these narrow frames by connecting cybersecurity to information theory. He peels away the hard shells surrounding the fiber-optic cables that make up our international infrastructure, to reveal their delightful chewy centers, the communications that we’re trying to protect. This move makes enormous sense. After all, we are not protecting cables because we like cables; instead, we are trying to ensure that after a crippling cyberattack, we can still send e-mail and text messages, place telephone calls, and access websites, databases, and control systems.
This is a wise move because it allows Bambauer to connect cybersecurity to a rich intellectual history, from Claude Shannon to George Akerlof, Joseph Stiglitz, Michael Spence, and beyond. Building on the work of these thinkers, Bambauer asks a critical question that too often goes unanswered in cybersecurity debates: what exactly are we trying to protect? To this essential and underexplored question, he provides three answers: access, alteration, and integrity.
Technical experts in information security won’t be very impressed with the novelty of this list, as it echoes the venerable information security triad of confidentiality, integrity, and availability. But Bambauer helps us realize that non-technical experts in law and policy have been proposing solutions that protect these goals only indirectly. Attribution allows us to trace the source of an attack—except when it doesn’t—which helps us find, stop, and bring to justice our attackers—except when we can’t—which deters others, thereby making our network safer—except when it won’t.
Bambauer’s singular focus on information allows him to find much more direct and narrow ways to protect access, alteration, and integrity, proposals that sound very different from those that have come before. His guiding principle is redundancy. (He calls it inefficiency, but more on that in a minute.) Data and networks should be rendered more redundant than an unregulated market would produce. As a tenet of national and international policy, we should encourage and sometimes mandate technological redundancy, forcing businesses by regulation to create and disperse more copies of their data and to establish more network interconnections than they would otherwise choose, perhaps paid for by government subsidy.
This is a sound prescription, and we should focus our energy on ways to recover quickly from an attack rather than think only about prevention or retaliation. What I like most about this focus is it helps cure cybersecurity’s knowledge problem, by focusing our attention more on facts that are readily available—how interconnected are the nation’s networks and critical data centers?—and less on facts that we civilians can never know—how powerful are China’s infowar capabilities?
The article, of course, isn’t perfect. First, when Bambauer talks of redundancy, he uses the term inefficiency. He does this, I think, to suggest that his proposals are counter-intuitive and maybe even radical; it seems a heresy of the first order to argue for inefficiency in this law-and-economics-drenched age. But as he himself notes throughout his paper, computer security experts have recognized the importance of redundant systems for decades. The Internet itself was in part architected on the principle of robustness through redundant links. Bambauer isn’t being a radical here; he is simply importing neglected principles from information security theory, principles for too long underappreciated by legal scholars and policymakers. By choosing the surprising term over the conventional one, Bambauer obscures his contribution.
Second, Bambauer is wrong if he suggests that the solution to cybersecurity will be found in information theory alone. Criminal law, national security law, and military law must play a role, and the “problem of attribution” isn’t irrelevant.
Despite these mostly cosmetic flaws, the important lesson of the article is that we need to view cybersecurity through two different lenses. But there is no reason to confine this lesson solely to cybersecurity. This article should help us remember that every cyberlaw policy dispute can be seen through the same dual lenses, one focused on infrastructure and the other on content and communications. This echoes but expands upon Orin Kerr’s important article about the “internal” and “external” views of cyberlaw. Orin S. Kerr, The Problem of Perspective in Internet Law, 91 Geo. L.J. 357 (2003). From net neutrality, to Wikileaks, to online privacy, to whatever we are worrying about tomorrow, we should always view our debates through these two lenses, the infrastructural and the informational, the technological and the human, the network and the social. Two lenses give us the stereoscopic vision to make better sense of cybersecurity, Bambauer convincingly demonstrates; they can probably do the same for many other great problems of the day.
Apr 18, 2011 Ann Bartow
The abstract of the piece lays out the author’s thesis very cleanly and clearly in a single sentence: “… [E]ven though Facebook users have privacy options to control who sees what content, this Article concludes that every single one of Facebook‘s 133 million active users in the United States lack a reasonable expectation of privacy from government surveillance of virtually all of their online activity.” Semitsu begins the piece by explaining the social and political importance of Facebook in a compelling way. To take just one example, he observes that a huge percentage of matrimonial lawyers have used or faced evidence found on social networking sites during divorce proceedings. He then explains that while people may use the privacy controls that Facebook provides them in ways that successfully mediate the information exchanges they have with other private citizens or with commercial entities, these controls have no meaning vis-à-vis the government. This is because literal application of the Third Party Doctrine means Facebook users can’t have a reasonable expectation of privacy in anything they post. And potentially pertinent provisions of the Electronic Communications Privacy Act may not even apply to Facebook-based communications. Therefore, as Semitsu cogently explains, “though Facebook has been justifiably criticized for its weak and shifting privacy rules, even if it adopted the strongest and clearest policies possible, its users would still lack reasonable expectations of privacy under federal law.”
Semitsu evaluates Facebook’s architecture and its evolving approaches to user privacy, noting that Facebook users may misunderstand their actual ability to delete their accounts or keep information confidential, and that many decline to take advantage of the privacy tools that are available to them. He observes that the situation is pretty similar at other social networking sites as well. Then he launches into an extended elucidation of how the government uses Facebook as an investigative tool. He compellingly illustrates the non-piddling possibilities by explaining how a campus police officer used Facebook to apprehend a University of Illinois student observed urinating in public. Facebook was deployed by law enforcement not because the crime was significant but because it is a fast, cheap and easy way to identify those suspected of extremely minor infractions, who might not even have been pursued if more resources had been necessary to bring them to justice.
Semitsu lists many different Facebook disclosure fact patterns and charts the ways they intersect with existing evidence collection doctrines and practices. Examples include the use of fake profiles, voluntary disclosures, and data mining techniques. Then he provides a thorough-seeming overview of Katz v. United States, and what he characterizes as its “two step approach that looks to the reasonableness of a search or seizure” as it has evolved across time and technologies. He paints a picture of Facebook as a giant surveillance tool, no warrant required, which the government can use in a mind-bogglingly creative range of ways, with almost no practical constraints from existing laws.
He finishes with the requisite normative component, a plan to reconfigure the law so that Facebook is more like a phone booth, in terms of the associated reasonable expectations of privacy its users can have. I very much like the idea of starting with real-space norms that are fairly clearly understood and trying to build them into an electronic environment. (Well, obviously I would). His proposal might have benefited from a more thorough taxonomy, in terms of how the laws would be modified and how they would, in turn, interact with Facebook and each other. But like the rest of the article, it provides a jaunty yet terrifying account of how clueless most Facebook users are with respect to the diminution of our Fourth Amendment rights online. While we lobby for social networking tools to give us more control over our profiles, and debate the transparency or lack thereof with which corporate actors track us or collect and use our personal information, we barely notice that Facebook leaves us almost completely vulnerable to searches and seizures triggered by invasive but mostly invisible government surveillance. Semitsu’s clever article brings this squarely to our attention.
A couple of concluding qualifiers. First, I do not know Junichi P. Semitsu, but after reading his USD Law bio I definitely hope to meet him some day. Here is an excerpt:
In his spare time, he recently served as the embedded blogger for the Dixie Chicks, appeared as a contestant on Who Wants To Be A Millionaire?, and won the title of “Funniest Lawyer in San Diego” at the San Diego Volunteer Lawyer Program’s LAF-Off (Lawyers Are Funny) competition.
Funny is good! So are law professors who do important work but do not take themselves too seriously.
Second, I am not an expert in Fourth Amendment law, to put it mildly, so there may be errors or omissions that escaped my notice, though I didn’t find any through a bit of spot checking or from consultation with better-versed colleagues. I found the article to be very well written and well sourced (again with the caveat that I am not well placed to know whether he cited all the important Fourth Amendment literature that he could have). Semitsu has a fresh, accessible and engaging voice. I learned a lot from reading this article.
Jan 14, 2011 Frank Pasquale
Jonathan Zittrain, Ubiquitous Human Computing, Phil. Trans. R. Soc. A, vol. 366, no. 1881, pp. 3813–3821 (28 October 2008).
A banana usually sells for about 30 cents. On average, the plantation owner gets 5 of those cents; the shipper, 4 cents; the importer/ripener, 7 cents; and the retailer, 13 cents. That leaves one penny for the worker who picked the banana. Fruit economics helps drive the politics of “banana republics:” as the unpaid laborers and netizens at Wikipedia note, such countries are “politically unstable,” “dependent upon” commoditized crops, and “ruled by a small, self-elected, [and] wealthy . . . clique.” Oligarchs at the top set the direction of society; workers merely play the roles assigned them. Truth doesn’t matter much; as Paul Krugman noted, one political party promised voters to save money on gasoline by “building highways that ran only downhill.”
Commentators have begun to wonder if the United States is becoming a banana republic. Nicholas Kristof concludes that “You no longer need to travel to distant and dangerous countries to observe . . . rapacious inequality. We now have it right here at home.” Chronicling endless financial industry shenanigans, critical finance blogger Yves Smith seems to label every third post “banana republic.”
Wasn’t the internet supposed to solve these problems? Wouldn’t a “wealth of networks” guarantee opportunity for all, as prediction markets unearthed the “wisdom of crowds?” It turns out that the net, while mitigating some forms of inequality in the US, is accelerating others. Jonathan Zittrain’s essay “Ubiquitous Human Computing” examines a future of “minds for sale,” where an atomized mass of knowledge workers bid for bite-sized “human intelligence tasks.” Zittrain explores some positive aspects of the new digital dispensation, but the larger lesson is clear: without serious legal interventions, an expansive global workforce will be scrambling for these jobs by “racing to the bottom” of privacy and wage standards. This review explains Zittrain’s perspective, applauds his effort to shift the agenda of internet law, and argues that trends not touched on in Zittrain’s essay make his argument all the more urgent.
Exploitation and Alienation Online
Zittrain argues that assembly line-style “division of labor” is becoming more common in mental tasks, ranging from very simple repetitive recognition exercises (“where is the car in this picture?”) to design competitions (“win $1,000 by drawing a new trademark!”). He states that “We are in the initial stages of distributed human computing that can be directed at mental tasks the way that surplus remote server rackspace or Web hosting can be purchased to accommodate sudden spikes in Internet traffic.”
The resulting distributed labor force offers unparalleled flexibility for CEOs. While they pursue the vaunted “Four Hour Workweek” of Silicon Valley tycoons, they can avoid making any guarantees of wages to employees–or ask for 80-hour weeks suddenly when business picks up. In a globally connected world, the cheapest hands are at the ready to perform what Amazon’s Mechanical Turk enumerate[s as] “HITs” – human intelligence tasks – for sale one unit at a time, from as low as $0.01. Once micropayment systems are perfected, pennies from cloud-heaven can rain upon the downtrodden.
Zittrain describes some advantages of this turbocharged division of labor for workers, too. Operators at one company (LiveOps) “can work whenever they like, wherever they like, for as much or as little time as they like.” Whereas the traditional employment relationship was like a marriage, with both parties committed to some longer-term mutual project, the digitized workforce seeks a series of hookups. There are plenty of opportunities for the flexibilized worker.
For those saddled with a mortgage ball-and-chain, ubiquitous human computing offers less of a blessing. Aside from a blip of hope in 1990s wage figures, America’s working class has experienced declining compensation since the 1970s. Establishment journalists were among the first of the “knowledge workers” to experience the same fate, as search engines set up a national and global market for news once delivered locally. Since similar trends could soon engulf computerized work generally, Zittrain is right to argue that “[m]inimum wage, maximum working hours, unionization (or at least the ability to know and contact one’s co-workers)” may need to be revisited. Having discussed “Privacy 2.0” in his book The Future of the Internet and How to Stop It, Zittrain also realizes that atomized digital workers need the right to establish a reputation by “building portfolios” if they are to compete effectively for gigs.
Zittrain also worries that “disembodied HITs can deprive people of the chance to make judgments about the moral valence of their work.” We can imagine a worker figuring out CAPTCHAs in the service of an Iranian intelligence agency or Chinese “fifty cent army” which wants to place hundreds or thousands of messages as comments on blogs. The atomized HIT is a way of diffusing responsibility in a world where it is already far too hard to pierce the corporate veil, contest trade secrecy claims, or penetrate shadowy government actions. In response, Zittrain proposes that “harvesters of human mindpower can be encouraged – or perhaps required – to disclose their activities to those who benefit them.” He also proposes that workers have the opportunity to opt out. To do that effectively, some entity will need to audit exactly how a company like LiveOps ranks and rates its workforce; otherwise, opting out could be a false choice that simply speeds one’s way to a blacklist.
Why Legal Scholarship Needs Social Theory
Legal scholarship has traditionally focused on discrete doctrinal areas. In intellectual property law, scholars seek to rationalize copyright, trademark, patent, and related doctrines; “cyberlaw” extends to contract, property, and tort online; and privacy experts confront the welter of common law and statutory limits on the accumulation and disclosure of data. While such specialization may promise to “work the law pure” in particular doctrinal bailiwicks, it also risks a tunnel vision that would reinforce trends that few would endorse upon reflection.
Scholars may provide a great service by recognizing trends that burst the seams of extant doctrine. For example, banking law couldn’t respond to the Panic of 1907 by tweaking extant statutes; it had to struggle fitfully toward an institution like the Federal Reserve. Similarly, reorganization of work through the internet could make swathes of federal and state employment law irrelevant. It also threatens to transform homes and eviscerate “expectations of privacy.” Zittrain does not panic or sensationalize any of these trends. Rather, he calmly lays out what is happening now, and where it might lead. The paper originated as a presentation at the World Economic Forum 2008 Annual Meeting workshop, “How Science Will Redraw the Business Landscape of the 21st Century,” and Zittrain manages to present a compelling account of how the day-to-day phenomenology of labor, supervision, and monitoring can be technologically transformed.
As Russell Muirhead has argued, employment relations are not “just work,” in the sense of merely applying oneself to get a wage (only work); they ought to be just (in the sense of fair) work. Critical internet scholars like Trebor Scholz and Laura Liu have made this case in the realm of digital labor. Scholz and Liu have focused on the “relationship between labor and technology in urban space, in a context where communication, attention, and physical movement generate financial value for a small number of private stakeholders.” Zittrain recognizes similar dynamics as a problem for cyberlaw, evincing in his work what William Gibson has called the “eversion” of the internet. As Gibson observes, “cyberspace has everted. Turned itself inside out. Colonized the physical.” As the cyber and real become indistinguishable, cyberlawyers will need to influence realms like labor, employment, and consumer protection law. “Ubiquitous Human Computing” is a worthy entrant into these nontraditional cyberfields.
The Tipping Point
Can anything be done? Diagnosing a social problem is a dangerous game: the issue may be either too trivial to care about or too self-reinforcing and pervasive to be addressed. Zittrain’s dilemmas of exploitation and alienation may fall into the latter category. They promise to affect not only the economy, but politics, influencing the “rules of the game” that Zittrain hopes will tame them.
Zittrain mentions in passing the problem of HITs in political campaigns. If a distributed workforce can crack captchas for spammers, they can also plant comments on blogs, or engage in more creative uses of data to influence public opinion. Daniel Kreiss recently observed that campaigning has become data-driven; “223 million pieces of personal information” were “provided to Obama’s millions of online and on-the-ground canvassers” during the 2008 campaign. Data-intensive persuasion will permit new levels of personalization in advertisements, and new demand for a rapid, flexible workforce capable of targeting voters.
The irony of American free speech has reached its apogee in a First Amendment right to lie endorsed by Washington State’s Supreme Court, and just about any form of spin and prevarication is allowed in our campaigns. Unlimited corporate spending also makes collective will formation a crapshoot. Exxon could have deployed 10% of its 2008 profits to outspend every presidential and senatorial candidate running that year. Imagine when such interests use “big data” to slice and dice the electorate, with the aid of “minds for sale.” Impressionable and broke young people might be told they are combing websurfing data to find fellow environmentalists, with their labors really directed to compiling a mailing list of people most likely to be swayed by an ad of Mitt Romney strolling through a meadow. Big data and personalization will allow candidates to send “Save Medicare” ads to worried seniors and “cut entitlements” ads to Tea Partiers. Ubiquitous human computing will try to identify every imaginable subgroup: octogenarians particularly worried about Medicare Advantage Plans, angry young men with no dependents, angry old men who can’t stand Medicare Advantage—you get the picture.
Thus, I have little hope that Zittrain’s vision will do much to influence American working conditions: commoditized HITs are far more likely to accelerate current political trends than to be affected by our politics. Nevertheless, his proposals should have an impact in other, more advanced political systems, whose governments are committed to shaping economic life to human needs, rather than vice versa. And if ubiquitous human computing pushes the US one more step toward banana republicdom, at least Zittrain warned us.
Cite as: Frank Pasquale, Banana Republic.com, JOTWELL (January 14, 2011) (reviewing Jonathan Zittrain, Ubiquitous Human Computing, Phil. Trans. R. Soc. A, vol. 366, no. 1881, pp. 3813–3821 (28 October 2008)), https://cyber.jotwell.com/banana-republic-com/.
Dec 10, 2010 Michael Madison
The organization of the Internet raises some profound and fundamental questions about the nature of law and social order, questions that legal scholars have tackled head-on only occasionally and incompletely. If, as Lessig once argued, technical protocols effect a kind of “law” analogous to treaties, statutes, judgments, and administrative regulations, then by what standard should that “law” be regarded as legitimate and authoritative? Comparable questions have been asked from time to time with regard to informal social norms that seem to operate online, and more frequently with respect to the private but apparently governmental institutions, particularly the Internet Corporation for Assigned Names and Numbers (ICANN), that have evolved over the last decade to govern the wilds of the Net.
Lawrence Lessig, in Code and Other Laws of Cyberspace, and later Michael Froomkin, in Habermas@discourse.net, chose to look at legitimacy in cyberspace from the perspective of normative political theory. Jonathan Weinberg, in this chapter from the International Handbook on Informal Governance titled “Non-State Actors and Global Informal Governance – The Case of ICANN,” steers clear of such normative judgments and instead approaches the task explicitly as one of sociological, or descriptive, legitimacy. Legitimacy is important, as Weinberg notes, in part because perceptions of an institution’s legitimacy powerfully impact willingness to comply with its commands or defer to its arrangements. Though he does not argue the case explicitly, legitimacy is central to institutional authority. Legitimacy and social order – online and off – go hand in hand.
Weinberg therefore renews a fundamental jurisprudential question, but by adopting a sociological framework, he enters territory only lightly trod by cyberlaw scholars. That turn enables him to base his argument not on first principles, but on descriptive analysis. The chapter is a case study, cleanly and evenly told, of the emergence and stabilization of ICANN as the private, non-governmental but nonetheless legitimate authority with respect to the ‘root’ that stores the technical specifications for the domain name system on (or of) the Internet.
The basic history of ICANN is recounted: its formation as a California corporation entrusted with management of the root via a contract endorsed by the United States Department of Commerce, its promulgation of requirements and regulations to be followed by accredited domain name registrars, its endorsement of the Uniform Domain Name Dispute Resolution Policy (UDRP). Weinberg accurately characterizes all of this as a species of governance, perhaps not “collaborative” public/private governance in the sense developed in American administrative law scholarship, but “informal” governance in the flexible sense developed in European political theory. Within that “informal governance” framework, he tries on a series of possible accounts of the legitimacy of ICANN, its processes, and its outputs, finding each of them wanting. The organization initially sought to establish legitimacy via mastery of the technical standards governing administration of the root, asserting that its authority was limited solely to technical coordination. When its essential policy role quickly became apparent, ICANN tried to recover by adopting and following what appeared to be “democratic” processes, and by strategies of consensus-building among its “grass-roots” constituents. Weinberg persuasively establishes the failure of these strategies.
He concludes, by contrast, that ICANN has succeeded in establishing its legitimacy when the organization is understood through the lens of what Powell & DiMaggio refer to as “institutional isomorphism”: the idea that an organization acquires legitimacy within its institutional setting by taking on the characteristics expected of it by client and constituent organizations. ICANN, an essentially bureaucratic enterprise operating in an environment dominated by the expectations of business-oriented constituents, met its legitimacy challenge by adopting the bureaucratic and professionalized character that was expected of it.
The relative brevity of the piece means that its central argument is not brought definitively to a conclusion. What does the ICANN experience tell us about the nature, meaning, and significance of legitimacy? Is it possible to conclude that ICANN, or any modern institution that blends governmental, democratic, technological, and bureaucratic justifications, is or is not “legitimate”? Weinberg focuses on the details of the case rather than an elaboration of theories of legitimacy, and in so doing he leaves the door open for a great deal of further scholarship in this vein.
Cite as: Michael Madison, Exploring Legitimacy in Internet Institutions, JOTWELL (December 10, 2010) (reviewing Jonathan Weinberg, Non-State Actors and Global Informal Governance – The Case of ICANN, in International Handbook on Informal Governance (Thomas Christiansen and Christine Neuhold eds., forthcoming 2011)), https://cyber.jotwell.com/exploring-legitimacy-in-internet-institutions/.
Oct 13, 2010 James Grimmelmann
There is a distinctive NYU School of Internet studies: philosophically careful, intellectually critical, rich in detail, and humanely empathetic. Its unofficial dean is Helen “values in design” Nissenbaum; her colleagues and students have included Siva Vaidhyanathan, Michael Zimmer, Gabriella Coleman, Alexander Galloway, and Gaia Bernstein. Almost none of them are lawyers (Seton Hall’s Bernstein being the notable exception), but their work speaks to those of us who are.
One of the most recent additions to the NYU School is Joseph Reagle, who received his Ph.D. in Media, Culture, and Communications in 2008 and is now a fellow at Harvard’s Berkman Center. His new book, Good Faith Collaboration: The Culture of Wikipedia (MIT Press, 2010), is an ethnography of Wikipedia, a modest, beautiful book that analyzes the site’s “good faith collaborative culture.” Reagle offers an extended reading of how this culture emerges from the interplay of ideology, technology, and social practice.
Why is Wikipedia’s culture so important? Nazis. Godwin’s Law (named after its author, Mike Godwin, who is now Wikipedia’s general counsel) states, “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” This commonplace of online discourse captures one of the hard facts about online collaboration: discussions in text-only online media have a centrifugal tendency toward unproductive, extreme positions. Wikipedia, in particular, also faces the problem of ideologically self-interested editing; Reagle leads with a case study of an incident in which members of Stormfront tried to shape Wikipedia’s coverage of the neo-Nazi movement, leading to substantive disputes over article content and even more fractious procedural disputes over how other Wikipedians should respond. In the face of these dissipative forces, collaborative culture holds Wikipedia together.
Reagle convincingly argues that there is a crucial link between Wikipedia’s core substantive commitment (“NPOV,” short for “Neutral Point of View”) and its core procedural commitment (“Assume Good Faith”). NPOV refuses to privilege any one version of “the truth” and thus requires articles to fairly present all sides. Assume Good Faith and its related norms of patience, civility, and humor refuse to privilege any person. Everyone — even neo-Nazis — is welcome to edit. Open-mindedness, about arguments and about people, is thus central to Wikipedia’s culture.
The picture of Wikipedia that emerges is messy, contentious, and productive. Conflict is routine; NPOV and Assume Good Faith are sometimes honored only in the breach. Arguments over small matters like naming conventions may seem like a tremendous waste of energy. But this endless series of discursive crises, small and large, in fact keeps Wikipedians engaged in articulating — in producing — the spirit of collaboration. This point is consistent with Dave Hoffman and Salil Mehra’s conclusion in Wikitruth Through Wikiorder that Wikipedia’s arbitration system “functions not so much to resolve disputes and make peace between conflicting users, but to weed out problematic users while weeding potentially productive users back in to participate.” I would add that Reagle also shows why the arbitration system itself is of secondary importance in Wikipedia’s collaborative structure; the real work of holding it together and negotiating its meanings takes place on its Talk pages, mailing lists, and meetups.
The book’s central chapters deal with a twinned pair of threats to this open, good-faith model: that it will be too chaotic and that it will be too controlled. Observing Wikipedia’s anyone-can-edit ideals, some critics have worried that it will be overrun by vandals, trolls, sock puppeteers, and the just plain ignorant. Others fear that Wikipedia betrays those same ideals by vesting too much control in a shadowy group of administrators led by Jimmy Wales, who have the software-based power to censor, revert, and bully. (One of Reagle’s chapter epigraphs — J.S’s second law — amusingly plays on this fear.) Eric Goldman, one of Wikipedia’s most thoughtful academic critics, has argued that excessive openness and excessive control are Wikipedia’s Scylla and Charybdis, and has questioned how long the channel between them will remain wide enough to be navigable. Reagle gives more cause for optimism; he shows how a self-produced culture of collaboration has so far enabled Wikipedia to resist both external threats and internal capture.
Two sections stand out as particularly astute. The first is Reagle’s discussion of “neutrality” (building on his previous work), which explains how a term without a clear underlying meaning can still be an effective principle around which to organize a community. The second is his chapter on “encyclopedic anxiety,” which demonstrates that much criticism (and more than a little praise) of Wikipedia is in fact unrelated to how it does or doesn’t work. Instead, people project their hopes and fears onto reference works; concerns about Wikipedia’s open editorial policies are arguably just another iteration of previous concerns about whether dictionaries should present the “is” or the “ought” of language. Reagle is too polite, though, to criticize even the deeply misguided: the whole book is suffused with a generous tolerance. For such a thoughtful analysis of Wikipedia’s good-faith culture, that is very much as it should be.
Jul 26, 2010 Ian Kerr
Jennifer A. Chandler, The Autonomy of Technology: Do Courts Control Technology or Do They Just Legitimize its Social Acceptance?, 27 Bull. Sci. Tech. & Soc. 339 (2007), available at SSRN.
There’s this feeling I sometimes get browsing law review articles. It happens, like, once or twice in a decade. When it happens, I am so utterly struck by an article’s hypothesis that its supporting arguments practically fall by the wayside. Not because those arguments aren’t important or convincing. Ultimately, they are crucial. But, on rare occasions, the arguments are eclipsed by the author’s incredible insight in the formulation of the research question itself. This feeling that I am describing is the academic’s equivalent to a Jerry Maguire moment.
And, let me just say, Jennifer Chandler’s “The Autonomy of Technology” had me at hello.
Chandler examines the “autonomy of technology” thesis—a rather odd philosophical notion made famous by Jacques Ellul and Langdon Winner, that “technology tends to move along a trajectory that is relatively impervious to deliberate social control and that society instead tends to adapt its values to technological change.” (P. 341.) While this particular philosophy of technology has in recent years suffered many slings and arrows from the social constructivist camp (who argue that “technologies are very much shaped by social factors and the appearance of determinism arises because the social interests at stake in technological design are forgotten once the technology is completed” (P. 342)), Chandler wonders whether the “autonomy of technology” thesis might be useful in illuminating the role of courts in the social control of technology. As she asks in her subtitle: do courts control technology or do they just legitimize its social acceptance?
Offering three interesting case studies, Chandler tries to demonstrate the possibility that “the courts may be systematically supporting the social acceptance of technology and technological values as they develop and apply the private law of tort and contract.” (P. 342.) This possibility, she thinks, is reminiscent of the “autonomy of technology” thesis: “despite our belief that we direct the development of technologies and choose whether or not to use them, this control is more or less illusory.” (P. 341.)
Chandler draws on several philosophical concepts, including Langdon Winner’s “reverse adaptation”—the phenomenon that, “[a]s a technology emerges, human ends are adjusted to match the available means.” (P. 341.) Chandler’s hypothesis is that judges, though they believe themselves to be autonomous, authoritative regulators of emerging technologies, are in some sense compelled through various private law principles and legal techniques to support and legitimize novel technologies within society. As Winner himself once put it, “[w]e may firmly believe that we are developing ways of regulating technology. But is it perhaps more likely that the effort will merely succeed in putting a more elegant administrative façade on old layers of reverse adapted rules, regulations and practices?” ((Langdon Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought 320 (MIT Press) (1977).))
My favorite of Chandler’s three case studies examines the judicial construct that “harm is caused by rejecting technology.” (P. 342.) The discussion centers on one of Canada’s most interesting tort cases in recent years, a class action suit commenced on behalf of a group of organic farmers seeking damages from agricultural biotech giants Monsanto and Bayer. ((Hoffman v. Monsanto Canada Inc., 264 Sask. R. 1 (Q.B. 2005), aff’d 293 Sask. R. 89 (C.A. 2007).)) According to the farmers, the foreseeable pollen drift from Monsanto’s and Bayer’s genetically modified canola products contaminated their crops, causing harm by thwarting their ability to grow certified organic canola. However, Monsanto and Bayer responded by claiming that the harm was not caused by their corporate release of genetically modified canola but rather only by the standards required to obtain organic certification (those standards being incompatible with the products’ inevitable drift) as well as by the farmers’ own attempts to adhere to those standards.
As Chandler very astutely observes, the court, in favoring the defendants’ position on this issue, reasoned along lines that provide a classic illustration of Winner’s reverse adaptation thesis. Adapting human ends to available technological means, “it is not the parties modifying the environment with a novel technology that cause harm to others, but the parties seeking to avoid the use of the new technology that bring harm upon themselves.” (P. 343.) According to Chandler, “[t]he courts … are helping to make the technology an invisible part of the ‘cultural’ wallpaper, such that a rejection of available technology is irrational and is the source of any harm suffered.” (P. 344.)
Interestingly, Chandler goes on to demonstrate that the court’s reverse adaptation rule is not a one-off phenomenon. In her second case study, she offers a fascinating explication of the doctrine of mitigation in tort law to show that courts expect individuals to submit to technologies considered reasonable (from the perspective of rational risk) in order to mitigate harms caused by others. According to the mitigation doctrine, where the majority has embraced a particular technology, a plaintiff will also be required to adopt it if the technology would assist in mitigating the plaintiff’s losses. As Chandler points out in a detailed discussion of the existing Canadian case law, “[t]his becomes particularly troubling in the context of medical technologies, where a plaintiff must submit to [an unwanted] treatment if he or she wishes to recover compensation for injuries… [T]he economic duress faced by persons unable to work as a result of their injuries will in some cases exert serious pressure to comply with the mitigation requirement in order to obtain compensation through the courts.” (P. 344.)
The mitigation doctrine, Chandler concludes, is a means by which the private law renders various emerging technologies reasonable. Its tenets require judges to “promot[e] the cultural integration of technologies by labeling as unreasonable an attempt to avoid them.” (P. 346.) Chandler sees the mitigation doctrine as part of a systematic tendency “to legitimize certain technologies and to put pressure on dissentients to submit to them.” (P. 346.)
These case studies on private law’s notion of harm and its potential mitigation (and a third study on standard form/shrinkwrap contracts) provide a measure of support for Chandler’s working hypothesis “that judges, through various private law principles, support and legitimize novel technologies.” (P. 348.) At the same time, by carefully referring to hers as a “working hypothesis,” Chandler remains open to the possibility of refutation, expressly stating that “further work would be helpful in identifying counter-examples and in studying other legal doctrines to see if they support or undermine the hypothesis.” (P. 348.)
Without trying to ram it down our throats, Chandler presents a very interesting, plausible, and intuitive prima facie legal case for the rather implausible and generally counter-intuitive “autonomy of technology” thesis. When I said at the outset that she “had me at hello,” I meant that even if her overall argument turns out to be incomplete, incorrect, or unconvincing, her article offers up some extremely tasty food for thought: do the structures, doctrines, and methods of private law have the systematic effect of legitimizing the social acceptance of certain technologies?
Now, that is one very cool question for cyberlaw scholars to consider!
So cool, in fact, that I think Chandler ought to be forgiven for stacking the deck in favor of her working hypothesis by “tak[ing] a narrow approach” (P. 348) and by “looking only at certain private law doctrines” (P. 348). Her work is better understood as a challenge, to herself and to others, to investigate further the rather bold assertion that courts have a systematic bias in favor of technology. I am hopeful that she will continue to do so. The topic certainly merits a full-length monograph, graduate dissertations, and further published law review articles.
I also loved this article because it epitomizes the breadth and depth of Chandler’s thinking and the beauty of her insight. Through her exploration of the philosophical debate on technological determinism across three doctrines in private law, Chandler invites cyberlaw scholars to ask and answer questions that will not only help to ground policy discussions about particular emerging technologies, but will also allow us to carefully reflect upon deeper juridical questions and issues surrounding the nature of law itself.
And, that’s a tall order.
Let me finish with just a few of the questions burning in my mind.
If courts really are biased in favor of technology, what exactly is the cause of the bias? And, what makes the bias systematic? Can and should this bias be undone? Or, do core private law values (e.g., reasonableness, efficiency) necessarily favor the technological society? And, if so, how so? Is the advancement of technology impervious to judicial conservatism or discretion? Should law itself be understood as a kind of technology? If so, what does the “autonomy of technology” thesis teach us about the nature of law or our (in)ability to control it?
Although the “autonomy of technology” thesis may seem farfetched to some, it is important for cyberlaw scholars to remember that our entire field is in fact premised upon one of its core tenets. Joel Reidenberg’s lex informatica and Lawrence Lessig’s “code is law” are both derivative of Langdon Winner’s famous idea that artifacts have politics. As Winner put it, “A crucial turning point comes when one is able to acknowledge that modern technics, much more than politics as conventionally understood, now legislates the conditions of human existence.” ((Winner, note 1 above, 324.)) In large measure, cyberlaw and technology policy analysis begin with this acknowledgement.
One of Jennifer Chandler’s central insights, though she never expresses it as such, is that the “autonomy of technology” thesis does not entail a wholesale adoption of technological determinism. As Winner so eloquently stated: “It is somnambulism (rather than determinism) that characterizes technological politics—on the left, right and center equally.” ((Id. at 324.)) Like Langdon Winner, Jennifer Chandler seeks to wake up those who would simply assume that technology is neutral and that judges (and other regulators) control technology through the application of rules. In her excellent preliminary work on the subject, she encourages a deeper understanding of the relationship between law and technology, beckoning a reconsideration of Thoreau’s famous remark that perhaps, “we do not ride on the railroad; it rides upon us.” ((Henry David Thoreau, The Annotated Walden: Walden; or, Life in the Woods 223.))
Cite as: Ian Kerr, Juridical Delusions of Control?, JOTWELL (July 26, 2010) (reviewing Jennifer A. Chandler, The Autonomy of Technology: Do Courts Control Technology or Do They Just Legitimize its Social Acceptance?, 27 Bull. Sci. Tech. & Soc. 339 (2007), available at SSRN), https://cyber.jotwell.com/juridical-delusions-of-control/.