Empirical Link Rot And The Alarming Spectre Of Disappearing Law

Something Rotten in the State of Legal Citation trumpets an important alarm for the entire legal profession, warning us that current modes of citing websites in judicial opinions create a very real risk that opinion-supporting citations by courts as important as the United States Supreme Court will disappear, becoming inaccessible to future scholars. The authors of this important and disquieting article, Raizel Liebler and June Liebert, both have librarianship backgrounds, and they effectively leverage their expertise to explicate four core premises: legal citations are important; web-based legal citations can and do disappear without notice or reason; disappearing legal citations are particularly problematic in judicial opinions; and finally, to this reader's vast relief, there are solutions to this problem, if only the appropriate entities would care enough to implement them.

Denoting the disappearing-citation phenomenon with the vivid appellation "link rot," Liebler and Liebert explain that link rot badly compromises the crucial ability to check and verify citations, and they demonstrate the point with frankly shocking empirical evidence. According to their research:

[T]he Supreme Court appears to have a vast problem with link rot, the condition of internet links no longer working. We found that the number of websites that are no longer working cited to by Supreme Court opinions is alarmingly high, almost one-third (29%). Our research in Supreme Court cases also found that the rate of disappearance is not affected by the type of online document (pdf, html, etc.) or the sources of links (government or non-government) in terms of what links are now dead. We cannot predict what links will rot, even within Supreme Court cases. (P. 278).

They warn that without significant changes to current practices, the information referenced in citations within judicial opinions will be known solely from the citations themselves. When citations lack lengthy parentheticals or detailed explanatory text, it might not even be clear to future readers, critics, or researchers why a document was cited, much less the nature of the support or clarification it offered.

Liebler and Liebert acknowledge that the Internet has improved legal research in many ways, opening up information conduits that had not been easily available before, and that in many respects website citations were an exciting development for the Supreme Court. They note that Justice Souter was the first Justice to cite the Internet, in 1996, in a concurrence, and "then in 1998, Justice Ginsburg used the Internet for sources to demonstrate different meanings of the word 'carry' in her dissent." (P. 279). By 2006, all of the Justices then serving had cited at least one website. Internet-based citations continued to blossom, and Liebler and Liebert's research establishes that between 1996 and 2010, 114 majority opinions of the Supreme Court included links, but that almost one-third of them are no longer working. Link rot at the Supreme Court is extant, widespread, and pernicious. Among several arresting examples they offer is the following:

In Scott v. Harris, a video with a dead link was cited extensively by both the majority and minority opinions, serving as the focal point of a serious disagreement in the case. The majority opinion states, “We are happy to allow the videotape to speak for itself.” Additionally, the majority used the citation to the video to disagree with the dissent, stating that “Justice Stevens suggests that our reaction to the videotape is somehow idiosyncratic, and seems to believe we are misrepresenting its contents.” (P. 282).

Even when information cited by the Court remains available on the Supreme Court website, it is often relocated; the old links are not amended to point to the new location, so they are as good as dead, since researchers will quite reasonably assume them to be. Liebler and Liebert's findings affirm earlier research charting extensive link rot in many other contexts, such as law review articles. Even more disturbingly, their research is in accord with "a study of federal appellate opinions [which] found that in 2002, 84.6% of Internet citations in cases from 1997 were inaccessible; moreover, 34% of citations in cases from 2001 were already inaccessible by 2002." (Pp. 290-91).

Liebler and Liebert's stunning revelations are a simple matter to confirm in any subject area. For example, one of the most important copyright cases the Supreme Court has ever decided was Sony Corp. v. Universal City Studios in 1984. The long and not particularly well written majority opinion set the balance between content owners like Universal Studios, and companies like Sony that produced new and innovative technologies (in this case the Betamax videocassette recorder), with respect to secondary copyright infringement. Under Sony, a new technology that was capable of substantial non-infringing uses could not be enjoined from distribution on the grounds that it contributed to copyright infringement. Sony was controversial, and its convoluted drafting gave lawyers and judges the opportunity to read a multitude of meanings into it. As a copyright law geek of long standing, this author has seen that majority opinion in Sony parsed, diced, and sliced by lower courts, and ultimately repackaged as a shadow of its former self by a unanimous Supreme Court in 2005 in MGM Studios v. Grokster. But at least link rot was not a worry. The same cannot be said of Grokster, wherein Justice Breyer's concurrence contains links, some of which have already rotted. In fairness, he notes that "all Internet materials … are available in Clerk of Court's case file," but it is not at all clear how easy it might be for a researcher to access this now, or especially five years from now. According to Liebler and Liebert, the case files are only available to those with sufficient means to go to Washington, DC, and visit the office of the Clerk of the Supreme Court. (P. 300).

Another thing one learns from Something Rotten in the State of Legal Citation is that the Supreme Court often does its own web-based fact-finding. Liebler and Liebert inform readers that Allison Orr Larsen conducted a study of fifteen years of Supreme Court opinions and "found that of the over one hundred 'most important Supreme Court cases' from 2000 to 2010, 56% include mentions of facts the Justices did not find in the record and instead found independently." (P. 278). Liebler and Liebert quote her stunning findings as follows:

[I]t was quite common for Justices to demonstrate the prevalence of a practice through statistics they found themselves. And, at a fairly high rate these statistics were supported by citations to websites—I found seventy-two such citations in my non-exhaustive search. Importantly, statistics were independently gathered from websites with widely ranging indicia of reliability.1

While it is sort of amusing to picture the Justices surreptitiously doing their own googling when they get bored during oral arguments, it's a little disconcerting to think of them relying even briefly on misinformation-ridden sites like Yahoo Answers. Yahoo has not cornered the market on dumb, because the Internet does not have corners, but Yahoo Answers is rather infamous for exchanges such as:2

Question: Is it wrong to hate a certain race?

Answer: No, because if you are only used to running a 5k, doing a 10k with your jogging group is going to take too long. I hate 10ks myself for this very reason.

Question: Why doesn’t the Earth fall down?

Answer: Because it can fly.

Question: I plan on starting a business selling dognuts, any advice?

Answer: If you want people to eat them, I would call them doughnuts.

Question: Does vodka kill bees and wasps?

Answer: Yes, over time it will destroy their tiny livers, but it is the disruption to the home life that really takes its toll.

One wonders about the quality of the information that the Justices are finding online, and this practice is even more dangerous if link rot means that citations to the Justices’ independent research cannot be assessed or verified. And if Supreme Court Justices are engaging in the dubious practice of doing their own online research about cases before them, one has to assume lower court judges are doing so as well.

Liebler and Liebert conclude their outstanding article by recommending possible solutions to the link rot problem. "Ideally," they say, "every court should digitally archive all materials cited within an opinion, regardless of the format." (P. 299). They observe that:

In 2009, the Judicial Conference of the United States created a report titled Internet Materials in Opinions: Citations and Hyperlinking that recommended two primary solutions to the broken Internet link problem: Clerks should download any cited Internet resources and include them with the opinions. The downloaded Internet resources should be included as attachments on a non-fee basis in each court’s Case Management/Electronic Case Files System, such as PACER. (P. 301).
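The technical side of that recommendation is not hard; detecting the rot and locating an archived copy to preserve can be done in a few lines of code. The sketch below is only an illustration (the cited URL is hypothetical, and it assumes the Internet Archive's public Wayback Machine availability API in its current JSON form), not the Judicial Conference's or the authors' actual tooling:

```python
import json
import urllib.parse
import urllib.request

def is_link_dead(url, timeout=10):
    """Return True if the cited URL no longer resolves cleanly.
    (Some servers reject HEAD requests, so treat this as a first pass.)"""
    try:
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "link-rot-check"})
        urllib.request.urlopen(req, timeout=timeout)
        return False
    except Exception:          # HTTP error, DNS failure, timeout, refused connection...
        return True

def wayback_snapshot(url):
    """Return the closest archived copy recorded by the Wayback Machine, if any."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Hypothetical example of the sort of link cited in an opinion:
cited = "http://www.example.gov/report.pdf"
if is_link_dead(cited):
    print("Dead link; archived copy:", wayback_snapshot(cited) or "none found")
```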

PACER is not without its drawbacks, but there are other alternatives as well, including the Internet Archive or other web archiving services, or permanent URLs. The main takeaway from this valuable article is that something needs to be done about link rot, and the problem needs to be addressed quickly and expansively. Liebler and Liebert have done a great service to the entire legal profession by bringing link rot to our attention and mapping the gigantic contours of the problem so compellingly.



  1. See Allison Orr Larsen, Confronting Supreme Court Fact Finding, 98 Va. L. Rev. 1255, 1288 (2012) (including discussion of the Justices' use of websites to conduct research during oral argument and for opinions).
  2. These are representative, edited versions of Yahoo Answers, screen grabs of which are on file with the author.
 
 

What Do People Think About Copyright?

Lee Edwards, Bethany Klein, David Lee, Giles Moss & Fiona Philip, Isn't it just a way to protect Walt Disney's rights?: Media user perspectives on copyright, 16 New Media & Soc'y (2013).

When it comes to the issue of copyright in the digital age, it is not uncommon to read claims and counter-claims regarding the public's perception of copyright enforcement and infringement through file-sharing mechanisms. Public policy in the field is often driven by assumptions, and claims about its effectiveness and efficiency tend to be nothing more than guesswork. While copyright policy has been the subject of several government-funded reviews in the UK in recent years, these have usually failed to consider the end-user of copyright works, which seems to cement the idea that the subject is too complex for the public. It is therefore a very refreshing development when research is conducted to provide us with a better empirical understanding of what the public really thinks about copyright, going beyond mere conjecture and potential biases.

In Isn't it just a way to protect Walt Disney's rights?, the authors set out to engage in an empirically sound exercise in order to ascertain the validity of various statements that are often part of copyright debates. They put together a series of focus groups designed to capture the opinions of "ordinary media users," a group whose views, the authors claim, are not often represented in copyright debates. The study's methodology consisted of twelve focus groups based in Yorkshire, England, each ranging from three to ten participants, who were recruited as pre-existing groups of media users varying in age, background, and experience with downloading media. The groups were asked to discuss topics relating to copyright, the creative industries, digital media, downloading, and piracy for over an hour, and while the discussions were moderated, participants were given a set of open-ended questions to explore their experience, attitudes, and behaviour with regard to copyright.

The results of the research are fascinating. One of the most striking elements is that users seem to be confused as to what constitutes copyright infringement, a confusion that has been corroborated by other surveys in the UK (an Ofcom study found that 44% of respondents could not identify with certainty whether content was legal or illegal).

Another intriguing result is that, while panel members agreed with some of the justifications for copyright, such as the right of creators to derive monetary gain from their works, there was a large disconnect between the creator and the copyright industry. In other words, when faced with the opportunity to download pirated content, users displayed a complex array of justifications that combined rationalization and even cynicism toward the copyright industry.

The study unearths the very complex relationship between users and content, one that does not fit the simplistic view of lazy and greedy pirates who never pay a penny for copyright works. One of the participants made this clear when describing the way in which they interact with film releases:

A film comes out in the movies. And if it looks really good, I’ll go watch it. And then, I dunno, a couple of weeks go by, and you can get a relatively good quality [copy] online and download it illegally. And, I’ll do that so I can watch it at home. And then another few months go by and I can buy the DVD for a quid.

Participants seemed somewhat critical of the "legal" alternatives offered by the industry. While older group members seemed more content with the purchasing choices they were presented with, younger and more technology-oriented users seemed less impressed with the options offered by platforms such as Spotify and iTunes.

Another interesting finding is that users tended to describe downloading and file-sharing as something transitory: something to be done while no legal alternative exists, for example, or while one does not have enough money to purchase content legally. Similarly, the delay between a TV show being distributed in the US and in Europe was identified by participants as an important factor driving piracy levels up. Users also seemed more comfortable with sharing content among friends and family than with widespread and indiscriminate file sharing online.

The study concludes that:

Focus group discussions demonstrated users as complex, rational and cynical in their approach to copyright, challenging stereotypes of infringers as knowingly criminal or naively ignorant, rescuing the collapse of the public into outdated notions of pirates, and broadening the one-dimensional portrayals that sometimes lurk in the background of less user-grounded frameworks and arguments. The historically embedded criminalization of users may be difficult to dislodge, but it is vulnerable to analysis that situates everyday behaviour and views within wider social, political and cultural contexts and which allows user voices to challenge and critique dominant justifications while contributing justifications of their own.

This is a very welcome addition to copyright literature, one that gives us a hint about the complex relationship between users and copyright.

 
 

What’s So Special About Information Security

Andrea M. Matwyshyn, The Law of the Zebra, 28 Berkeley Tech. L.J. 155 (2013).

A debate continues to brew about the proper interpretation of the Computer Fraud and Abuse Act (CFAA), the federal statute that imposes criminal penalties on individuals who access computer networks without authorization.  For at least a decade, scholars and a growing number of courts have wondered whether the owner of a computer network could define “authorization” using form “terms and conditions” of the sort often presented to consumers who purchase or use digital services.  If that strategy were successful, then someone who clicked “I Agree” on a digital form yet failed to comply with all of its terms might be accused – even convicted – of the federal crime specified by the CFAA.

Andrea Matwyshyn uses that apparently technical problem to revisit a much larger question:  when, whether, and how should the law treat computers and computer networks as special when dealing with doctrinal and policy issues such as commercial law, intellectual property law, telecommunications law, antitrust law, and criminal law?  This was the subject of a famous scholarly debate back at the turn of the 21st century between Lawrence Lessig, who argued that considering a "law of cyberspace" offered commentators access to potentially valuable insights about how people interact with each other,1 and Judge Frank Easterbrook, who accused cyberspace promoters of constructing an unworkable and unhelpful "law of the horse."2 No one "won" the debate in its original form, but in the late 1990s the question was mostly academic, literally.  Too few law and policy judgments turned on the answer to make the debate matter in any but a conceptual or theoretical sense.

Matwyshyn’s “The Law of the Zebra” suggests that the answer does matter in a concrete set of cases, and she has the case reports to show it.  Her answer is that both Lessig and Easterbrook were right:  There is something special about computers and computer networks.  But what’s special about them is that judges should not be seduced into treating them as something new and strange.  Most of the time, the common law deals with them, or should deal with them, just fine.  Courts that fail to remember that fact are dealing in a “law of the zebra,” an unusual creature, rather than a law of the horse (of course), an ordinary and more common animal.

In one sense, then, the article resumes a dialogue about metaphorical treatments of the Internet that captured the imaginations of a host of legal scholars a decade or so ago, including me.  Is cyberspace a thing?  A place?  A frontier?  A horse?  A zebra?

That question has no single answer, and Matwyshyn is smart enough not to propose one.  Instead, she wants to show how the alleged specialness of computer networks leads courts astray.  The CFAA and breaches of relevant contracts form the doctrinal backbone of an inquiry into techno-exceptionalism.

She shows how courts have dealt with contract formation and breach of contract questions in computer access contexts in inconsistent ways, and how that inconsistency has affected application of the CFAA.  She identifies her normative baseline – a series of related principles or propositions that define a common law contract framework – and argues that in contract formation questions, a degree of techno-exceptionalism is warranted; in contract interpretation and enforcement contexts, “regular” contract law will do.  Using four paradigmatic examples of “types” of computer hackers who might breach agreements with network providers – the sorts of people the CFAA was arguably drafted to deal with – she shows how her balanced form of “restrained technology exceptionalism” treats CFAA/contract law intersections.  Ordinary contract remedies are sufficient to deal with the harms that result from most types of unauthorized network access linked to bypassing agreed-to terms and conditions.  She argues that adding criminal liability under the CFAA to those remedies amounts to a sort of “weaponized” breach of contract that is warping basic contract law as applied in computer contexts, is bad policy, and arguably conflicts with Constitutional law prohibiting peonage. The proper way to look at the CFAA/contract interface, she argues, is through the prism of private ordering, a framework that is consistent both with Lessig’s view of cyberspace law (in which computer networks present novel forms of private ordering for fresh normative evaluation) and Easterbrook’s (in which existing doctrinal categories were more than adequate to that normative task).

On the doctrinal question, is she right?  Possibly.  But the doctrinal means are less important here than the policy ends.  In effect, Matwyshyn argues that contract remedies should preempt CFAA liability where the two overlap.  That sort of “reverse federalism” (“reverse” because, of course, we rarely think of state law preempting federal law) is, in a perverse way, quite consistent with a heterogeneous, anti-one-size-fits-all view of the Internet.  Matwyshyn is not making an appeal to an idealized “information wants to be free” fantasy.  Instead, she points out that the real policy at stake in interpretations of the CFAA, and in metaphorical debates about horses and zebras, is information security.  Linking criminal liability under the CFAA to breaches of the standardized, form-based terms and conditions that are essentially ubiquitous on the Internet trivializes the idea of access and undermines incentives for network providers to care properly for information that they truly care about.



  1. Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).
  2. Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207.
 
 

The Cancer of the Internet

Finn Brunton, Spam: A Shadow History of the Internet (MIT Press, 2013).

Technologies do not come with social or legal instruction manuals. There is nothing inherent in rooftop strobe light bars to suggest that police may use them but not civilians, or in thermal imaging cameras to suggest the reverse. The public must figure out what to do with each technology as it becomes available: embrace, ignore, regulate, ban. If we are lucky, the rules distinguishing acceptable from forbidden uses can come, over time, to seem like natural features of the technology itself. But they are not: the rules have to come from somewhere, and someone had to work them out, somehow.

For an example, consider today's debates on what to do about drones. Or for another, consider spam, the subject of Finn Brunton's erudite and entertaining Spam: A Shadow History of the Internet. Brunton pushes his history far back before the 1994 advertisement from a pair of immigration lawyers that is usually thought of as spam's Ground Zero. He notes, for example, a 1971 antiwar message sent to every user of the Compatible Time-Sharing System and a 1978 announcement of a DEC computer demonstration sent to all West Coast ARPANET users–both of which provoked debate around the acceptable boundaries of network use. Brunton argues that well into the 1990s, spamming was considered a primarily social offense, separate and distinct from commercial self-promotion, and of an entirely lesser order than "net abuse" (P. 39) like crashing computers. Spam was a form of free speech, and like other inappropriate speech was to be met with censure rather than censorship.

But this attitude changed, and changed sharply, as the first wave of commercial spammers arrived en masse. Unlike the earlier “spammers,” who could be telephoned and reasoned with, or shamed into silence, or simply identified and ignored by users’ personal message filters, these new operators both flaunted their identity as outsiders to close-knit online communities and aggressively covered their tracks to keep the messages getting through. In the face of these new actors, Brunton shows that spam was effectively redefined as a legal and technical problem rather than a social one. To many antispam activists, the great danger of CAN-SPAM was that it would legitimize spam. But the combination of a legislative framework with reasonably effective filtering had another effect entirely–it “destroyed email spam as a reputable business model,” (P. 143) and “eliminated the mere profit-seeking carpetbaggers and left the business to the criminals.” (P. 144)

Spam is thoughtful about the ontology of its namesake. We are accustomed to thinking of spam as an email phenomenon. But, as Brunton effectively demonstrates, email spam is only one instance of a much larger pattern. Today there are Facebook spam, LinkedIn spam, blog comment spam, Twitter spam–and many more. Indeed, spam’s contested definitions create any number of difficult boundary cases. Gmail’s inbox tabs shunt “Promotions” into a separate folder, even when the recipients have affirmatively opted into receiving these emails. Or, to take one of Brunton’s examples, Demand Media “commissions content from human writers (who are willing to meet very low standards for very little money) on the basis of an algorithm that determines ad revenue over the lifetime of any given article.” (P. 162)

Brunton's own definition of spam, offered at the end of the picaresque tour, is "the use of information technology infrastructure to exploit existing aggregations of human attention." (P. 199) Both halves are exactly on point. Spam is medium- and technology-agnostic, but it is inherently a technological phenomenon: without the amplifying power of commodity copying, spam's characteristic bulk is impossible. And spam is essentially a problem of attention hijacking: the systematic conscription of large and diffuse audiences by abusive speakers.

Much of Brunton's story of spam is told through the eyes of its enemies, from the vigilantes who tried to burn out commercial spammers' fax machines to the modern programmers who build increasingly complex filters to identify and delete spam. Significantly, this is history through the eyes of its losers: the story of the tide as related by King Canute. Brunton conveys effectively the sheer frustration felt by anti-spam activists. The network they loved was being abused by outsiders who pointedly rejected their values, but they found themselves unable to stop the abuse. One countermeasure after another fell before the onslaught: killfiles, cancelbots, keyword filters, blackhole lists, and so many others.

Roughly the second half of the book is devoted to the remarkable technical evolution of computer-generated spam. Brunton traces the rise of keyword stuffing, hidden text, Oulipo-esque email generators, spam blogs, content farms, Mechanical Turk-fueled social spam, CAPTCHA crackers, Craigslist bots, malware as a source of spam, and online mercenaries renting out botnets to the highest bidder. This escalation–from a pair of immigration lawyers in over their heads to a “criminal infrastructure” industry (P. 195) in less than two decades–is nothing short of alarming.

Spam is also one of the most nuanced books to unpack what makes the postmodern post-Web 2.0 Internet tick. Borrowing Matt Jones’s concept of “robot-readable” media–”objects meant primarily for the attention of other objects” (Pp. 110-11)–Brunton gives an insightful metaphor of the uneasy coexistence of human and software readers online:

Consider a flower–say, a common marsh marigold, Caltha palustris. A human sees a delightful bloom, a solid and shiny yellow … A bee, meanwhile, sees something very different: the yellow is merely the edging around a deep splash of violet invisible to human eyes–a color out on the ultraviolet end of the spectrum known as “bee violet.” It’s a target meant for the creature that can fly into the flower and gather pollen. The marsh marigold exists in two worlds at once. (P. 110)

The visible language of QR codes and the invisible language of HTML tags are not meant for human consumption. They are there for our computers, not for us. But when we rely on those computers to find interesting things and show us the results, we leave ourselves open to a new kind of vulnerability:

If their points of weakness can be found, it is quite possible to trick our robots, like distracting a bloodhound with a scrap of meat or a squirt of anise–giving it the kind of thing it really wants to find, or the kind of thing that ruins its process of searching. The robot can be tricked, and the human reached: this is the essence of search engine spamming. (P. 113)

Brunton describes the current state of affairs, in which spammers and spam filters are locked in an arms race to master human linguistic patterns, as a parody of the Turing Test, “in which one set of algorithms is constantly trying to convince the other of their acceptable degree of salience–of being of interest and value to the humans.” (P. 150) And in the book’s conclusion, he circles back to spam’s central irony:

Indeed, from a certain perverse perspective … spam can be presented as the Internet’s infrastructure used maximally and most efficiently, for a certain value of “use.” … Spammers will fill every available channel to capacity, use every exploitable resource: all the squandered central processing unit cycles as a computer sits on a desk while its owner is at lunch, or toiling over some Word document, can now be put to use sending polymorphic spam messages–hundreds a minute, and each one unique. So many neglected blogs and wikis and other social spaces: automatic bot-posted spam comments, one after another, will fill the limits of their server space, like barnacles and zebra mussels growing on an abandoned ship until their weight sinks it. (P. 200)

Spam, in other words, is the cancer of the Internet. It is not an alien organism bent only on invasion and destruction. Rather, it takes ordinary healthy communications and extrapolates them until they become grotesque, obscene, deadly parodies of themselves. Spam is constantly mutating, and it cannot be extirpated, not without killing the Internet, because the mechanisms the two rely on to live are one and the same. The email is coming from inside the house.

 
 

Disruptive Contracting in Digital Music

Kristelia Garcia, Private Copyright Reform, 20 Mich. Telecomm. & Tech. L. Rev. (forthcoming 2013), available at SSRN.

Differential regulation of different technologies is baked into many of our laws, often on the basis of outdated assumptions, relative political power at the time of enactment, or other quirks. Internet exceptionalism is common, but perhaps nowhere more galling than as applied to music in the US, where the interests of terrestrial radio and music copyright owners combined to produce a regime so tangled that to call it ‘Byzantine’ is an insult to that empire.

Kristelia Garcia dives deep into the details of digital music law, focusing on two case studies that she finds promising and troubling by turns. While opting out of the statutory scheme may well be locally efficient and risk-minimizing for the participants, some of the gains come from cutting artists out of the benefits. Other third parties may also be negatively affected if statutorily set copyright royalty rates are influenced by these private deals without full recognition of their specific circumstances, or if adverse selection leaves the collective rights organizations (CROs) that administer music rights too weak to protect the interests of the average performer or songwriter. Garcia's paper suggests both that scholars must keep an eye on business developments that can make the law on the books obsolete and that specific legal changes are needed to protect musicians, songwriters, and internet broadcasters amid the dizzying pace of change in digital markets.

Garcia reviews the complex structure of music law, which defies summary. Of particular interest, § 114 of the Copyright Act provides for statutory licensing of the digital performance right in sound recordings in some circumstances, with rights administered by an entity known as SoundExchange. Despite serving an ever-converging audience indifferent to how its music is delivered, radio providers pay stunningly varying amounts. The Internet radio provider Pandora pays more than 50% of its revenue in performance royalties, while satellite and cable providers pay roughly one-sixth the rate Pandora pays, and terrestrial radio broadcasters pay nothing at all. But § 114 also allows copyright owners to negotiate with digital performing entities and agree on royalty rates and license terms that replace the statutory licenses.

Likewise, § 115 allows voluntary negotiations for royalties that replace the statutory license for musical works. Historically, performance rights organizations, primarily ASCAP and BMI, have administered the performance rights in musical works. These private organizations are not creatures of statute like SoundExchange, but they operate under antitrust decrees, with "rate courts" that review their licensing fees.

In June 2012, Taylor Swift’s record label, Big Machine, cut a separate performance rights deal with radio giant Clear Channel, circumventing SoundExchange. Surprisingly, Clear Channel agreed to pay performance royalties even for “spins” on terrestrial radio, for which there is no sound recording performance right. In return, Clear Channel received a lower rate for digital performances, and possibly other special considerations such as access to unique content. It received certainty in the form of a rate calculated as a share of revenue, rather than the less predictable per-play rate that would be owed SoundExchange under the statutory license. Big Machine may take a short-term loss, but may also receive preferential treatment for its artists. Sharing revenue means that Clear Channel no longer has an incentive to limit the number of Big Machine songs it plays, as it does under the statutory per-play royalty—and that means Big Machine is likely to get more airtime to promote its artists, which may result in increased cross-promotional opportunities.
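The incentive shift is easy to see with toy numbers. The sketch below is purely illustrative; the rates and revenue figures are invented for the example and are not the actual statutory or negotiated terms, which are set by the Copyright Royalty Board or remain confidential:

```python
# Toy comparison of a statutory per-play royalty versus a negotiated
# revenue-share deal. All numbers are invented for illustration only.
PER_PLAY_RATE = 0.0022          # hypothetical dollars owed per digital "spin"
REVENUE_SHARE = 0.03            # hypothetical share of station revenue owed instead
station_revenue = 1_000_000.00  # hypothetical annual digital revenue

def statutory_cost(spins):
    """Cost grows with every additional play, so the broadcaster rations spins."""
    return spins * PER_PLAY_RATE

def revenue_share_cost(spins):
    """Cost is a fixed share of revenue, so extra plays cost nothing at the margin."""
    return station_revenue * REVENUE_SHARE

for spins in (1_000_000, 5_000_000, 20_000_000):
    print(f"{spins:>11,} spins: per-play ${statutory_cost(spins):>8,.0f}"
          f"  vs  revenue-share ${revenue_share_cost(spins):>8,.0f}")
```

Under the per-play column the bill climbs with every additional Big Machine spin; under the revenue-share column it does not, which is exactly why the label can expect more airtime.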

Presumably, the parties wouldn’t have bargained around the law unless the contract benefited them both. But that’s not the end of the story. Under the compulsory license, Clear Channel would pay SoundExchange a set fee per digital spin of We Are Never Ever Getting Back Together; SoundExchange would distribute most of that directly to featured and non-featured performing artists (the latter include backup singers and session musicians, who have far less negotiating power than stars) and to the copyright holder, in proportions required by the statute. By opting out of §114, Clear Channel and Big Machine avoided the need to pay artists. (As Garcia notes, in theory artists could eventually claw back a share of such deals from record labels by contract, but the point of §114’s required distributions was Congress’s determination that artists, especially non-featured performers, lacked sufficient bargaining power to secure fair compensation in the market.) They also avoided the need to pay a share of SoundExchange’s overhead. These savings are, from others’ perspectives, externalities that the parties imposed on nonparticipants.

The deal is also an attempt to ease the grossly unjustified disparity between digital and terrestrial broadcasters, but only for Clear Channel, and the fact that legal discrimination against other Internet broadcasters remains in place creates further complications. If Clear Channel can pay less than the near-crippling performance royalties other digital broadcasters must pay, it obtains a marked competitive advantage. The private deal additionally allows it to predict its costs with much greater certainty—through both congressional and administrative interventions, statutory rates have changed by orders of magnitude, and they have to be re-set every few years. Separately, if the long-delayed but much-desired general public performance right for sound recordings is ever enacted, requiring terrestrial broadcasters to pay royalties to record labels, Clear Channel could well end up with an advantage over its terrestrial competitors. Even if that gamble doesn't pay off, if the future really is in digital radio, the lower digital rate may justify paying general performance royalties.

Garcia’s second example comes from the musical work side. Sony/ATV, a music publisher, accepted a lower performance royalty rate from DMX, a digital music service that provides music programming for retail stores and restaurants, in exchange for a large advance. Again, this deal enabled the parties to avoid paying artists, who evidently lack the power to force contractual change.

Moreover, songwriters—whether contracted to Sony/ATV or not—lost out because of the deal. DMX used the negotiated royalty rate (excluding the advance) to convince the rate court to lower rates across the board, reflecting the “market value” supposedly expressed in the Sony/ATV deal. A similar dynamic is possible with the Clear Channel/Big Machine agreement, since the market value of the digital performance right looks lower, despite the terrestrial performance royalties being paid as part of the overall package.

At the same time, Sony/ATV's withdrawal lowered the value of an ASCAP license by shrinking the size of its repertoire, and by increasing transaction costs for future licensees. Sony/ATV ultimately withdrew all its digital content from ASCAP. This is going well for Sony/ATV—in early 2013, it signed a direct license agreement with Pandora that increased its royalties by 25% over the ASCAP rate—but that's an unsurprising consequence of adverse selection. When the strongest participants with the most valuable catalogs opt out in order to cut their own deals, the average quality/value of the remaining catalogs goes down, making the CRO less desirable. And this is true even if, as a group and on average, the participants would maximize their return by banding together.

These private deals might seem to be more efficient than statutory licenses or CROs (which operate in the shadow of government regulation, including antitrust). But they are fundamentally shaped by pervasive government regulation of the music business for the past century. They are no more proof of the superiority of the “free market” than are commercially successful technologies first developed through government R&D funding. Indeed, to the extent that the rest of the market operates under pervasive legal constraints, these exceptional deals may be even more distortionary.

Still, Garcia argues, the benefits of certainty shouldn’t be discounted, especially those available through a revenue-based rather than per-play model. (Statutory rates are also capable of using revenue-based metrics, of course, and this has often been a part of proposals to fix the statutory digital performance royalty.) However, she points out that the interests of institutional stability are unlikely to coincide with the interests of artists.

Garcia uses her examples to examine arguments made by Robert Merges and Mark Lemley about the role of property rules and liability rules in encouraging the formation of private groups that engage in blanket licensing, as the CROs do: Merges argues for property rules as incentivizing the most efficient private arrangements, while Lemley contends that owners and users can and do also contract around liability rules where that's more efficient. The fact that parties now evade both compulsory licenses and private entities such as ASCAP, Garcia argues, complicates the analysis. Among other things, she suggests, the existing structures operate as starting points allowing the parties to frame defection in mutually agreeable terms, and provide a backup solution if negotiations fail. "Neither party has to commit to the terms vis a vis all partners, nor does either party have to engage in costly, multiple negotiations, since all extradeal licenses can continue under the compulsory regime." (P. 40.)

This portion of the paper could have used more discussion of adverse selection, as well as the related issue that the music business includes a few massive, oligopolistic entities with significant market power. Sony/ATV already represents a large bloc of artists, or at least of works. In such a concentrated market, the CRO may seem like just another layer of bureaucracy. But smaller or new entrants may join a CRO like ASCAP on nondiscriminatory terms, an option Sony/ATV generally does not offer. Instead of disintermediation and diversity, the current market structure for digital music seems to favor greater concentration—a problem explored by Tim Wu in his recent book.

Legal innovation often occurs without formal legal change. The easiest kind to see is when an enterprising lawyer discovers a formerly neglected cause of action and sets off a wave of lawsuits in the same area. Here, the innovation is different—market participants didn't bother to opt out of the statutory schemes for decades, but the visibility of Clear Channel and Sony/ATV's decisions means that others are likely to at least consider similar moves. In a time of diminished profits and overshadowing of traditional business models, the opportunity to lock in royalty rates regardless of changed laws, rate court rulings, and the like may prove tempting.

Garcia offers two tweaks to existing law in the absence of comprehensive reform (since even its proponents agree that it will take a very long time). First, she argues, parties who circumvent compulsory licenses should be required to follow the statutorily mandated distributions to artists, so that negotiating parties can’t benefit simply by virtue of writing them out of the deal. Second, all aspects of any private agreements used as evidence to set statutory rates must be fully disclosed, to avoid misrepresenting a deal with extra components as a market rate. These are modest proposals, and quite convincing, though the second in particular is likely to draw objections around protecting trade secrets and business models.

A larger lesson is that arbitrage opportunities, such as those produced by the differing treatment of terrestrial and digital performances, won’t be ignored forever, especially when sophisticated businesses are involved. The Internet offers many opportunities to experiment with business models. But without a sensible legal structure, the experimentation may only be in aid of externalizing costs on other parties and suppressing competition.

 
 

The Care and Feeding of Sticky Defaults in Information Privacy Law

Lauren Willis, When Nudges Fail: Slippery Defaults, 80 U. Chi. L. Rev. ___ (forthcoming 2013), available at SSRN.

If Jotwell is meant to surface obscure gems of legal scholarship, which might go unnoticed otherwise, I might be missing the point by highlighting a work forthcoming in the not-so-obscure University of Chicago Law Review on the au courant topics of nudges and libertarian paternalism. But Lauren Willis's new article, When Nudges Fail: Slippery Defaults, might escape the attention and acclaim it deserves as a work of information privacy law, so it is in that field I hope to give the article its due.

Willis’s article takes on the pervasive idea that all default choices are sticky. Defaults can sometimes be sticky, but Willis carefully deconstructs the economic, social, and technological preconditions that tend toward stickiness, and then demonstrates how firms can manipulate those conditions to render defaults quite slippery.

This article deserves to become a standard citation in information privacy law scholarship, and it is important in at least three ways. Most obviously, it uses online behavioral advertising and the Do Not Track debate as a recurring example, revisiting it throughout, and thereby makes a very useful contribution to that debate, which continues to rage.

Deeper, and more generally, the article delivers a blow—perhaps fatal—to the age-old "opt-in versus opt-out" debate. Should new, privacy invasive practices affect only people who opt-in to them, or should they instead apply to all except those who opt-out of them? Willis helps us understand that this debate, which has generated so much energy and discussion, may matter less than we think. "When firms have significant control over the process for opting out or the context in which the defaults are presented, firms can undermine the stickiness of policy defaults." In other words, firms can, and do, encourage, cajole, push, and deceive customers into opting in when they rationally should not.

As proof, the heart of the article presents a lengthy examination of failed attempts by regulators to limit what they have seen as predatory bank practices surrounding checking account overdraft coverage. Although it might seem like an unequivocal convenience to have banks cover rather than reject ATM withdrawals and debit card payments from accounts with insufficient funds, because of the fees they charge for this "service"—$20 or even more—these amount to "low risk, high cost loan[s]." Low risk, because the bank is paid back automatically with the next deposit, and high cost—Willis gives a typical example amounting to an effective 7,000% APR. In some cases, these banks offer alternative services that provide basically the same protection at orders of magnitude less cost. For one who worries about consumer welfare, this is a maddening story, told with detail and care. Banks lied about the benefits of the coverage. They buried the holdouts under a flood of paper and harassed them on the telephone. And in the end, they spurred droves of customers—according to one study, 75% of all customers and 98% of customers who overdraft more than ten times per year—to switch.
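To see how a four-digit APR arises, a back-of-the-envelope calculation helps. The numbers below are assumed for the illustration and are not Willis's actual example: a $20 fee to cover a $20 debit-card overdraft that is repaid from the next deposit five days later annualizes to an interest rate in the same ballpark as the figure she cites.

```python
# Back-of-the-envelope annualization of an overdraft fee as a loan.
# Assumed numbers for illustration; not taken from Willis's article.
fee = 20.00               # flat overdraft fee charged by the bank
amount_covered = 20.00    # size of the overdraft the fee "covers"
days_outstanding = 5      # repaid automatically from the next deposit

effective_apr = (fee / amount_covered) * (365 / days_outstanding) * 100
print(f"Effective APR: {effective_apr:,.0f}%")  # ~7,300%, in the ballpark of the ~7,000% Willis describes
```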

Going forward, one will be able to skim the first few footnotes of any article that uses the words “privacy,” “opt-in,” and “opt-out” to apply the “Willis Test.” Any such article that doesn’t cite this piece probably needn’t be taken seriously.

But at its deepest, and to my mind most interesting, level this article chips away at the faith we have placed in notice and choice, which is to say, at the foundation of most contemporary information privacy laws. Notice implies the transmission of accurate and fair information giving rise to fully informed consumers, and choice presupposes freedom of action and the absence of coercion. Watching the banks manipulate their customers into making bad choices brings home the challenges that face those who yearn for honest notice and choice. The lesson Willis offers repeatedly is crucial for information privacy law: companies control the messages that consumers see, and they are masters at manipulation.

And these are banks. Banks! Dinosaurs of the old economy that build websites that users merely suffer through rather than enjoy and whose executives probably think UI and UX are stock ticker symbols. To flip the overdraft protection default, these fusty old companies resorted to costly Jurassic techniques involving the phone, ATM, and email account. Consider how much more consumers are outmatched by the media owners who run today's engaging-to-the-point-of-addictive mobile apps and social networking sites. Online, consumers notice only what these master manipulators want them to notice and choose what they are preordained to choose.

This matters for information privacy a lot. We still rely on notice and choice as the most important tools regulators have to guarantee user privacy. Proposals for tackling new privacy concerns—from location tracking, to remote biometrics, to genomic information, and beyond—continue to center on creating the conditions for meaningful notice and consent. Willis’s article suggests that firms will provide clear notice and obtain meaningful choice only when they see no reason to oppose either choice, which is to say, when it doesn’t count for much. It might be time for regulators to reach for different tools.

We have known for some time that notice and choice are plagued by information quality problems. But Willis's article demonstrates the still unmet need for scholarship that deconstructs the mechanics of how companies exploit these problems to their benefit, subverting individual privacy. This builds on the groundbreaking work of scholars from outside law such as Lorrie Cranor and Alessandro Acquisti. And while legal scholars like Ryan Calo have built on this work (and I've started to do so, too), we need more of this. We need thorough and careful accounts of the landscape of notice and choice. With this article, this very necessary research agenda now has a fine blueprint.

 
 

From Behind the Great Wall: FOI in China – and About Method

At a conference hosted jointly by Peking University Law School and the Carter Center, former US President Jimmy Carter, as recently reported by freedominfo.org (itself a highly recommendable source of information on access to government information, by the way), encouraged the Chinese government "to take critical steps toward institutionalizing the right to information, including reviewing the experiences to date under the current Open Government Information regulation and developing it into a more powerful legal regime with the statutory strength of a law."

What these "Regulations of the People's Republic of China on Open Government Information of April 5, 2007, effective May 1, 2008" are about, how and why they came into existence, and what is keeping them alive is described in Weibing Xiao's book. According to Xiao, a Professor of Law at Shanghai University of Political Science and Law, the fight against corruption did not cause this development; rather, administrative problems with managing secrecy led to the first tentative research and policy initiatives for greater transparency. These initial steps were then encouraged by an improved information flow environment in which – also in part due to technological developments – information exchanges increased between administrations and between citizens and administrations. Xiao's account suggests a push-model of government information, one which, while encouraged at all levels of government, seems particularly vital at the local level, where it is supported by long-standing and far-reaching administrative reforms.

Beyond this historical-analytical account, I recommend the book for four reasons:

First, the book provides a highly readable account of how sensitive legal-political subjects – sensitive because they are perceived as overly dependent on Western concepts of the democratic law state – find their way into Chinese research agendas, how they are challenged, and how they eventually legitimize themselves. The reader interested in public law will note – perhaps with some surprise – the importance of the Chinese Constitution and administrative law reform in this context.

Secondly, while it is obvious that a lot remains to be done in China, the account also serves as a mirror for the historical dependencies and shortcomings in those countries that might see themselves as champions of access to government information. Following the arguments and counter-arguments in China, readers will recall similar debates in their own countries, including references to cherished historical practices of secrecy in the past. Xiao's quotation from Lao Zi (老子) in this context, "People are difficult to govern when there is too much knowledge" (民之难治, 以其智多) (P. 29), may still reflect the thoughts of many government officials here when they are faced with transparency requests.

The third reason for recommending the book goes well beyond its subject proper: in his analysis of the Chinese situation, the author makes constant reference to information flows in society, their structures, their impact on social and political developments, and the importance of how, and with which objectives, regulation should address them. This is information law properly freed from the appearances of technology. And this is why Xiao's book belongs in the Cyberlaw section: we still have to make substantial efforts to discover what is accidental about technology and what is the essence of information and its flow. Reading Xiao and his account of the Chinese developments, we get a critical assessment of some of those approaches. While not yet providing a detailed methodology himself, he encourages us to look more closely at what law does to information flows.

Let me add a fourth argument, a puzzling one perhaps, based perhaps even on a cultural misunderstanding, but so intriguing for someone like me to whom English – as to Professor Xiao – has not been the first language. Coping with a second language limits the capabilities you may have in your first language to hide behind the flash work of oratory. You are forced to state your case simply and argue it closely, point by point. If then it is done so elegantly, economically and intellectually pleasingly, as by Xiao, it does make refreshing reading. – This, by the way, is the reason why I prefer to read German philosophers in their English translations.

 
 

Free Access to Law – Is It Here to Stay? Research Publications of Interest for Anybody Who Believes in the Rule of Law

• Free Access to Law - Is it Here to Stay?, Local Researcher's Methodology Guide (2010).
• Free Access to Law - Is it Here to Stay?, Environmental Scan Report (2010).
• Free Access to Law - Is it Here to Stay?, Good Practices Handbook (2011).

“What is a Legal Information Institute when the transcripts of judgments are refused for publication – even by the courts themselves – by the company contracted to provide the transcription service on some very shady grounds of copyright?” That is one of the questions lingering in the wake of a very ambitious recent Free Access to Law project.1

The mission of the Legal Information Institutes (LIIs) is to maximize free access to public legal information such as legislation and case law from as many countries and international institutions as possible. To that end they produced the publications listed above. The "Local Researcher's Methodology Guide" explains the reasons for the "Free Access to Law – Is It Here to Stay?" project in detail, and then provides instructions for researchers, including an "environmental scan matrix" and associated questionnaires.

The "Environmental Scan" is the first component of the "Free Access to Law – Is it Here to Stay?" global study on the sustainability of Free Access to Law initiatives. This report looks at the situation for the free and open distribution of legal information in Kenya, Uganda, Hong Kong, India, Indonesia, the Philippines, and Canada. The collected information includes a brief overview of each legal system, its legal environment (with a focus on copyright law, privacy, and secrecy-based restrictions), legal education, and the legal research environment (both online and off), and situates all of this in the context of each national economy.

The "Good Practices Handbook" adds depth and clarity to the instructions set out in the "Local Researcher's Methodology Guide." All three reflect the output of an undertaking that Mariya Badeva-Bright describes as an effort to "link two central concepts – the concept of success of a free access to law project and the concept of sustainability." The objective is that by making law freely available, a legal information institute (LII) produces outcomes that benefit its target audience, thereby creating incentives among the target audience or other stakeholders to sustain the LII's ongoing operations and development.

The written portions of this project reflect an extensive and very thoughtful effort to map out ways that people can work toward consistent archiving and dissemination of legal information so that citizens have access to their own laws. As Kerry Anderson has noted in a VoxPopuLII blog post, Free Access to Law matters the most to the poorest and most unstable communities:

Zimbabwe has not been able to publish its Law Reports since 2003 owing to the devastating collapse of infrastructure resulting from the political situation. Swaziland last published Law Reports in the 1980s. Many other countries have out-of-date Law Reports with no resources to continue the Law Reporting function. Others have written more eloquently than I on the necessity of having contextual law, particularly in common law jurisdictions. The point is singular and self-evident: how can the laws of a country be known if the laws of the country are not available?

Some of the project’s lessons are that “digitization of print materials and/or manual capturing of metadata … cannot be deemed a successful strategy in the long run – it is simply uneconomical to continue to do so past a certain stage. Engaging stakeholders in education of use of technology or development of IT solutions to support workflows for delivering of judgments or passing legislation may be a way of dealing with issues of digitizing and automating delivering of law to the public. Standards of preparation of legal material … adopted by all originators of legal information in a particular jurisdiction, will ease its dissemination and re-use.”2 In other words, dead trees are not nearly as helpful as electrons, even in very poor countries, in providing access to law. Part of me wants to resist this conclusion even though I concede that it is undoubtedly correct. Paper publications may be traditional, resilient, and fairly copyright-restriction-defying once they are published but they add a cumbersome step to any knowledge-distribution chain. And as we learn from these publications, money for Free Access to Law initiatives is scarce.

It may be, as Eve Gray concluded, that “[t]he most promising and sustainable future looks to be in small and innovative digital companies using open source publishing models, offering free content as well as value-added services for sale.” But librarians are a hardy and relentless people, and if there is a way to bring a Legal Information Institute to every corner of the globe, these are the people who will figure it out.



  1. See Kerry Anderson, What is a Legal Information Institute.
  2. Mariya Badeva-Bright, Is Free Access to Law here to stay?
 
 

CyberHealth: Computerizing Personalized Comparisons of Treatment Effectiveness

Economists are beginning to lose faith in technological progress. As one wag puts it, instead of cancer cures and “Captain Kirk & the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.” Even formidable companies like Google have fled the health field, daunted by the complex legal environment. Some have called for radical deregulation as a solution. But a more viable approach is to turn to the work of some of the smart, committed, and impartial legal scholars who are pioneering the field of cyberhealth law. Particularly instructive is Sharona Hoffman & Andy Podgurski’s article, Improving Health Care Outcomes through Personalized Comparisons of Treatment Effectiveness Based on Electronic Health Records.

In an information economy, even cheesecake can be optimized using data-driven methodology. Unfortunately, leading health care providers often resist such methods of improvement. Pharmaceutical firms have sometimes continued to market drugs even after reports emerge that undermine the rationale for taking the drug, let alone paying for it. That troubling method of attaining short-term profits at the cost of long-term sustainable business models needs to be countered by sophisticated methods of analyzing (and disseminating) data on the real effects of medical interventions. Hoffman and Podgurski help develop a legal and technical framework for assuring that happens.

Promoting Pharmacovigilance

The President’s Council of Advisors on Science and Technology (PCAST) has endorsed aggressive use of health data to open up new research opportunities. The PCAST authors conclude that many clinical research studies today are “out of date before they are even finished,” “burdensome and costly,” and too narrowly focused. They endorse health information technology that enables “syndromic surveillance,” “public health monitoring,” and “adverse event monitoring” by aggregating observational data.

The free flows of data elevated to constitutional status in Sorrell v. IMS Health Inc. may eventually improve pharmacovigilance, including efforts to understand the effectiveness of drugs on a population-wide level, beyond clinical research. But it will take a great deal of computing power for them to do so. Cyberlawyers will need to rethink how privacy, IP, and health law interact in order to help researchers and physicians make the most of the oncoming data deluge.

Hoffman and Podgurski have detailed how advanced programs of observational research on effectiveness could work. They explain the benefits of personalized comparisons of treatment effectiveness (PCTEs), a form of personalized medicine that uses information obtained through a large database search to “find a cohort for a patient needing treatment.” Their proposal for new forms of personalized medicine takes to the individual level what has often been envisioned for population-wide analysis:

We propose the development of a broadly accessible framework to enable physicians to rapidly perform, through a computerized service, medically sound personalized comparisons of the effectiveness of possible treatments for patients’ conditions. A personalized comparison of treatment effectiveness . . . for a given patient (the subject patient) would be based on data from EHRs of a cohort of patients who are similar to the subject patient (clinically, demographically, genetically), who received the treatments previously and whose outcomes were recorded. (P. 425.)

As they explain, such a database query could identify “for a given patient, an appropriate reference group (cohort) of similar, previously treated patients whose EHRs would be analyzed to choose the optimal treatment for the patient at issue.”  Their proposal is a logical extension of an idea promoted in an Institute of Medicine report known as the “Wilensky Proposal,” which called for more targeted comparative effectiveness research.  Research has already demonstrated that pharmacogenetic algorithms can sometimes outperform algorithms that consider only clinical factors.
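To make the mechanics concrete, here is a minimal sketch of the kind of cohort query and outcome comparison the authors envision. The record fields, similarity criteria, and function names are illustrative assumptions made for this review, not Hoffman and Podgurski’s specification.

# Hypothetical PCTE-style cohort query over de-identified EHR data (Python).
from dataclasses import dataclass
from statistics import mean

@dataclass
class EHR:
    age: int
    sex: str
    diagnosis: str    # e.g., an ICD-style code
    genotype: str     # e.g., a relevant pharmacogenetic marker
    treatment: str
    outcome: float    # recorded outcome on some agreed scale

def find_cohort(subject, records, age_band=5):
    """Select previously treated patients similar to the subject patient."""
    return [r for r in records
            if r.diagnosis == subject.diagnosis
            and r.sex == subject.sex
            and r.genotype == subject.genotype
            and abs(r.age - subject.age) <= age_band]

def compare_treatments(cohort):
    """Average the recorded outcomes for each treatment the cohort received."""
    by_treatment = {}
    for r in cohort:
        by_treatment.setdefault(r.treatment, []).append(r.outcome)
    return {t: mean(vals) for t, vals in by_treatment.items()}

A production system would of course need far richer similarity matching, confounder adjustment, and outcome measures; the point of the sketch is only that the comparison runs over EHRs of previously treated, similar patients rather than over a new prospective trial.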

From Transparency to Intelligibility

Of course, there are challenges to this type of research. Systems must move beyond mere transparency to data entry standards that allow for the intelligibility required by personalized medicine. As Hoffman and Podgurski recognize, “the need to code all presenting comorbidities” and to identify “patients who have the specific condition to be studied” is crucial to data quality. There is a tension between untrammeled innovation by vendors at any given moment and the later, predictable needs of patients, doctors, insurers, and hospitals to compare their records and to transport information from one filing system to another.

For example, one system may be able to understand “C,” “cgh,” or “koff” as “cough,” and may well code it in any way it chooses. But to integrate and to port data, all systems need to be able to translate symptoms, diagnoses, interventions, and outcomes into commonly recognized coding. Competition also depends on data portability: health care providers can only credibly threaten to move their business away from an unsatisfactory vendor if they can transport those records. Patients want their providers to seamlessly integrate records. Hoffman and Podgurski show why Stage 2 of the meaningful use rulemaking is needed to promote a common language of medical recordkeeping. As they recommended in 2008:

[I]t is necessary for all vendors to support what we will call a “common exchange representation” (“CER”) for EHRs.  A CER is an artificial language for representing the information in EHRs, which has well defined syntax and semantics and is capable of unambiguously representing the information in any EHR from a typical EHR system. EHRs using the CER should be readily transmittable between EHR systems of different vendors.  The CER should make it easy for vendors of EHR systems to implement a mechanism for translating accurately and efficiently between the CER and the system’s internal EHR format.
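As a purely illustrative sketch of what such translation might involve, consider the following; the local symptom codes and target terms below are invented for this example and are not the CER Hoffman and Podgurski specify.

# Hypothetical mapping of vendor-local codes into a shared representation (Python).
LOCAL_SYMPTOM_CODES = {"C": "cough", "cgh": "cough", "koff": "cough"}
STANDARD_TERMS = {"cough": "SYMPTOM:COUGH"}

def to_cer(local_record):
    """Translate one vendor-specific record into common, unambiguous terms."""
    cer = {}
    for field, value in local_record.items():
        if field == "symptom":
            plain = LOCAL_SYMPTOM_CODES.get(value, value)
            cer["symptom"] = STANDARD_TERMS.get(plain, "UNMAPPED:" + value)
        else:
            cer[field] = value
    return cer

# Records from two different vendors converge on a single representation:
assert to_cer({"symptom": "cgh"}) == to_cer({"symptom": "koff"})

Whatever its concrete syntax, the value of a CER lies in that convergence: once every vendor can emit and read the shared form, records can be ported, integrated, and compared without guessing at each system’s internal shorthand.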

There are also important opportunities for standardization in the security field.  The discussion can quickly become technical, but the underlying purpose is clear: to develop some standard forms of interacting in a realm where “spontaneous order” is unlikely to arise and network effects (as well as what David Grewal describes as network power) could lead to the lock-in of suboptimal patterns of data storage and transfer.

Better health information technology infrastructures in the United States can enable forms of surveillance that are more rigorous, comprehensive, and actionable in the world of policy, and more user-friendly for patients. Rather than getting between doctor and patient, advanced EHR systems stand poised to silently monitor and improve their relationship. The same record systems that are designed to digitize health diagnoses and interventions can also generate outcome data if they are configured appropriately. Such data would help ensure patients and authorities are truly informed about the risks and benefits of drugs.

Hoffman and Podgurski are among the first legal academics to convincingly merge the literatures of health system transformation and cyberlaw. They suggest the practical feasibility of productivity gains in the health sector that we usually associate only with Silicon Valley. Just as the U.S. Department of Homeland Security (“DHS”) and the National Security Agency (“NSA”) have advanced domestic intelligence capabilities by querying distributed databases from diverse public and private sector partners, we can now apply such technology toward improving population health. Hoffman and Podgurski demonstrate a “proof of concept” for reallocating more of these technologies from the diminishing marginal returns of seeking an “enemy within” to fighting the truly pervasive menace of disease.

 
 

If Code Is Law, Then Coders Are Lawyers

E. Gabriella Coleman, Coding Freedom: The Ethics and Aesthetics of Hacking (Princeton University Press, 2012).

Legal academics who write about norms risk becoming armchair anthropologists.  But the armchair is precisely the place anthropologists avoid; good ethnography cannot be done alone.  As one of my college professors said, “The specific antidote to bullshit is field work.”

E. Gabriella Coleman has spent much of her career doing field work with a computer. Her first monograph, Coding Freedom: The Ethics and Aesthetics of Hacking, is based on an extended study of free software programmers. She lurked on their email lists, hung out in their IRC chat rooms, went to their conferences (she even helped organize one herself), and spent countless hours simply talking with them about their work. The result is a fascinating study of a community substantially defined by its tense engagement with law. (More recently, she has been closely observing the anarchic, carnivalesque collective paradoxically known as Anonymous, with equally fascinating results.)

On one level, this is a book to savor simply for its empathetic ethnography. The “hackers” it describes (despite the pejorative, transgressive overtones that years of media overreaction have given the term) play at the intersection of aesthetic beauty and practical utility. Coleman describes coding as a species of creative craft work, with a perceptive eye for detail. One of the best passages is dedicated to a close reading of a code snippet written by the free-software advocate Karl Fogel, in which he grinds his teeth in frustration at having to work around a bad design decision in another piece of software. He creates a function named “kf-compensate-for-fucking-unbelievable-emacs-lossage” to solve the problem. As Coleman explains, quoting Erving Goffman:

Fogel’s code is an apt example of “face work”–when a hacker is sanctioned to perform a “line,” which is the “pattern of verbal and nonverbal acts by which he expresses his view of the situation and through this his evaluation of the participants, especially himself.” Within such a presentation, hackers can declare and demarcate their unique contribution to a piece of software while at the same time proffering technical judgment. One may even say that this taunting is their informal version of the academic peer-review process.  In this particular case, Fogel is declaring the code he patched as an utter failure of the imagination.

Anyone who thinks about programmers, open source, online communities, or the politics of intellectual property should have a copy of Coding Freedom on the shelf.  It is an invaluable portrait of how free-software coders work, individually and collectively.

What makes Coding Freedom truly stand out, however, is that “free software hacker” is an identity significantly constituted in relation to the law.  To write free software is to choose to release one’s code using a carefully crafted copyright license; Coleman’s hackers elevate this legal issue to prime significance in their working lives.  Coding Freedom is thus both the oft-told story of a legal idea–free software–and the lesser-known story of how numerous hackers, following personal but parallel tracks, have engaged with copyright law.

Coleman describes two crossing trajectories in copyright: the rise of an increasingly expansive domestic and international copyright system and the simultaneous rise of the free software movement. The former is bent on restricting uses; the latter on enabling them. The two collided in the early 2000s in the fights over the implementation of the DMCA, particularly the DeCSS case and the arrest of Dmitry Sklyarov. The result was the politicization of copyright in code: inspired by legal scholars and free software evangelists, many hackers saw themselves as participants in a struggle against a repressive copyright system.

Coding Freedom makes these familiar stories fresh. Free-software hackers were receptive to a fight-for-your-rights narrative precisely because they were already embedded in a professional context that foregrounded the political and ethical implications of copyright law. What is more, they engaged with copyright law as law, drafting licenses to achieve free-software goals, endlessly debating the minutiae of license compliance, and critiquing copyright’s inconsistencies with the playful creativity of appellate litigators.

Coleman artfully demonstrates how the anti-DMCA trope of “code is speech” resonated with hackers’ lived experiences creating software alone and together. They were used to communicating both their individual expression and their shared endeavor in source-code comments and elegant algorithms. When Seth Schoen critiqued the DMCA’s prohibition on circumvention tools by rewriting DeCSS in haiku, he was drawing on a long hacker tradition (also described by Coleman) of linguistic play, of writing programs not merely to compute but also to amuse.

This leads into a thoughtful discussion of the extent and limits of a hacker-oriented critique of the existing order of things.  On the one hand, some coders have been politicized by their engagement with copyright, and connect it to a larger transformative movement concerned with the intellectual commons and global access to knowledge.  On the other, free-software licenses are built around a deep core of apolitical neutrality: they pointedly refuse to take any position on the relative worth of what downstream users use the software for.  Feeding the homeless is fine; so is building doomsday devices.

Coding Freedom offers a nuanced analysis of hackers’ sometimes-closer, sometimes-further dance with liberal ideals, particularly in its clever discussion of how Debian (a leading free software project) cycles between majoritarian democracy, technical meritocracy, and informal consensus. None of these governance modes is fully satisfactory, either ideologically or pragmatically: each has broken down as Debian has gone through growth spurts and awkward adolescent phases. But at the same time, each of them reflects larger commitments its members hold dear: equality, excellence, and collaboration.

Debian, which Coleman describes as a Coverian nomos, is the heart of the book. Its social practices of production, education, and self-governance receive careful treatment. In an early chapter, Coleman convincingly argues that the hard work of creating and sustaining hacker communities does not happen solely online. She gives a thoughtful description of “cons,” the regular gatherings at which hackers come together to teach each other, discuss project direction, code intensely, and socialize. She shows that a con is a ritual-laden lifeworld, an intense experience that helps hackers understand themselves as part of a larger collaborative collective. These and other in-person interactions are an important part of the glue that makes the global networked hacker public possible; online and offline appear as complements in her story, rather than as modalities in opposition.

Coleman’s portrait of how hackers become full-fledged members of Debian is eerily like legal education.  They learn a specialized subset of the law, to be sure, with a strong and narrow emphasis on a thin slice of copyright.  But the hackers who are trained in it go through a prescribed course of study in legal texts, practice applying legal rules to new facts, learn about legal drafting, interpretation, and compliance, and cultivate an ethical and public-spirited professional identity.  There is even a written examination at the end.  Law schools and regulators ought to be interested in her careful portrait of informal but successful legal training in a lay community.

There is a deep parallel between software and law as formal rule-bound systems of control and creation.  Coding Freedom breaks important ground in teasing out some of the implications of this connection.  Hopefully others will also take up the project.