Sep 10, 2013 Rebecca Tushnet
Kristelia Garcia, Private Copyright Reform, 20 Mich. Telecomm. & Tech. L. Rev. (forthcoming 2013), available at SSRN.
Differential regulation of different technologies is baked into many of our laws, often on the basis of outdated assumptions, relative political power at the time of enactment, or other quirks. Internet exceptionalism is common, but perhaps nowhere more galling than as applied to music in the US, where the interests of terrestrial radio and music copyright owners combined to produce a regime so tangled that to call it ‘Byzantine’ is an insult to that empire.
Kristelia Garcia dives deep into the details of digital music law, focusing on two case studies that she finds promising and troubling by turns. While opting out of the statutory scheme may well be locally efficient and risk-minimizing for the participants, some of the gains come from cutting artists out of the benefits. Other third parties may also be negatively affected if statutorily set copyright royalty rates are influenced by these private deals without full recognition of their specific circumstances, or if adverse selection leaves the collective rights organizations (CROs) that administer music rights too weak to protect the interests of the average performer or songwriter. Garcia’s paper suggests both that scholars must keep an eye on business developments that can make the law on the books obsolete and that specific legal changes are needed to protect musicians, songwriters, and internet broadcasters as part of the dizzying pace of change in digital markets.
Garcia reviews the complex structure of music law, which defies summary. Of particular interest, § 114 of the Copyright Act provides for statutory licensing of the digital performance right in sound recordings in some circumstances, with rights administered by an entity known as SoundExchange. Despite serving an ever-converging audience indifferent to how its music is delivered, radio providers pay stunningly varying amounts. The Internet radio provider Pandora pays more than 50% of its revenue in performance royalties, while satellite and cable providers pay roughly 1/6th the rate Pandora pays, and terrestrial radio broadcasters pay nothing at all. But §114 also allows copyright owners to negotiate with digital performing entities and agree on royalty rates and license terms that replace the statutory licenses.
Likewise, §115 allows voluntary negotiations for royalties that replace the statutory license for musical works. Historically, performance rights organizations, primarily ASCAP and BMI, have administered the performance rights in musical works. These private organizations are not creatures of statute like SoundExchange, but they operate under antitrust decrees, with “rate courts” that review their licensing fees.
In June 2012, Taylor Swift’s record label, Big Machine, cut a separate performance rights deal with radio giant Clear Channel, circumventing SoundExchange. Surprisingly, Clear Channel agreed to pay performance royalties even for “spins” on terrestrial radio, for which there is no sound recording performance right. In return, Clear Channel received a lower rate for digital performances, and possibly other special considerations such as access to unique content. It received certainty in the form of a rate calculated as a share of revenue, rather than the less predictable per-play rate that would be owed SoundExchange under the statutory license. Big Machine may take a short-term loss, but may also receive preferential treatment for its artists. Sharing revenue means that Clear Channel no longer has an incentive to limit the number of Big Machine songs it plays, as it does under the statutory per-play royalty—and that means Big Machine is likely to get more airtime to promote its artists, which may result in increased cross-promotional opportunities.
Presumably, the parties wouldn’t have bargained around the law unless the contract benefited them both. But that’s not the end of the story. Under the compulsory license, Clear Channel would pay SoundExchange a set fee per digital spin of “We Are Never Ever Getting Back Together”; SoundExchange would distribute most of that directly to featured and non-featured performing artists (the latter include backup singers and session musicians, who have far less negotiating power than stars) and to the copyright holder, in proportions required by the statute. By opting out of §114, Clear Channel and Big Machine avoided the need to pay artists. (As Garcia notes, in theory artists could eventually claw back a share of such deals from record labels by contract, but the point of §114’s required distributions was Congress’s determination that artists, especially non-featured performers, lacked sufficient bargaining power to secure fair compensation in the market.) They also avoided the need to pay a share of SoundExchange’s overhead. These savings are, from others’ perspectives, externalities that the parties imposed on nonparticipants.
The deal is also an attempt to ease the grossly unjustified disparity between digital and terrestrial broadcasters, but only for Clear Channel, and the fact that legal discrimination against other Internet broadcasters remains in place creates further complications. If Clear Channel can pay less than the near-crippling performance royalties other digital broadcasters must pay, it obtains a marked competitive advantage. The private deal additionally allows it to predict its costs with much greater certainty—through both congressional and administrative interventions, statutory rates have changed by orders of magnitude, and they have to be re-set every few years. Separately, if the long-delayed but much-desired general public performance right for sound recordings is ever enacted, requiring terrestrial broadcasters to pay royalties to record labels, Clear Channel could well end up with an advantage over its terrestrial competitors. Even if that gamble doesn’t pay off, if the future really is in digital radio, the lower digital rate may justify paying general performance royalties.
Garcia’s second example comes from the musical work side. Sony/ATV, a music publisher, accepted a lower performance royalty rate from DMX, a digital music service that provides music programming for retail stores and restaurants, in exchange for a large advance. Again, this deal enabled the parties to avoid paying artists, who evidently lack the power to force contractual change.
Moreover, songwriters—whether contracted to Sony/ATV or not—lost out because of the deal. DMX used the negotiated royalty rate (excluding the advance) to convince the rate court to lower rates across the board, reflecting the “market value” supposedly expressed in the Sony/ATV deal. A similar dynamic is possible with the Clear Channel/Big Machine agreement, since the market value of the digital performance right looks lower, despite the terrestrial performance royalties being paid as part of the overall package.
Sony/ATV ultimately withdrew all its digital content from ASCAP, and that withdrawal lowered the value of an ASCAP license by shrinking the size of its repertoire and by increasing transaction costs for future licensees. This is going well for Sony/ATV—in early 2013, it signed a direct license agreement with Pandora that increased its royalties by 25% over the ASCAP rate—but that’s an unsurprising consequence of adverse selection. When the strongest participants with the most valuable catalogs opt out in order to cut their own deals, the average quality and value of the remaining catalogs goes down, making the CRO less desirable. And this is true even if, as a group and on average, the participants would maximize their return by banding together.
These private deals might seem to be more efficient than statutory licenses or CROs (which operate in the shadow of government regulation, including antitrust). But they are fundamentally shaped by pervasive government regulation of the music business for the past century. They are no more proof of the superiority of the “free market” than are commercially successful technologies first developed through government R&D funding. Indeed, to the extent that the rest of the market operates under pervasive legal constraints, these exceptional deals may be even more distortionary.
Still, Garcia argues, the benefits of certainty shouldn’t be discounted, especially those available through a revenue-based rather than per-play model. (Statutory rates are also capable of using revenue-based metrics, of course, and this has often been a part of proposals to fix the statutory digital performance royalty.) However, she points out that the interests of institutional stability are unlikely to coincide with the interests of artists.
Garcia uses her examples to examine arguments made by Robert Merges and Mark Lemley about the role of property rules and liability rules in encouraging the formation of private groups that engage in blanket licensing, as the CROs do: Merges argues for property rules as incentivizing the most efficient private arrangements, while Lemley contends that owners and users can and do also contract around liability rules where that’s more efficient. The fact that parties now bargain around both compulsory licenses and private entities such as ASCAP, Garcia argues, complicates the analysis. Among other things, she suggests, the existing structures operate as starting points allowing the parties to frame defection in mutually agreeable terms, and provide a backup solution if negotiations fail. “Neither party has to commit to the terms vis a vis all partners, nor does either party have to engage in costly, multiple negotiations, since all extradeal licenses can continue under the compulsory regime.” (P. 40.)
This portion of the paper could have used more discussion of adverse selection, as well as the related issue that the music business includes a few massive, oligopolistic entities with significant market power. Sony/ATV already represents a large bloc of artists, or at least of works. In such a concentrated market, the CRO may seem like just another layer of bureaucracy. But smaller or new entrants may join a CRO like ASCAP on nondiscriminatory terms, an option Sony/ATV generally does not offer. Instead of disintermediation and diversity, the current market structure for digital music seems to favor greater concentration—a problem explored by Tim Wu in his recent book.
Legal innovation often occurs without formal legal change. The easiest kind to see is when enterprising lawyers discover a formerly neglected cause of action and set off a wave of lawsuits in the same area. Here, the innovation is different—market participants didn’t bother to opt out of the statutory schemes for decades, but the visibility of Clear Channel’s and Sony/ATV’s decisions means that others are likely to at least consider similar moves. In a time of diminished profits and upheaval in traditional business models, the opportunity to lock in royalty rates regardless of changed laws, rate court rulings, and the like may prove tempting.
Garcia offers two tweaks to existing law in the absence of comprehensive reform (which even its proponents agree will take a very long time). First, she argues, parties who circumvent compulsory licenses should be required to follow the statutorily mandated distributions to artists, so that negotiating parties can’t benefit simply by writing the artists out of the deal. Second, all aspects of any private agreement used as evidence to set statutory rates should be fully disclosed, to avoid misrepresenting a deal with extra components as a market rate. These are modest proposals, and quite convincing, though the second in particular is likely to draw objections grounded in protecting trade secrets and business models.
A larger lesson is that arbitrage opportunities, such as those produced by the differing treatment of terrestrial and digital performances, won’t be ignored forever, especially when sophisticated businesses are involved. The Internet offers many opportunities to experiment with business models. But without a sensible legal structure, the experimentation may only be in aid of externalizing costs on other parties and suppressing competition.
May 20, 2013 Paul Ohm
Lauren Willis, When Nudges Fail: Slippery Defaults, 80 U. Chi. L. Rev. ___ (forthcoming 2013), available at SSRN.
If Jotwell is meant to surface obscure gems of legal scholarship, which might go unnoticed otherwise, I might be missing the point by highlighting a work forthcoming in the not-so-obscure University of Chicago Law Review on the au courant topics of nudges and libertarian paternalism. But Lauren Willis’s new article, When Nudges Fail: Slippery Defaults, might escape the attention and acclaim it deserves as a work of information privacy law, so it is in that field I hope to give the article its due.
Willis’s article takes on the pervasive idea that all default choices are sticky. Defaults can sometimes be sticky, but Willis carefully deconstructs the economic, social, and technological preconditions that tend toward stickiness, and then demonstrates how firms can manipulate those conditions to render defaults quite slippery.
This article deserves to become a standard citation in information privacy law scholarship; it is important in at least three ways. Most obviously, the article uses online behavioral advertising and the Do Not Track debate as a recurring example, revisiting it throughout, and it makes a very useful contribution to that debate, which continues to rage.
Deeper, and more generally, the article delivers a blow—perhaps fatal—to the age-old “opt-in versus opt-out” debate. Should new, privacy invasive practices affect only people who opt-in to them, or should they instead apply to all except those who opt-out of them? Willis helps us understand that this debate, which has generated so much energy and discussion, may matter less than we think. “When firms have significant control over the process for opting out or the context in which the defaults are presented, firms can undermine the stickiness of policy defaults.” In other words, firms can, and do, encourage, cajole, push, and deceive customers into opting in when they rationally should not.
As proof, the heart of the article presents a lengthy examination of failed attempts by regulators to limit what they have seen as predatory bank practices surrounding checking account overdraft coverage. It might seem an unequivocal convenience to have banks cover rather than reject ATM withdrawals and debit card payments from accounts with insufficient funds, but because of the fees banks charge for this “service”—$20 or even more—these amount to “low risk, high cost loan[s].” Low risk, because the bank is paid back automatically with the next deposit; high cost, because the fees in a typical example Willis gives amount to an effective 7,000% APR. In some cases, these banks offer alternative services that provide basically the same protection at orders of magnitude less cost. For one who worries about consumer welfare, this is a maddening story, told with detail and care. Banks lied about the benefits of the coverage. They buried the holdouts under a flood of paper and harassed them on the telephone. And in the end, they spurred droves of customers—according to one study, 75% of all customers and 98% of customers who overdraft more than ten times per year—to opt in.
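To see how an ordinary overdraft fee translates into a four-digit APR, a quick back-of-the-envelope calculation helps. The figures below are hypothetical (they are not Willis’s own example), but they show how a flat $20 fee on a small shortfall, repaid within days, annualizes into thousands of percent:

```python
def effective_apr(fee, principal, days_outstanding):
    """Annualize an overdraft fee as if it were interest on a short-term loan."""
    return (fee / principal) * (365 / days_outstanding) * 100

# Hypothetical figures: a $20 fee to cover a $20 shortfall, repaid in 5 days.
apr = effective_apr(fee=20, principal=20, days_outstanding=5)
print(f"{apr:.0f}% APR")  # prints "7300% APR"
```

On these assumed numbers the fee annualizes to 7,300%, the same order of magnitude as the roughly 7,000% figure Willis reports.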
Going forward, one will be able to skim the first few footnotes of any article that uses the words “privacy,” “opt-in,” and “opt-out” to apply the “Willis Test.” Any such article that doesn’t cite this piece probably needn’t be taken seriously.
But at its deepest, and to my mind most interesting, level this article chips away at the faith we have placed in notice and choice, which is to say, at the foundation of most contemporary information privacy laws. Notice implies the transmission of accurate and fair information giving rise to fully informed consumers, and choice presupposes freedom of action and the absence of coercion. Watching the banks manipulate their customers into making bad choices brings home the challenges that face those who yearn for honest notice and choice. The lesson Willis offers repeatedly is crucial for information privacy law: companies control the messages that consumers see, and they are masters at manipulation.
And these are banks. Banks! Dinosaurs of the old economy that build websites that users merely suffer to use rather than enjoy and whose executives probably think UI and UX are stock ticker symbols. To flip the overdraft protection default, these fusty old companies resorted to costly Jurassic techniques involving the phone, ATM, and email account. Consider how much more consumers are outmatched by the media owners who run today’s engaging-to-the-point-of-addictive mobile apps and social networking sites. Online, consumers notice only what these master manipulators want them to notice and choose what they are preordained to choose.
This matters for information privacy a lot. We still rely on notice and choice as the most important tools regulators have to guarantee user privacy. Proposals for tackling new privacy concerns—from location tracking, to remote biometrics, to genomic information, and beyond—continue to center on creating the conditions for meaningful notice and consent. Willis’s article suggests that firms will provide clear notice and obtain meaningful choice only when they see no reason to oppose either one, which is to say, when it doesn’t count for much. It might be time for regulators to reach for different tools.
We have known for some time that notice and choice are plagued by information quality problems. But Willis’s article demonstrates the still unmet need for scholarship that deconstructs the mechanics of how companies exploit these problems to their own benefit, subverting individual privacy. This builds on the groundbreaking work of scholars from outside law such as Lorrie Cranor and Alessandro Acquisti. And while legal scholars like Ryan Calo have built on this work (and I’ve started to do so, too), we need more of this. We need thorough and careful accounts of the landscape of notice and choice. With this article, this very necessary research agenda now has a fine blueprint.
Mar 18, 2013 Herbert Burkert
At a conference hosted jointly by Peking University Law School and the Carter Center, former US President Carter (as recently reported by freedominfo.org, itself a highly recommendable source on access to government information) encouraged the Chinese government “to take critical steps toward institutionalizing the right to information, including reviewing the experiences to date under the current Open Government Information regulation and developing it into a more powerful legal regime with the statutory strength of a law.”
What these “Regulations of the People’s Republic of China on Open Government Information of April 5, 2007, effective May 1, 2008” are about, how and why they came into existence and what is keeping them alive, is described in Weibing Xiao’s book. According to Xiao, a Professor of Law at Shanghai University of Political Science and Law, the fight against corruption did not cause this development; rather, administrative problems with managing secrecy led to the first tentative research and policy initiatives for greater transparency. These initial steps were then encouraged by an improved information flow environment in which – also in part due to technological developments – information exchanges increased between administrations and between citizens and administrations. Xiao’s account suggests a push-model of government information, one which while being encouraged for all levels of government seems to be particularly vital on the local level, where it is supported by long-standing and far-reaching administrative reforms.
Beyond this historical-analytical account I recommend the book for four reasons:
First, the book provides a highly readable account of how sensitive legal-political subjects – sensitive because they are perceived as overly dependent on Western concepts of the democratic law state – find their way into Chinese research agendas, how they are challenged and how they eventually legitimize themselves. The reader interested in public law will note – perhaps with some surprise – the importance of the Chinese Constitution and administrative law reform in this context.
Second, while it is obvious that a lot remains to be done in China, the account also serves as a mirror for the historical dependencies and shortcomings in those countries that might see themselves as champions of access to government information. Following the arguments and counter-arguments in China, readers will recall similar debates in their own countries, including references to cherished historical practices of secrecy in the past. Xiao’s quotation from Lao Zi (老子) in this context, “People are difficult to govern when there is too much knowledge” (民之难治, 以其智多) (P. 29), may still reflect the thoughts of many government officials here when they are faced with transparency requests.
The third reason for recommendation goes well beyond the subject proper of the book: In his analysis of the Chinese situation the author makes constant references to information flows in society, their structures, their impact on social and political developments and to the importance of how and with which objectives to address them by regulation. This is information law properly freed from the appearances of technology. And this is why Xiao’s book belongs in the Cyberlaw section: We still have to make substantial efforts to discover what is accidental about technology and what is the essence of information and its flow. Reading Xiao and his account of the Chinese developments we get a critical assessment of some of those approaches. While not yet providing a detailed methodology himself, he encourages us to look more closely at what law does to information flows.
Let me add a fourth argument, a puzzling one perhaps, based perhaps even on a cultural misunderstanding, but so intriguing for someone like me to whom English – as to Professor Xiao – has not been the first language. Coping with a second language limits the capabilities you may have in your first language to hide behind the flash work of oratory. You are forced to state your case simply and argue it closely, point by point. If then it is done so elegantly, economically and intellectually pleasingly, as by Xiao, it does make refreshing reading. – This, by the way, is the reason why I prefer to read German philosophers in their English translations.
Mar 1, 2013 Ann Bartow
“What is a Legal Information Institute when the transcripts of judgments are refused for publication – even by the courts themselves – by the company contracted to provide the transcription service on some very shady grounds of copyright?” That is one of the questions lingering in the wake of a very ambitious recent Free Access to Law project.
The mission of the Legal Information Institutes (LIIs) is to maximize free access to public legal information, such as legislation and case law, from as many countries and international institutions as possible. To that end they produced the publications linked above. The “Local Researcher’s Methodology Guide” explains the reasons for the “Free Access to Law – Is It Here to Stay?” project in detail, and then provides instructions for researchers, including an “environmental scan matrix” and associated questionnaires.
The “Environmental Scan” is the first component of the “Free Access to Law – Is it Here to Stay?” global study on the sustainability of Free Access to Law initiatives. This report looks at the situation for the free open distribution of legal information in Kenya, Uganda, Hong Kong, India, Indonesia, the Philippines, and Canada. The collected information includes a brief overview of each legal system, the legal environment (with a focus on copyright, privacy, and secrecy-based restrictions), legal education, and the legal research environment (both online and off), and situates all of this in the context of each national economy.
The “Good Practices Handbook” adds depth and clarity to the instructions set out in the “Local Researchers Methodology Guide.” All three reflect the output of an undertaking that Mariya Badeva-Bright describes as an effort to “link two central concepts – the concept of success of a free access to law project and the concept of sustainability. The objective is that by making law freely available, a legal information institute (LII) produces outcomes that benefit its target audience, thereby creating incentives among the target audience or other stakeholders to sustain the LII’s ongoing operations and development.”
The written portions of this project reflect an extensive and very thoughtful effort to map out ways that people can work toward consistent archiving and dissemination of legal information so that citizens have access to their own laws. As Kerry Anderson has noted in a VoxPopuLII blog post, Free Access to Law matters most to the poorest and most unstable communities:
Zimbabwe has not been able to publish its Law Reports since 2003 owing to the devastating collapse of infrastructure resulting from the political situation. Swaziland last published Law Reports in the 1980s. Many other countries have out-of-date Law Reports with no resources to continue the Law Reporting function. Others have written more eloquently than I on the necessity of having contextual law, particularly in common law jurisdictions. The point is singular and self-evident: how can the laws of a country be known if the laws of the country are not available?
Some of the project’s lessons are that “digitization of print materials and/or manual capturing of metadata … cannot be deemed a successful strategy in the long run – it is simply uneconomical to continue to do so past a certain stage. Engaging stakeholders in education of use of technology or development of IT solutions to support workflows for delivering of judgments or passing legislation may be a way of dealing with issues of digitizing and automating delivering of law to the public. Standards of preparation of legal material … adopted by all originators of legal information in a particular jurisdiction, will ease its dissemination and re-use.” In other words, dead trees are not nearly as helpful as electrons, even in very poor countries, in providing access to law. Part of me wants to resist this conclusion even though I concede that it is undoubtedly correct. Paper publications may be traditional, resilient, and fairly copyright-restriction-defying once they are published but they add a cumbersome step to any knowledge-distribution chain. And as we learn from these publications, money for Free Access to Law initiatives is scarce.
It may be, as Eve Gray concluded, that “[t]he most promising and sustainable future looks to be in small and innovative digital companies using open source publishing models, offering free content as well as value-added services for sale.” But librarians are a hardy and relentless people, and if there is a way to bring a Legal Information Institute to every corner of the globe, these are the people who will figure it out.
Jan 25, 2013 Frank Pasquale
Economists are beginning to lose faith in technological progress. As one wag puts it, instead of cancer cures and “Captain Kirk & the USS Enterprise, we got the Priceline Negotiator and a cheap flight to Cabo.” Even formidable companies like Google have fled the health field, daunted by the complex legal environment. Some have called for radical deregulation as a solution. But a more viable approach is to turn to the work of some of the smart, committed, and impartial legal scholars who are pioneering the field of cyberhealth law. Particularly instructive is Sharona Hoffman & Andy Podgurski’s article, Improving Health Care Outcomes through Personalized Comparisons of Treatment Effectiveness Based on Electronic Health Records.
In an information economy, even cheesecake can be optimized using data-driven methodology. Unfortunately, leading health care providers often resist such methods of improvement. Pharmaceutical firms have sometimes continued to market drugs even after reports emerge that undermine the rationale for taking the drug, let alone paying for it. That troubling method of attaining short-term profits at the cost of long-term sustainable business models needs to be countered by sophisticated methods of analyzing (and disseminating) data on the real effects of medical interventions. Hoffman and Podgurski help develop a legal and technical framework for ensuring that happens.
Promoting Pharmacovigilance
The President’s Council of Advisors on Science and Technology (PCAST) has endorsed aggressive use of health data to create new research opportunities. The PCAST authors conclude that many clinical research studies today are “out of date before they are even finished,” “burdensome and costly,” and too narrowly focused. They endorse health information technology that is enabled for “syndromic surveillance,” “public health monitoring,” and “adverse event monitoring” by aggregating observational data.
The free flows of data elevated to constitutional status in the case of Sorrell v. IMS Health Inc. may eventually improve pharmacovigilance, including efforts to understand the effectiveness of drugs on a population-wide level, beyond clinical research. But it will take a great deal of computing power for them to do so. Cyberlawyers will need to rethink how privacy, IP, and health law interact in order to help researchers and physicians make the most of the oncoming data deluge.
Hoffman and Podgurski have detailed how advanced programs of observational research on effectiveness could work. They explain the benefits of personalized comparisons of treatment effectiveness (PCTEs), a form of personalized medicine that uses information obtained through a large database search to “find a cohort for a patient needing treatment.” Their proposal for new forms of personalized medicine takes to the individual level what has often been envisioned for population-wide analysis:
We propose the development of a broadly accessible framework to enable physicians to rapidly perform, through a computerized service, medically sound personalized comparisons of the effectiveness of possible treatments for patients’ conditions. A personalized comparison of treatment effectiveness . . . for a given patient (the subject patient) would be based on data from EHRs of a cohort of patients who are similar to the subject patient (clinically, demographically, genetically), who received the treatments previously and whose outcomes were recorded. (P. 425.)
As they explain, such a database query could identify “for a given patient, an appropriate reference group (cohort) of similar, previously treated patients whose EHRs would be analyzed to choose the optimal treatment for the patient at issue.” Their proposal is a logical extension of an idea promoted in an Institute of Medicine report known as the “Wilensky Proposal,” which called for more targeted comparative effectiveness research. Research has already demonstrated that pharmacogenetic algorithms can sometimes outperform algorithms that consider only clinical factors.
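The mechanics of such a cohort query can be sketched in a few lines. The toy example below is purely illustrative: the field names, the similarity rule (same condition and sex, age within five years), and the outcome scores are all my own assumptions for the sketch, not Hoffman and Podgurski’s specification.

```python
# Toy sketch of a personalized comparison of treatment effectiveness (PCTE):
# find a cohort of previously treated patients similar to the subject patient,
# then compare average recorded outcomes by treatment.
from statistics import mean

def find_cohort(subject, ehr_records, max_age_gap=5):
    """Records matching the subject's condition, sex, and rough age band."""
    return [r for r in ehr_records
            if r["condition"] == subject["condition"]
            and r["sex"] == subject["sex"]
            and abs(r["age"] - subject["age"]) <= max_age_gap]

def compare_treatments(cohort):
    """Mean outcome score for each treatment observed in the cohort."""
    by_treatment = {}
    for r in cohort:
        by_treatment.setdefault(r["treatment"], []).append(r["outcome"])
    return {t: mean(scores) for t, scores in by_treatment.items()}

records = [
    {"condition": "hypertension", "sex": "F", "age": 62, "treatment": "A", "outcome": 0.7},
    {"condition": "hypertension", "sex": "F", "age": 60, "treatment": "B", "outcome": 0.9},
    {"condition": "hypertension", "sex": "F", "age": 64, "treatment": "B", "outcome": 0.8},
    {"condition": "asthma",       "sex": "F", "age": 61, "treatment": "A", "outcome": 0.5},
]
subject = {"condition": "hypertension", "sex": "F", "age": 63}
cohort = find_cohort(subject, records)
print(compare_treatments(cohort))  # treatment B's average outcome beats A's here
```

The real proposal would of course match on far richer clinical, demographic, and genetic variables, and would have to confront the data quality and coding-standard problems discussed below; the point of the sketch is only the shape of the query: similar patients in, per-treatment outcome comparison out.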
From Transparency to Intelligibility
Of course, there are challenges to this type of research. Systems must move beyond mere transparency to data entry standards that allow for the intelligibility required by personalized medicine. As Hoffman and Podgurski recognize, “the need to code all presenting comorbidities” and to identify “patients who have the specific condition to be studied” is crucial to data quality. There is a tension between untrammeled innovation by vendors at any given time and the later, predictable needs of patients, doctors, insurers, and hospitals to compare their records and to transport information from one filing system to another.
For example, one system may be able to understand “C,” “cgh,” or “koff” as “cough,” and may well code it in any way it chooses. But to integrate and to port data, all systems need to be able to translate symptoms, diagnoses, interventions, and outcomes into commonly recognized coding. Competition also depends on data portability: health care providers can only credibly threaten to move their business away from an unsatisfactory vendor if they can transport those records. Patients want their providers to seamlessly integrate records. Hoffman and Podgurski show the necessity of Stage II of meaningful use rulemaking to promote a common language of medical recordkeeping. As they recommended in 2008:
[I]t is necessary for all vendors to support what we will call a “common exchange representation” (“CER”) for EHRs. A CER is an artificial language for representing the information in EHRs, which has well defined syntax and semantics and is capable of unambiguously representing the information in any EHR from a typical EHR system. EHRs using the CER should be readily transmittable between EHR systems of different vendors. The CER should make it easy for vendors of EHR systems to implement a mechanism for translating accurately and efficiently between the CER and the system’s internal EHR format.
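The CER idea reduces to a translation layer between each vendor's internal vocabulary and a shared, unambiguous representation. The sketch below is not from the article; the code tables and the "SYMPTOM:" naming scheme are invented purely to illustrate the round trip the authors describe.

```python
# Hypothetical vendor-to-CER translation tables, invented for illustration.
# A real CER would have well-defined syntax and semantics; this toy just
# shows export (internal -> CER) and import (CER -> internal) round-tripping.
VENDOR_TO_CER = {
    "C": "SYMPTOM:COUGH", "cgh": "SYMPTOM:COUGH", "koff": "SYMPTOM:COUGH",
    "fvr": "SYMPTOM:FEVER",
}
CER_TO_VENDOR = {"SYMPTOM:COUGH": "cough", "SYMPTOM:FEVER": "fever"}

def export_record(internal_codes):
    """Map a vendor's internal codes into unambiguous CER terms."""
    try:
        return [VENDOR_TO_CER[c] for c in internal_codes]
    except KeyError as e:
        raise ValueError(f"no CER mapping for internal code {e}") from None

def import_record(cer_terms):
    """Map CER terms back into this system's preferred internal vocabulary."""
    return [CER_TO_VENDOR[t] for t in cer_terms]

print(import_record(export_record(["C", "koff", "fvr"])))
# prints ['cough', 'cough', 'fever']
```

The design point is that each vendor only has to maintain one mapping, to and from the CER, rather than pairwise translations to every competitor's format.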
There are also important opportunities for standardization in the security field. The discussion can quickly become technical, but the underlying purpose is clear: to develop some standard forms of interacting in a realm where “spontaneous order” is unlikely to arise and network effects (as well as what David Grewal describes as network power) could lead to the lock-in of suboptimal patterns of data storage and transfer.
Better health information technology infrastructures in the United States can enable forms of surveillance that are more rigorous, comprehensive, and actionable in the world of policy, and more user-friendly for patients. Rather than getting between doctor and patient, advanced EHR stands poised to silently monitor and improve their relationship. The same record systems that are designed to digitize health diagnoses and interventions can also generate outcome data if they are configured appropriately. Such data would help ensure patients and authorities are truly informed about the risks and benefits of drugs.
Hoffman and Podgurski are among the first legal academics to convincingly merge literatures of health system transformation and cyberlaw. They suggest the practical feasibility of productivity gains in the health sector that we usually associate only with Silicon Valley. Just as the U.S. Department of Homeland Security (“DHS”) and National Security Agency (“NSA”) have advanced domestic intelligence capabilities by querying distributed databases from diverse public and private sector partners, we can now apply such technology toward improving population health. Hoffman and Podgurski demonstrate a “proof of concept” for reallocating more of these technologies from the diminishing marginal returns of seeking an “enemy within” to fighting the truly pervasive menace of disease.
Dec 12, 2012 James Grimmelmann
Legal academics who write about norms risk becoming armchair anthropologists. But the armchair is precisely the place anthropologists avoid; good ethnography cannot be done alone. As one of my college professors said, “The specific antidote to bullshit is field work.”
E. Gabriella Coleman has spent much of her career doing field work with a computer. Her first monograph, Coding Freedom: The Ethics and Aesthetics of Hacking, is based on an extended study of free software programmers. She lurked on their email lists, hung out in their IRC chat rooms, went to their conferences (she even helped organize one herself), and spent countless hours simply talking with them about their work. The result is a fascinating study of a community substantially defined by its tense engagement with law. (More recently, she has been closely observing the anarchic carnival-esque collective paradoxically known as Anonymous, with equally fascinating results.)
On one level, this is a book to savor simply for its empathetic ethnography. The “hackers” it describes–despite the pejorative, transgressive overtones that years of media overreaction have given the term–play at the intersection of aesthetic beauty and practical utility. Coleman describes coding as a species of creative craft work, with a perceptive eye for detail. One of the best passages is dedicated to a close reading of a code snippet written by the free-software advocate Karl Fogel in which he grinds his teeth in frustration at having to work around a bad design decision in another piece of software. He creates a function named “kf-compensate-for-fucking-unbelievable-emacs-lossage” to solve the problem. As Coleman explains, quoting Erving Goffman:
Fogel’s code is an apt example of “face work”–when a hacker is sanctioned to perform a “line,” which is the “pattern of verbal and nonverbal acts by which he expresses his view of the situation and through this his evaluation of the participants, especially himself.” Within such a presentation, hackers can declare and demarcate their unique contribution to a piece of software while at the same time proffering technical judgment. One may even say that this taunting is their informal version of the academic peer-review process. In this particular case, Fogel is declaring the code he patched as an utter failure of the imagination.
Anyone who thinks about programmers, open source, online communities, or the politics of intellectual property should have a copy of Coding Freedom on the shelf. It is an invaluable portrait of how free-software coders work, individually and collectively.
What makes Coding Freedom truly stand out, however, is that “free software hacker” is an identity significantly constituted in relation to the law. To write free software is to choose to release one’s code using a carefully crafted copyright license; Coleman’s hackers elevate this legal issue to prime significance in their working lives. Coding Freedom is thus both the oft-told story of a legal idea–free software–and the lesser-known story of how numerous hackers, following personal but parallel tracks, have engaged with copyright law.
Coleman describes two crossing trajectories in copyright: the rise of an increasingly expansive domestic and international copyright system and the simultaneous rise of the free software movement. The former is bent on restricting uses; the latter on enabling them. The two collided in the early 2000s in the fights over the implementation of the DMCA, particularly the DeCSS case and the arrest of Dmitry Sklyarov. The result was the politicization of copyright in code: inspired by legal scholars and free software evangelists, many hackers saw themselves as participants in a struggle against a repressive copyright system.
Coding Freedom makes these familiar stories fresh. Free-software hackers were receptive to a fight-for-your-rights narrative precisely because they were already embedded in a professional context that foregrounded the political and ethical implications of copyright law. What is more, they engaged with copyright law as law, drafting licenses to achieve free-software goals, endlessly debating the minutiae of license compliance, and critiquing copyright’s inconsistencies with the playful creativity of appellate litigators.
Coleman artfully demonstrates how the anti-DMCA trope of “code is speech” resonated with hackers’ lived experiences creating software alone and together. They were used to communicating both their individual expression and their shared endeavor in source-code comments and elegant algorithms. When Seth Schoen critiqued the DMCA’s prohibition on circumvention tools by rewriting DeCSS in haiku, he was drawing on a long hacker tradition (also artfully described by Coleman) of linguistic play, of writing programs not merely to compute but also to amuse.
This leads into a thoughtful discussion of the extent and limits of a hacker-oriented critique of the existing order of things. On the one hand, some coders have been politicized by their engagement with copyright, and connect it to a larger transformative movement concerned with the intellectual commons and global access to knowledge. On the other, free-software licenses are built around a deep core of apolitical neutrality: they pointedly refuse to take any position on the relative worth of what downstream users use the software for. Feeding the homeless is fine; so is building doomsday devices.
Coding Freedom offers a nuanced analysis of hackers’ sometimes-closer, sometimes-farther dance with liberal ideals, particularly in its clever discussion of how Debian (a leading free software project) cycles between majoritarian democracy, technical meritocracy, and informal consensus. None of these governance modes is fully satisfactory, either ideologically or pragmatically: each has broken down as Debian has gone through growth spurts and awkward adolescent phases. But at the same time, each of them reflects larger commitments its members hold dear: equality, excellence, and collaboration.
Debian, which Coleman describes as a Coverian nomos, is the heart of the book. Its social practices of production, education, and self-governance receive careful treatment. In an early chapter, Coleman convincingly argues that the hard work of creating and sustaining hacker communities does not happen solely online. She gives a thoughtful description of “cons,” the regular gatherings at which hackers come together to teach each other, discuss project direction, code intensely, and socialize. She makes a persuasive case that a con is a ritual-laden lifeworld, an intense experience that helps hackers understand themselves as part of a larger collaborative collective. These and other in-person interactions are an important part of the glue that makes the global networked hacker public possible; online and offline appear as complements in her story, rather than as modalities in opposition.
Coleman’s portrait of how hackers become full-fledged members of Debian is eerily like legal education. They learn a specialized subset of the law, to be sure, with a strong and narrow emphasis on a thin slice of copyright. But the hackers who are trained in it go through a prescribed course of study in legal texts, practice applying legal rules to new facts, learn about legal drafting, interpretation, and compliance, and cultivate an ethical and public-spirited professional identity. There is even a written examination at the end. Law schools and regulators ought to be interested in her careful portrait of informal but successful legal training in a lay community.
There is a deep parallel between software and law as formal rule-bound systems of control and creation. Coding Freedom breaks important ground in teasing out some of the implications of this connection. Hopefully others will also take up the project.
Nov 6, 2012 Ian Kerr
When I first encountered Nora Young’s new book —The Virtual Self—I thought, omg, another book about that?! Don’t get me wrong; earlier this year I devoured Julie Cohen’s Configuring the Networked Self just as quickly as I did Daniel Solove’s The Digital Person back when it first came out.
But if I include an exciting new edited volume by Cynthia Carter Ching and Brian Foley released earlier this year, then by my count there are more than a dozen books in the last couple of years about constructing the self in the digital world.
It’s a good topic and if I had the jets I would read them all. Eventually. But, as soon as I saw that Nora had a book out in this domain, I immediately bought and read it. And, let me tell you, it jots well!!
This should come as no surprise to any fan of CBC radio. In my view, Nora Young’s weekly program, Spark, is probably the smartest show there is on digital culture and 21st century living. On the radio, she has this amazing ability to do deep, hard, careful thinking in a lighthearted and conversational manner. The same is true of her first book.
Academics: don’t be fooled by its informal style or paucity of footnotes. This is a meticulously crafted, authoritative investigation of one of the more interesting shifts in digital media—a study of the cultural explosion of self-tracking.
More and more, people track what they eat, or how they move. They register the places they go during the day using their cellphones, record their mood changes, rate the restaurants they’ve eaten in, track the length and pace of their runs. You can too: you can sign up for any number of online services, many of them free, that let you track the movies you’ve watched, the purchases you’ve made, the routes you have walked, or the beverages you’ve consumed. As the saying goes, there’s an app for that. More and more of us are keeping track of the statistical minutiae of daily life, leading lives that are increasingly numerically documented. But why? What is the particular pleasure in seeing daily experience converted into numbers? (P. 1-2)
Young is smart enough to anticipate the response of her more cynical readers: “I can imagine what you are probably thinking right now: that self-tracking is a kind of behavior that neurotics and narcissists engage in, a sort of digital scab-picking that most people wouldn’t even dream of.” (P. 4).
But Young understands the practice more charitably and, consequently, more profoundly. This isn’t just about Weight Watchers or the Running Room gone digital. It is a much broader swath of digital culture, one that includes everyone who has ever updated his or her status online.
As Young explains, “[w]hat is posting status updates on Facebook if not a sort of ritualized documentary practice that you freely share with others, a way of taking the shifting moments of mood and behavior and preference and activity and staking them to the ground?” (P. 4-5) “I think of the status update as a sort of Horton Hears a Who means of saying ‘I am here. I am here!’ It’s a continual registering of presence, and is, in a sense, a way of being ‘seen’ by others. It’s the urge to create the self as a documented, persistent, even curated, object.” (P. 24-25)
Her insight of the digital self as an intentionally curated object (rather than what Haggerty and Ericson have called the “surveillant assemblage”) extends the subject under investigation beyond online tracking. Hence Young’s totally awesome coinage of an intriguing new term: auto-reportage.
If I understand Young correctly, an important element of digital technologies and culture is that they permit a radically enhanced ability to create extensive bodies of documentary coverage of the individual— reportage in the journalistic sense. But in this case, the “eye witness” report is by the individual him- or herself—auto, as in ‘autobiography’ but also in the sense of ‘automatic’.
Auto-reportage is “the continual registering of attitudes, tastes, and whereabouts.” (P. 59) Young sees it as fulfilling our human predisposition to apophenia: the tendency to see patterns in random data, offering “a sense that life isn’t random or arbitrary, that, over time, the trivial acts of our mundane daily life shape a picture of who we are. We see our data bloom into patterns like a kind of emergent intelligence, becoming a self-generated portrait.” (P. 48) “This sharing self is often dismissed as narcissistic, but I don’t think that is it at all.” (P. 63)
With this, Young offers us a very different take on social media. One of my favorite aspects of this work is that it casts aside the received view that we are all stupid users who are ourselves to blame for the harms of over-sharing. Young treats her readers to a much more nuanced, original, interesting, sympathetic and persuasive account of auto-reportage, tying it back to the American Enlightenment and, before that, the European tradition of keeping diaries and journals.
Her poster-fella is none other than Benjamin Franklin who, in his Poor Richard’s Almanack, undertakes “the bold and arduous project of arriving at moral perfection,” and offers a methodology for achieving it. (P. 32.) Franklin’s project involved the enumeration of a list of virtues and a paper-based means of tracking lapses and successes. Franklin’s goal: “I should be happy in viewing a clean book, after thirteen weeks’ daily examination.” (P. 34.) Not surprisingly, Franklin’s approach was empirical and scientific. His ultimate objective was to manage his interior states by making them more objectively observable.
Young tells us that we “share with him an understanding of the self as a project to be undertaken and observed. … To aim for this personal, individual betterment, our self-tracking also shares with Franklin’s the drive toward a sort of personal accountancy. This is perhaps what is most familiar to us about Franklin’s little book; the drive to document the self, to create a vision of ourselves that we can refer to, track, and evaluate.” (P. 37)
By seeing the incredible potential for individual agency in auto-reportage, Young provides a much-needed account of our ability to transcend the superficial, egomaniacal understanding of digital culture. Instead, she thinks we should observe and understand our virtual selves as more enlightened aspirations regarding the potential for moral development through personal accountancy.
Through a series of chapters that very attentively explore the implications of the ‘data-mapped self’, the coming age of ‘big data’, and the pernicious use of legal instruments like standard form contracts to sabotage privacy, Young not only recognizes the perils of auto-reportage but also offers some interesting prescriptions.
Among them, in the final two chapters of the book, she encourages us all to become data activists. “Who says you ought to list the commodities that you are interested in as a way of describing yourself? … It’s our choice where we choose to track our data, and we can choose our tools wisely. … If the goal of these technologies is partly to give us insight into ourselves, we ought to think in a much more open-ended and critically minded way about what they are measuring and tracking.” (P. 194-95)
Although much of the book expresses important concerns about how to protect personal privacy in a world where self-tracking is mediated by corporations, Young also expresses great hope in the “potential for us to opt into using self-tracking for the public good.” (P. 198) She believes that “we can map our communities, our neighborhoods, and our lives according to the values we articulate.” (P. 199)
Part of the problem to date, she thinks, is that we have spent too much time focusing merely on the relationship between individuals and corporations. The exciting thing, she thinks, is that the data maps we create through self-tracking offer feedback loops that afford us a deeper understanding of ourselves, and how we might apply those to the world around us. It is not just about stroking consumer preferences. Self-tracking and auto-reportage enable enlightened users to connect with what we truly value as a community while, at the same time, bringing our digital selves “back to the ground, back to the physical.” (P. 203)
To the cyberlaw-types reading this review, the issues addressed in Young’s book may not appear entirely new, as they might to the uninitiated—arguably the book’s target audience. Still, Nora Young offers even the most seasoned cyberians some very fresh perspectives.
For example, her novel and compelling account of enlightened self-tracking provides an exciting counter-narrative to the superficial, ridiculous, reductionist approach adopted more and more by our courts in determining reasonable expectations of privacy. Of course we don’t abandon or waive privacy expectations whenever we auto-report. Although this may be a well-entrenched intuition for all those who reflect regularly on privacy or the 4th Amendment, Young offers a robust explanatory account of why this is so. Privacy lawyers should pay attention.
Young also expresses important concerns about the ease with which standard form contracts have displaced our ability to be data activists. According to Young, “we ought to be thinking differently about the sorts of contracts ordinary citizens sign with online companies.” (P. 178). Young further prescribes the need for new laws that limit the ability of data collectors and aggregators to use standard form contracts to undermine moral development and treat self-trackers as mere means to corporate ends.
Although law is not the primary domain of The Virtual Self, this lovely piece of intellectual prose motivates legal thinking. It has inspired me to try to tackle some of these looming social issues. I hope it does the same for you.
Sep 10, 2012 Rebecca Tushnet
Peter Decherney has written an excellent book about the ways in which copyright laws have shaped and responded to the movie industry in the US. Professor Decherney, who, not incidentally, was instrumental in achieving the first context-specific exemption for ripping DVDs (for use in teaching film studies, renewed in the 2009 cycle), has a sharp eye for the way the movie industry has exploited and reacted to law as part of its business models over time. He suggests that the usual reaction of the industry to legal rulings has been self-regulation either to confirm or to avoid the formal law, depending on what works best for the people in charge.
History repeats, not just in the oft-told story of new media relying on unauthorized copying from old media—plays into films, for example—but also in the smaller details. The relationship between technological measures designed to prevent copying and unauthorized copying, for example, goes back to the start of moviemaking, when different producers used film with different sprocket holes in order to preserve their control over their own preferred, often patented, technologies. This incompatibility didn’t deter copying, though. Instead, it led people who wanted to show movies to make their own copies to fit on their own equipment, just as technical protection measures still do today.
Decherney begins early in the movies’ history, when it was unclear whether the performances therein qualified for copyright. In some cases, legal decision-makers deemed films, which were much lighter on plot than the texts we think of as movies today, insufficiently dramatic to be legally protected. In other instances, judges considered films immoral. As is consistently the case with copyright, sex confounds the law. It was also unclear who was responsible for a recorded performance, assuming that the recording infringed someone else’s right; in one important case, a film company claimed that it wasn’t responsible for infringing the novel Ben-Hur because it had merely filmed a chariot race staged by the Brooklyn Fire Department. (The novel sparked a vogue for such recreations, which just confirms my belief that media fandom is everywhere.)
Later, studios fought with directors over artistic control. When films were first being edited for television broadcast, critics often worried over their “emasculation,” a gendered term indicating some of the cultural meanings of control over broadcast versions. As Decherney points out, the passage of time turns outrages against art into high art. Just as directors for years fended off charges that they were mutilating novels and plays in their adaptations, now directors became believers in the inviolability of their own art. Their quest for recognition as auteurs was largely successful outside the law, but largely a failure within the US legal system.
Hollywood’s history with copyright law is full of these ironies, including the studios’ fear of the VCR that ultimately brought them great riches. Decherney points out that Disney, one of the great opponents of the VCR, was a niche studio until the profits enabled by videotape sales gave it the capital to fund its next great wave of films.
More recently, Decherney argues, 1970s avant-garde filmmaking developed in the context of various assumptions about what could legally be done, especially with music. Even when these assumptions didn’t exactly follow the law, they shaped behavior. “Underground” works were ignored by copyright owners, but still used music cautiously, and their makers licensed rights in order to show them at international festivals or on TV. Kenneth Anger’s “avant-garde classic Scorpio Rising (1964) … freely used old film clips, advertisements, and cartoons. Some viewers were shocked by the sexual situations depicted in the film. Many filmmakers were more surprised by Anger’s flagrant use of popular music to create counterpoint and commentary. Anger’s 30-minute film used a ‘wall-to-wall’ string of popular hits ….” What they didn’t know was that Anger had actually cleared the rights for the songs (though apparently for nothing else). This more than doubled his budget and cost more than the total budget of most avant-garde films.
Martin Scorsese watched Anger’s film and was shocked—his NYU professors had always told him not to use music in a student film. He said: “That gave me the idea to use whatever music I really needed.” While film’s gatekeepers enforced strict rules on music, refusing to consider fair use at all, Scorsese decided to use unlicensed music in his own student films. This practice got him ready to make breakthrough uses of music, this time licensed, in his later feature films. Among the complicated lessons here is that “misinformation can be as powerful as accurate information.” Another is that tomorrow’s lasting art comes from experimentation, often experimentation perceived as illegitimate by today’s gatekeepers. When we suppress the amateur, among the costs is that we suppress tomorrow’s professionals.
Decherney also tells the story of the unusual case in which experimental video was suppressed by copyright owners: Todd Haynes’s 1987 Superstar: The Karen Carpenter Story, blocked not by Mattel but by Richard Carpenter. Haynes decided to proceed without licensing the music—based in part on his beliefs about Scorpio Rising—but was ultimately forced to stop allowing it to be shown. Of course, this all made Superstar more attractive as a bootleg, and it’s now even easier to find. Haynes’s story created its own myths about copyright and trademark overreaching among filmmakers, even though Decherney didn’t find any other instances of such legal threats until the rise of online video sites like YouTube. Hollywood in general hasn’t been very aggressive about pursuing self-proclaimed video artists, in part, Decherney suggests, because the law of fair use is “underdeveloped and highly unpredictable” in this area. In addition, the economic harm from video artists’ use is realistically nonexistent and the public relations risks are real.
Decherney argues that YouTube was not a disruptive technology because it created a video-sharing culture. Plenty of people were primed to share their videos already. Instead, he suggests, YouTube brought a number of different video-making cultures—and their expectations around copyright and fair use—into contact and occasional conflict, and made them all more visible to each other and to copyright owners. “The fans, avant-garde artists, home video makers, and other fair use communities had spent decades learning when they should worry about attracting the attention of copyright holders…. They all became subject to increased surveillance, and their cultures of fair use were homogenized as large media companies sought one-size-fits-all solutions to employing the DMCA to control copyright infringement.” This creates a need for continued scholarly and activist engagement in pushing back against the (new) norm of total copyright owner control that the industry would like to establish.
Hollywood’s Copyright Wars works as historical narrative and contemporary reminder: the law’s role in film’s creative process is not and has never been as simple as providing incentives for creation. Decherney’s readable book provides a century of evidence about the complicated relationship between film, law, and power.
Jul 12, 2012 Andres Guadamuz
Daithí Mac Síthigh, Legal Games: The Regulation of Content and the Challenge of Casual Gaming, 3 J. Gaming & Virtual Worlds, no. 1, at 3-19 (2011), available at SSRN.
Mainstream coverage of gaming regulation has usually centered on the possible danger of violent games to children, usually accompanied by stills from the latest Grand Theft Auto, Call of Duty, or Mortal Kombat to instil a righteous level of outrage in the public. The underlying message in most of these stories ranges from “something must be done about this” to “ban this filth.” Thankfully, such often uninformed commentary has not been translated into legal scholarship, where the coverage has been more nuanced. With few exceptions, authors dealing with the nascent field of gaming regulation have produced a growing body of work that is both thorough and well-written. A recent addition to the group of scholars interested in games is Daithí Mac Síthigh from the University of East Anglia in the UK, and soon to join the University of Edinburgh.
In Legal Games: The Regulation of Content and the Challenge of Casual Gaming, Mac Síthigh tackles both the public perception of games regulation in the UK, and the actual practice of such regulation. He comments that most legal studies into games fall into three categories: the study of game production and development, studies into the debate on the effects of video game violence, and more rarely discussions about copyright. Mac Síthigh accurately comments that some of the higher level discussions in gaming studies, for example, the literature that studies the ludic nature of the gaming experience, have been somewhat left out of legal and regulatory commentary in general. So, Mac Síthigh’s article is in part a response to this trend.
The article starts by describing the current practices at the British Board of Film Classification (BBFC), the entity in charge of rating video games in the UK. While one could be forgiven for thinking that this section may not be of interest to international readers, it is actually a very enlightening discussion that is relevant elsewhere for comparative purposes and because of the BBFC’s unique structure and lack of transparency.
However, the article really shines when the author turns his eye to the discussion of video game scholarship itself when contrasted to legal writing on the subject. Here Mac Síthigh shows not only that he understands the wider discussions in game studies, but also tries to draw connecting lines between the interdisciplinary research into games, and the potential legal interest from this angle. Here the author points out that there is a marked lack of understanding of the nature of games in general that could be better informed from reading games scholarly output.
Finally, the most interesting contribution to the existing analysis is the observation that legal writing on the subject has so far concentrated on what could be called the hard-core gaming experience, namely virtual worlds, first-person shooters, and role-playing games. However, the fastest growing games sector is the casual gaming market. Here, once again, the cluelessness of the regulatory sphere is on display, as the casual online games market is not regulated, and lacks any oversight or rating system. This seems like a huge regulatory black hole given the rise in popularity of games, from apps for mobile devices, such as the wildly successful Angry Birds, to the growing phenomenon of social gaming as exemplified by Facebook games like Zynga’s Farmville.
The article concludes that game regulation is made more difficult by the elusive nature of games themselves. Whenever games do not fit the traditional narratives in the media, regulators seem to struggle considerably with regulating them.
As mentioned, this article is highly recommended for anyone involved in games regulation, regardless of jurisdiction, and it may also be of interest to those whose focus is Internet regulation as a whole. For some time now, games regulation has mirrored the early debates about control in cyberspace, and this article is no exception.
Cite as: Andres Guadamuz, The Player of Games, JOTWELL (July 12, 2012) (reviewing Daithí Mac Síthigh, Legal Games: The Regulation of Content and the Challenge of Casual Gaming, 3 J. Gaming & Virtual Worlds, no. 1, at 3-19 (2011), available at SSRN), https://cyber.jotwell.com/the-player-of-games/.
Apr 13, 2012 Herbert Burkert
Julie E. Cohen,
Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, 2012), available at
juliecohen.com.
We always look for writings that make sense — both by themselves, treating their subjects adequately, and by making sense to us as cyberlaw people. These writings help us to understand better the world around us; they also give us something that the knowledge of positive law and a vague understanding of technological change alone cannot give us. And so we become what Julie Cohen names so aptly "disciplinary magpies collecting alluring bits of this and that and cobbling them together."
What is the recipe for sense-making? I see two universal elements: (i) phenomena hitherto seen separately are seen as connected in an exercise of reconfiguration, and (ii) a methodology is applied that gives us new insights into the forces that might be at work in these reconfigurations. There are two more for our special needs as law people: (iii) a normative stand, and (iv) pragmatic suggestions deriving from the new insights. The outcome is a sort of new magnifying glass that helps us to see new connections, to detect structures and processes at work, and to inspire speculation on new connections. Of course, we would not expect the new world model to replace others as the sole explanation; we are content with having obtained yet another supplement. Cohen delivers: she connects, introduces methodology, takes a normative stand, and makes suggestions, but her model is not one that would allow us to contentedly label yet another drawer and close it with satisfaction. Rather, she keeps us exposed to the unruliness of life and culture that the lawyer in us so abhors.
Cohen connects cherished, yet somewhat contradictory, cyberlaw views on copyright, on privacy, and on the design of network architecture and its access points. She exposes those contradictions as stemming from limitations of underlying ideologies, namely liberal political theory and our technology projections. She explores these assumptions in rich detail and in a well-structured rhythm using concepts drawn from cultural studies and from science, technology, and society research. She uses concepts more generally from what has become known as postmodernist approaches (although she keeps some distance from such labeling), emphasizing the importance of culture as a living amalgam of ideological, political, economic, and technological interplays in which we experience and practice our material lives as "situated" and "embodied" individuals and communities. While there seems to be a preferred immunizing tradition among legal scholars to remain somewhat coy about normative assumptions, Cohen is outspoken. She sees her method justified by joining those who seek to set the conditions for "the capabilities for human flourishing." As operators to arrive at such conditions in the legal policies of copyright, privacy, and networks, Cohen identifies "access to knowledge", "operational transparency", and "semantic discontinuity." Access to knowledge (which comprises access to networked information resources) has to be read broadly and in the tradition of social enabling rights. Operational transparency ranges from the transparency of the functioning of networks and devices to the transparency of the performance expected from them by public as well as private actors. The most interesting operator is "semantic discontinuity", which essentially means going against the grain of seamless integration, whether it is encountered in legal reasoning or in submission under authorizing regimes.
With these operators at play, copyright reemerges as the enabler of "the play of everyday practice", including its play with transgressions, and as the grantor of cultural reproduction. Privacy evolves as the management of boundaries to enable subjectivity and difference, and the control design of network architecture reappears as context-conscious bi-directional permeability. Again, it has to be emphasized that Cohen sees these readings as an encouragement to weight shifting, not as a total substitution of the currently prevailing understandings. This essentially nudging intention becomes more visible in the listings of suggested policy changes. Cohen argues — to quote just some of the copyright-related suggestions — for a recalibration of the restrictions on private use, including an interpretation of "private" that is more aware of cultural practices. Copyright law should "clearly reserve a broad range of remix privileges to users". Extraordinary social benefits should be acknowledged; indirect liability strictly limited.
In spite of the nudging intention, Cohen delivers her arguments with verve and does not suffer methodological foolishness kindly. Then again, such energy may be needed when, in a US context, you shout from non-modernist ground up against law faculty walls of prevailing opinion. While it is true that many of us here in Europe have embraced, in times of methodological drought, ways of thinking like law and economics as eagerly as Rolling Stones music (both arriving here from the US at approximately the same time), we feel we can deal with the challenges of Cohen's semantic discontinuities in a slightly more serene manner.
Making sense? The way to make sense (in the Cohen way) — as I read the text — is not by mechanistic models, but by remaining eternal ethnographers exposed to the richness of cultures. It is necessary to be always striving, thus achieving at best — again, as Cohen has put it — better stories and, as I would like to add, quoting the German ethnographer Michael Oppitz, aiming at more "Genauigkeit" (precision). And as the indigenous architects of legal policies (as we all like to see ourselves), we can only hope — again with Cohen — to avoid the pitfalls of technological determinism by securing a multiplicity of pathways of technological advancement guided by the welfare of the people.
Each review has its situated and embodied reviewer, who cannot help but look forward to moments when he finds what he has not been looking for, the small big insight that sets the mind traveling. I had referred to such "fruits of reading" in a previous review. And Cohen has not failed me either. Two of the many fruits I encountered I would like to share. The first is the thought experiment, in Cohen's pragmatics-oriented chapter nine, of treating the library as the cultural default and the bookstore as the exception: it can be read, and simultaneously so, as a pointer to the importance of default rules in social discourse as grantors of unquestioned stability as well as indicators of where to apply the levers of change. The second comes very much towards the end, where Cohen talks about the limitations of risk management by legal policies and the assumption that imperfect policies may serve as risk enhancers. Against the symphonic sound of her whole text, the term "risk management" takes on a whole new meaning for me: don't we need an understanding of risk management that literally manages risks as a cultural resource and a challenge for creativity?
Maybe I have been carried away, maybe I am only revealing my self-imposed limited access to knowledge, but isn’t it for such moments that we read books — beyond trying to make sense?
Cite as: Herbert Burkert, Making Sense, JOTWELL (April 13, 2012) (reviewing Julie E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, 2012), available at juliecohen.com), https://cyber.jotwell.com/making-sense/.